We’re continuing our series on how AI is reshaping software development by tackling one of the most pressing questions in tech hiring today:
How do you spot the difference between AI-augmented talent and candidates who rely on AI to get by?
This article builds on our previous pieces, If AI Can Write Code, What’s Left for Developers? and AI Is Changing How We Code.
We already talked about vibe coders and the risks of generating functional code without truly understanding what that code does. Now it’s time to go a step further and look at the talent behind the tools, and at how quickly tech hiring is evolving.
When everyone “looks” like a developer
The truth is that the line between someone who “looks like a dev” and someone who truly is one has gotten blurry.
To an untrained eye, a junior dev with a code assistant might seem just as effective as someone experienced. But any seasoned technical lead can tell you there is a huge difference between the two. An AI-augmented developer understands how systems behave and writes code that scales, supports the business, and prevents future problems.
As we explained in The Rise of the AI-Augmented Developer, these professionals are certified, experienced, and grounded in computer science fundamentals. They understand business logic, software architecture, and how to balance agility with long-term maintainability.
They’re pragmatic, but principled. They ship quickly, but they leave organized documentation. They know what needs to be fast—and what needs to be future-proof. Their prompts are precise, their solutions scalable, and their mindset architectural.
This article, created in collaboration with Pablo Bucci, Recruiting Lead at Inclusion Cloud, explains how to identify and hire this kind of top-tier talent.
Because trying to deny that people use AI at work is a battle already lost: this technology is here, and it’s becoming more embedded in our day-to-day. At Inclusion Cloud, we see AI not just as a trend, but as a foundational force that will redefine how businesses build, scale, and deliver digital products.
That’s why we’ve adapted our recruiting engine to this new reality. We integrate AI into the process, but always from an ethical, responsible perspective—and, above all, as a way to create real value for our clients.
What’s Changing in Tech Hiring Right Now?
As tech hiring evolves, remote processes are being reexamined—especially as it becomes harder to validate who’s really doing the work. The rise of AI has introduced a new challenge: distinguishing between people who work with AI and those who let AI do the work for them.
In interviews, some patterns are hard to ignore—long pauses, answers that sound rehearsed, or a noticeable struggle when the conversation shifts to real-world situations or cross-functional dynamics. These moments often hint that the candidate is leaning on an AI assistant in the background, rather than drawing from their own experience.
This issue becomes even more obvious with take-home coding challenges. “Too often, we get generic answers that lack context, documentation, or any real architectural reasoning,” says Pablo. These are clear signs that the candidate may be relying on AI without understanding the structure behind what they’re building.
He adds, “We then cross-check that information with their resume, LinkedIn profile, and other public data. We look at what projects they’ve actually worked on, what their role was, what technologies they used, and what goals they helped achieve.”
And that’s where many teams run into real problems: hiring someone who performs well on a test, only to realize days or weeks later that they can’t handle the complexity of a real-world environment.
At Inclusion Cloud, we believe this shift is just getting started.
In the coming years, we expect AI skills to become a standard part of the hiring process. Companies will not only assess technical and soft skills but will also evaluate how effectively candidates collaborate with AI tools.
This means looking at prompt engineering, tool fluency, and the ability to use AI to accelerate delivery while still producing scalable, testable, and well-structured code. Knowing how to ask the right questions, use context, and guide an assistant is already becoming a key differentiator in identifying top-tier developers.
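To make that concrete, here is a minimal, hypothetical sketch of what that difference looks like in practice. Nothing below is a prescribed template; the stack, field names, and rules are invented for illustration. The point is simply that a strong developer gives the assistant the same context they would give a teammate: stack, constraints, and acceptance criteria.

```python
# Hypothetical illustration: a vague prompt vs. a context-rich prompt.
# Neither reflects a specific tool's API; both are plain strings a developer
# might paste into any code assistant.

VAGUE_PROMPT = "Write a function that validates user input."

CONTEXT_RICH_PROMPT = """
You are helping on a Python 3.11 FastAPI service.

Task: write a Pydantic model that validates a 'create_invoice' payload.

Constraints:
- amount must be a positive Decimal with at most 2 decimal places
- currency must be one of: USD, EUR, BRL
- due_date must be today or later (timezone: UTC)

Also include:
- unit tests using pytest covering one valid and three invalid payloads
- a short docstring explaining each validation rule
"""
```

The second prompt is longer, but it is also the one that produces code a reviewer can actually check against requirements.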
AI is here to stay—and that’s a good thing (if used right)
Let’s be clear. There’s no turning back. People are already using AI in their daily work, and it’s not going away.
We don’t believe AI should be restricted or banned. When used with purpose and expertise, it can be one of the most powerful accelerators of productivity we’ve seen in decades.
That’s why we’ve fully embraced it—not just as a tool, but as a necessary evolution in how software is built and how talent should be evaluated. And again: we believe that, soon enough, assessing AI fluency will be as standard as testing for technical fundamentals or soft skills.
This shift is already underway, and companies that fail to adapt their hiring strategies may fall behind—not because they lack AI tools, but because they lack the right people to guide them.
Let’s start by breaking down the different ways AI is currently being used in software development.
Two Types of AI Usage in Development
Pablo explains that developers tend to fall into two main categories when it comes to AI use, and each one requires a different approach during the hiring process:
1. Low-code platforms with embedded AI
These are enterprise systems like ServiceNow, Oracle, SAP, or Salesforce, where much of the work is performed through configuration, drag-and-drop logic, and predefined components. AI is often built into the platform to assist with automation and speed up delivery.
“In these environments, most of the heavy lifting is done by the platform itself,” Pablo explains. “What matters is asking what automations the candidate built and how. But it’s just as important to know if they understand the business and workflows, because that’s how you know if the automation made sense.”
Pablo says identifying top talent in these environments is usually more straightforward because there are clearer benchmarks. “If they’re certified and have experience working on real projects, you can dig deeper: what did they build, what problems did they solve, and how would they approach similar situations today?”
These conversations allow recruiters to go beyond buzzwords and understand the candidate’s actual involvement. By presenting hypothetical scenarios or discussing platform-specific edge cases, it becomes clear whether the developer understands the structure—or was simply following pre-set steps.
2. Generative AI and custom code
The second category is more complex. It includes developers using AI tools like ChatGPT or GitHub Copilot to generate custom code—whether backend logic, frontend components, or full-stack services.
“This is where things get trickier,” Pablo notes. “We’ve had interviews where candidates are clearly reading AI-generated responses or taking long pauses because they’re prompting something in another tab.”
That’s when his team changes approach. “In those cases, we ask real-world questions where they need to apply best practices or explain how they solved a specific problem in a project. Then we cross-check that with their resume and LinkedIn. That’s where you really see if they were hands-on or not.”
Validating these candidates requires deeper technical assessment. There’s no certification for good architecture or clean abstractions, and, as Pablo noted earlier, generic answers with no context, documentation, or real architectural reasoning are a common giveaway.
To properly assess candidates in this category, the team relies on live coding challenges and peer-level technical interviews. “We observe how they solve problems, how they explain their thinking, and whether the solution makes sense. That tells us a lot more than just seeing lines of code.”
How We Built a Recruiting Engine for the AI Era
We didn’t want to stand still and wait for the wave to crash. We made an early decision to adapt our recruiting engine to meet the demands of this new era of software development.
Broadly speaking, we’ve built our current engine through two key stages.
In the first stage, we developed our AI-powered recruiting engine to identify top talent across the pipeline. We use advanced AI tools to match job requirements with the more than one million candidates in our database. The engine analyzes everything from resumes to public data sources like LinkedIn and GitHub, aiming to find the best fit based on data and algorithmic predictions of future performance.
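For readers curious about the mechanics, the sketch below shows one generic way to rank candidates against a role using text similarity. It is a simplified illustration, not a description of our actual engine: the TF-IDF approach, the profiles, and the field names are all stand-ins for what a production pipeline would use.

```python
# Generic illustration of text-based candidate/role matching, NOT a
# description of Inclusion Cloud's engine. TF-IDF + cosine similarity stands
# in for whatever embedding model a real pipeline would use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = (
    "Senior backend engineer, Python, event-driven architecture, "
    "experience integrating Salesforce and building REST APIs."
)

# Hypothetical candidate summaries pulled from resumes / public profiles.
candidates = {
    "candidate_a": "Java frontend developer, React, some CSS frameworks.",
    "candidate_b": "Python backend engineer, Kafka, Salesforce integrations, REST.",
    "candidate_c": "Data analyst, SQL dashboards, Tableau reporting.",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description, *candidates.values()])

# First row is the job; remaining rows are the candidates.
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
for name, score in sorted(zip(candidates, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```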
In the second stage, we expanded our candidate profiling to include how each professional performs using AI. Beyond the traditional technical and soft skills, we now evaluate AI usage as a key indicator of adaptability and long-term potential. It’s become essential for identifying developers who can truly thrive in this new landscape of software development.
Once we’ve identified a potential match, the human side of the process begins. Our HR team leads interviews designed to go far beyond the resume. We focus on how a candidate communicates, how they explain their decision-making, and how they’ve collaborated with teams in the past. Cultural fit and adaptability often tell us more about long-term success than technical skills alone.
Then comes the technical deep dive. Every candidate is evaluated by our own senior engineers—not by someone reading from a script—through conversations grounded in real-world challenges. We ask how they made architectural decisions when scaling a payment system under high traffic, or how they handled version conflicts during a system migration. These discussions reveal how candidates work under pressure, how they evaluate trade-offs, and whether they can tie technical decisions back to business value.
We also take a close look at how they incorporate AI into their workflow. Do they understand prompt engineering as more than trial and error? Can they refine outputs to meet architectural and security standards? We ask which models they typically work with—GPT-4, Claude, open-source options—and why. This helps us assess how well their tools and approach align with a company’s existing stack and governance model.
Sometimes we go even deeper. How would they prompt a language model to build a data transformation pipeline? Would they trust it to write production-ready Terraform scripts? If not, what steps would they take to review, test, and refactor? The answers reveal not just their level of AI proficiency, but also how they think—how they evaluate risks, apply standards, and determine what still requires human oversight.
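To give a flavor of the kind of answer we like to hear, here is a small hypothetical sketch, not a required workflow: an AI-augmented developer treats generated code as a draft and pins down the business rules with tests before trusting it. The function and the rules below are invented for illustration.

```python
# Hypothetical example of reviewing AI-generated transformation code.
# Assume the assistant produced normalize_orders(); the developer's job is to
# encode the business rules as tests before the code goes near production.
from decimal import Decimal

def normalize_orders(rows: list[dict]) -> list[dict]:
    """Drop cancelled orders and convert amounts to Decimal (AI-generated draft)."""
    cleaned = []
    for row in rows:
        if row.get("status") == "cancelled":
            continue
        cleaned.append({**row, "amount": Decimal(str(row["amount"]))})
    return cleaned

def test_cancelled_orders_are_dropped():
    rows = [{"id": 1, "status": "cancelled", "amount": 10.0},
            {"id": 2, "status": "paid", "amount": 25.5}]
    assert [r["id"] for r in normalize_orders(rows)] == [2]

def test_amounts_become_exact_decimals():
    rows = [{"id": 3, "status": "paid", "amount": 19.99}]
    assert normalize_orders(rows)[0]["amount"] == Decimal("19.99")
```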
What we’ve learned is this: good developers can generate code. Great developers design systems.
What to Look For: Spotting AI-Augmented Developers
So how do you tell the difference between someone who uses AI well and someone who simply leans on it?
According to Pablo Bucci, the key is digging beyond resumes and surface-level answers. “The first thing I look for is context,” he says. “Anyone can generate working code. But the real value is in understanding why that code was written that way, what trade-offs were made, and how it fits into the bigger system.”
Here’s how Pablo and the Inclusion Cloud team evaluate whether a candidate truly fits the profile of an AI-augmented developer.
1. Ask for specific examples
“We avoid vague or theoretical questions,” Pablo explains. “We ask about real scenarios. What was the business problem? What technical decisions did they have to make? What would they do differently now?”
This helps reveal whether the person actually contributed to the solution, or if they’re just repeating what they’ve seen elsewhere.
2. Check for technical consistency
“We always cross-check their answers with their public profiles. Resume, GitHub, LinkedIn,” says Pablo. If they describe backend decisions but everything on their resume is frontend work, that’s a red flag. The story needs to match the skills.
This not only validates experience but also helps surface exaggerated claims early in the process.
3. Use live code challenges
This is where real differences start to show. As Pablo puts it, “We use live code challenges to observe in real time how candidates perform. If they’re typing in another window, taking too long, or constantly searching for things, it becomes obvious.”
We understand that live challenges can make people nervous—and we take that into account. But it also helps us shift the focus away from what might be happening in another tab and toward how the candidate actually thinks, reacts, and solves problems.
Pablo also highlights the importance of video calls with camera on: “We ask candidates to have their camera on so we can observe if they’re reading, if they respond genuinely, how they react, and how long they take to answer. All of that helps us evaluate whether the person is really present and whether they have mastery over what they’re saying.”
4. Review the technical depth of their answers
Not all working code is good code. That’s why our team looks closely at the reasoning behind a candidate’s decisions—not just the outcome.
Generic or vague solutions are often a red flag that an AI tool was used without proper validation. Pablo notes this is especially common during technical assessments: “Our technical team reviews every submission and can tell when a solution makes sense and when it’s just generic—which is often the case with AI-generated answers.”
That’s why we present candidates with more specific challenges, like building API integrations, connecting with external systems, or solving scenarios based on actual use cases.
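As a rough idea of the scale of these challenges, here is a hypothetical sketch of what a solid submission to a small integration exercise might look like; the endpoint, payload shape, and retry policy are all made up for illustration, not taken from a real client system.

```python
# Hypothetical take on a small integration challenge: fetch invoices from an
# external REST API with timeouts, retries, and explicit error handling.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def fetch_invoices(base_url: str, api_key: str) -> list[dict]:
    session = requests.Session()
    retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[502, 503, 504])
    session.mount("https://", HTTPAdapter(max_retries=retries))

    response = session.get(
        f"{base_url}/v1/invoices",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    response.raise_for_status()  # surface 4xx/5xx instead of failing silently
    return response.json()["data"]
```

What reviewers look for in something like this is not the happy path but the judgment calls: timeouts, retry limits, and whether errors are surfaced or swallowed.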
5. Pay attention to non-verbal details
“In remote interviews, the camera must be on,” Pablo says. “You can tell when someone is reading. There are strange silences or oddly paced answers.” It’s subtle, but over time, you learn to spot when someone isn’t fully present or engaged.
These small cues often reveal whether a candidate truly owns their thinking—or is quietly relying on an AI tool in the background. And to be clear, we’re not against people using AI. But during this stage of the interview, we’re evaluating their problem-solving skills. The discussion about how they use AI to enhance their work comes later.
6. Test communication and ownership
Technical skills are just part of the picture. Pablo emphasizes the importance of situational questions that reflect real-world collaboration.
“I’ll ask something like, ‘What would you do if a delivery is delayed and the issue came from another team?’ The answer shows you how they think under pressure, how they communicate, and whether they take responsibility.”
We’re not just hiring people who write clean code. We’re hiring people who can work with others, manage complexity, and contribute to building lasting digital products.
7. Always include a technical evaluation by senior engineers
Last but not least, every candidate at Inclusion Cloud goes through a rigorous technical evaluation led by our own senior engineers. This is what truly sets our process apart.
As Pablo puts it, “What makes our method different is that it’s not just HR doing the screening. First, we conduct internal validations—and then, there’s always a second round with a technical expert. It’s a double filter, and it’s always technical.”
The goal of this second step is to simulate the kind of challenges and dynamics that developers face on client projects. Our team brings up actual architecture problems, legacy constraints, or integration scenarios that require more than just working code; they demand thoughtful engineering.
One of our most effective tools at this stage is the live coding challenge.
“We observe how the candidate performs in real time,” Pablo explains. “If they’re switching screens, taking too long, or relying heavily on search, you can quickly see their level.”
We also listen for overly polished answers. “If the responses feel rehearsed, we switch gears,” he adds. “We might ask how they’d communicate a delay caused by another team. It’s a simple question, but one that shows whether they’ve really worked in a collaborative, real-world environment.”
This technical validation step is a core part of what allows us to deliver senior, certified, and field-tested developers to our clients.
Why Senior, Certified Engineers Still Matter
Everyone seems to have an opinion about what will happen to software developers as AI takes on a more central role. Some believe that AI—which, in Mark Zuckerberg’s words, can already code at the level of a mid-tier developer—will completely replace junior roles. Others argue the opposite: that AI will act as an accelerated learning tool for juniors, helping them improve faster, almost like a personal tutor. In that scenario, it’s mid-level developers who may be most at risk.
But if there’s one thing most people agree on, it’s that senior developers will become even more valuable.
As AI lowers the barriers to entry, more people, startups, and—of course—mid-sized to enterprise-level organizations will be able to build apps, tools, and workflows. That will lead to a surge in new projects hitting the market. And with that, the demand for certified, experienced engineers with deep knowledge of computer science will rise sharply.
Why? Because when AI handles more of the soul-crushing tasks, these senior engineers can focus on designing innovative solutions that deliver value to their organizations.
At the same time, they play a critical role in setting the architectural direction. They help prevent costly mistakes, like growing technical debt caused by rushed or poorly structured code.
They’re the ones who can spot flaws in AI-generated code that others might miss. Their experience gives them the context to say: “This works, but it won’t scale” or “This logic introduces a risk we’ll regret later.”
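A tiny, made-up example of what that looks like in practice: both functions below return the same result, but the first is the kind of “works, but won’t scale” code an experienced reviewer flags immediately.

```python
# Illustrative only: same output, very different behavior on large inputs.

def unique_emails_slow(emails: list[str]) -> list[str]:
    seen = []
    result = []
    for email in emails:
        if email not in seen:      # O(n) lookup inside a loop -> O(n^2) overall
            seen.append(email)
            result.append(email)
    return result

def unique_emails_fast(emails: list[str]) -> list[str]:
    seen = set()
    result = []
    for email in emails:
        if email not in seen:      # O(1) average lookup -> O(n) overall
            seen.add(email)
            result.append(email)
    return result
```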
Need developers who think beyond the prompt?
AI is changing the way we build digital products, but it hasn’t really changed what makes a great developer. Certified, senior engineers are still the ones who move the needle. What’s new is how proficient they are at using AI to amplify the strengths they already bring to the table.
What is changing, though, is how we hire this kind of talent: the AI-augmented developer. You need a recruiting engine built for today’s reality, one where AI is part of everyday workflows and where the gap between generated code and engineered code keeps growing.
We don’t pretend developers aren’t already using these tools. We know they are. That’s the future.
Need help finding the kind of talent that’s built to lead in this new era? Contact us!