What Do IT Leaders Think About AI in Interviews?

A few days ago, we asked a question on Reddit that’s been coming up more and more in conversations with hiring teams.

Would you allow candidates to use AI tools during a technical interview?

We were careful about how we framed it. The idea wasn’t to talk about candidates hiding tools, impersonating someone else, or trying to game the system. The scenario was much simpler: a candidate using AI openly as part of how they work through a problem, the same way many engineers already do in their day-to-day work.

That framing matters because it shifts the discussion away from fraud and into something more uncomfortable.

What are you actually trying to evaluate in an interview?

Are you trying to simulate real working conditions, where AI is already part of the workflow? Or are you trying to isolate how someone performs without any assistance at all?

The answers didn’t point in a single direction, but they did reveal a few clear ways of thinking about the problem.

LLMs Changed Technical Interviews. We’re Still Figuring Out What to Expect

Before getting into how IT leaders are thinking about this, it’s worth taking a step back and adding a bit of context.

Since 2022, when large language models became widely accessible, the expectations around hiring in tech have started to shift in a meaningful way. Not just in how candidates perform during interviews, but in what companies can realistically expect to evaluate in the first place.

That shift touches both sides of the process.

On the technical side, tasks that once required recall, experience, or time can now be assisted in real time. Writing code, structuring a solution, even debugging approaches—things that used to signal depth of knowledge—can now be partially generated, suggested, or refined on the fly.

On the softer side, something similar is happening. Communication itself can be drafted, polished, and even fed live into a conversation. There are already tools designed to “listen” during interviews and suggest responses in real time, effectively turning the interaction into something closer to a guided performance than a spontaneous exchange.

Some estimates already suggest that by 2028, a significant share of job candidates globally could be heavily AI-assisted, or even entirely fabricated. Not every case falls into that category, but the direction is hard to ignore.

Inside companies, expectations have also shifted.

AI is no longer optional in many teams. In some environments, it’s encouraged. In others, it’s directly measured. At organizations like JPMorgan, internal dashboards track how often engineers use tools like Copilot or Claude, creating visibility into adoption and usage patterns.

That creates a tension that didn’t really exist before:

The job assumes AI.
The hiring process is still figuring out how to deal with it.

We Spotted a Fake Candidate During an Interview

Some time ago, we interviewed a candidate who, on paper, looked like a strong match for the role.

The experience was there, the profile made sense, and there were no obvious gaps in the résumé. The first red flag came later, during a cross-check against the candidate’s LinkedIn profile: no profile picture and only a handful of connections. Unusual, but still within the range of something you could explain. Maybe it was just someone who doesn’t care much about social media. That alone isn’t enough to rule a candidate out, so we kept digging.

That’s part of what makes this new hiring environment so tricky. Not every suspicious profile has the same motivation behind it. In some cases, candidates are simply under pressure to land a job in a tighter tech market and use AI or outside help to present themselves in the best possible way. In others, the stakes can be much higher. Some fake candidates are not just trying to get hired, but to gain access to company systems, customer data, or intellectual property once inside. At that point, the risk goes beyond hiring the wrong person; it becomes a potential security, compliance, or data exposure issue.

The interview itself felt slightly off from the start.

The candidate mentioned being from a specific region, but a few details didn’t quite align. The way they spoke and presented themselves didn’t fully match what you would expect from someone who had lived their whole life there. Not in a definitive way, but enough to raise an eyebrow early on.

We kept going.

From a technical standpoint, the conversation was fine. The candidate communicated well, the answers were coherent, and their level of English was good, fluid enough to handle the interview comfortably.

But those small inconsistencies didn’t go away. If anything, they started to stack.

At that point, we shifted the conversation on purpose.

Instead of staying in technical questions, we moved into something more informal to put a bit of pressure on those inconsistencies and see if things would naturally align. We asked about everyday things from that region (places, food, small cultural references).

That’s when the pace changed.

Answers started taking longer. Small pauses before each response. And after a few minutes, a pattern became pretty clear: a quick glance to the side, then the answer.

It didn’t feel like someone thinking.

It felt like someone checking something off-screen.

At that point, we already had a pretty strong signal. But as a matter of protocol, we needed to be sure before cutting the conversation short.

So we did one final check.

We asked to switch to the native language of the region the candidate claimed to be from. They had already said (more than once) that they were born and raised there, so this should have been straightforward.

There was a pause.

A slight, uncomfortable smile.

“I’d prefer to continue in English.”

We insisted.

“I can’t.”

That was enough to end the interview.

What IT Leaders Are Saying About AI in Interviews

Situations like this are becoming more common.

Not always this obvious, and not always with the same intent, but common enough that most hiring teams have started to develop a sense for when something doesn’t fully add up. The signals are subtle, but they repeat: slight delays, overly polished answers, small inconsistencies that only show up when you go off script.

At the same time, there’s another side to this conversation.

Some companies are starting to take a very different approach. Instead of trying to remove AI from the interview process, they’re leaning into it. The argument is simple: if AI is going to be part of the job, then the interview should reflect that reality.

Companies like Canva and Meta have openly experimented with allowing candidates to use AI during technical interviews, treating it as just another tool, something closer to how work actually gets done once someone joins the team.

On one side, there are more cases of candidates relying on AI in ways that are hard to detect (or intentionally hidden). On the other, companies are asking whether banning AI even makes sense anymore.

So we asked IT leaders on Reddit a simple question:

Would you allow candidates to use AI during a technical interview?

1) “During an interview? Absolutely not. I need to understand how you think.”

This was one of the most immediate reactions.

The concern here isn’t really about AI itself, but about losing visibility into the candidate’s thinking process. For many hiring managers, the interview is one of the few moments where they can observe how someone approaches a problem without assistance.

Once AI enters that space, that signal becomes harder to isolate. You're no longer evaluating just the candidate; you're evaluating how well they can collaborate with a tool.

And for teams that prioritize fundamentals, that’s a problem.

2) “What matters is whether they understood what they did.”

Others framed it differently.

AI is already part of the job, whether we like it or not.

So instead of trying to remove it from the interview, the focus shifts toward understanding. The idea is that AI can help generate an answer, but it can’t replace the ability to explain and defend that answer.

A candidate who understands what they’re doing should be able to explain why a solution works, what trade-offs were made, and where it could break. They should also be able to adapt it if you change the constraints or introduce a new variable.

If anything, allowing AI can actually accelerate the detection of weak candidates. Someone who is just copying or reading answers will struggle the moment the conversation goes off script. They won't be able to explain the reasoning, defend their choices, or connect the solution to real-world scenarios.

3) “You could have a section with AI and one without.”

Some responses moved away from the binary.

Rather than deciding for or against AI, they suggested changing the structure of the interview itself: one section focused on fundamentals without assistance, and another where AI is allowed, to see how the candidate operates in a more realistic setting.

What These Answers Are Really Telling Us

What’s interesting is that the disagreement isn’t random.

Most responses are reacting to the same underlying change.

On one side, teams still want to understand how someone thinks, how they break down problems, and how they perform when things don’t go perfectly.

On the other side, there’s a growing awareness that the job itself has changed. AI is already part of the workflow, and pretending otherwise creates an artificial environment that doesn’t reflect reality.

That’s why the conversation keeps circling around the same point.

It’s not really about whether AI should be allowed.

It’s about what you’re trying to evaluate in the first place.

The Part Everyone Seems to Agree On

Despite the different positions, there’s one boundary that comes up consistently.

“If they’re using it, they should be transparent about it.”

Using AI openly is one thing.

Trying to hide it is something else entirely.

That’s where most hiring teams draw the line.

Because at that point, the issue is no longer technical; it's about trust.

And once trust is compromised, the rest of the evaluation loses meaning.

So What Are We Actually Evaluating Now?

The line that matters most is no longer between “uses AI” and “doesn’t use AI.”

It’s between someone who understands what they’re doing and someone who is just moving through the process by copying and pasting. Between a “vibe coder” and an engineer who understands architecture, trade-offs, and best practices.

What we’re really trying to understand is simple: can this person solve the kinds of problems they’ll face inside a company?

Because that’s the job.

Certifications and educational backgrounds can give us useful signals. They help us understand what kind of exposure a candidate has had, what tools they’ve worked with, or what environments they’ve been in. But they don’t tell the full story.

We need to see how candidates think, how they explain their decisions, and how they handle real situations. And now, we also need to understand how they use AI.

Getting external help isn’t new.

For years, developers have relied on Stack Overflow, documentation, and forums. AI is just the next step. That’s why we’ve had to adapt how we hire.

At Inclusion Cloud, we use a strict double-validation process. On one side, our HR team focuses on communication, consistency, and background checks, making sure what the candidate says actually holds up. As in the case we shared, they're trained to spot signals such as undeclared AI use or inconsistencies in a candidate's story.

Then we go deeper.

Our technical leaders run practical evaluations that simulate real working conditions.

If you want to go deeper into how we approach this, you can check out inMOVE™ by Inclusion Cloud, our recruiting engine built for this new reality.
