AI in Recruitment: Friend or Foe?

An amplification of our shared human intelligence, AI is changing the world around us and helping our civilization to flourish in new ways. But it isn't without risk. Used improperly, AI can become a wildly destructive force. Take the use of AI in recruitment: used unthinkingly, it can reinforce the very bias it was built to overcome and lead to poorer outcomes for recruiters, hiring managers, and candidates alike.

This article will explore the potential pitfalls of AI in tech recruitment and offer insight into how the Code Pilot team has attempted to develop a platform free of them.


A report from Tractica estimates that the market for AI will be worth $59.8 billion worldwide by 2025. It's little wonder when you consider that the technology is driving efficiencies across almost every imaginable industry while also creating fertile ground for new ones to grow.

The recruitment industry is no exception.

After steady growth, this gigantic industry is set to reach an estimated worldwide value of $334.28 billion by 2025, according to QY Research, in part thanks to the influence of AI.

As more and more AI systems emerge, Dr. Terri Horton of TLT Consulting estimates that 65% of fundamental recruiting tasks – such as resume screening and interviewing – will be handed over to them. But there is a danger here.

Many AI advocates – ourselves included – believe that when AI is implemented pragmatically it has the potential to improve outcomes for recruiters and candidates alike. But there are serious risks involved in relying too heavily on AI in the wrong areas.

Here are the pitfalls that recruiters must avoid when using AI for recruitment.

1. The Danger of Keyword Searches

As the recruitment process becomes more automated through artificial intelligence, those systems will make more and more invisible judgments automatically, running basic keyword searches across thousands of candidates in mere seconds.

This process will save recruiters a great deal of time and money, but it could be bad news for candidates if the results, or the basis for these decisions, are never shared with them.

Not only would this be highly unfair, it would also lead to a reduction in the quality of candidates, who would become increasingly detached from the expectations of recruiters and organizations.
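To make the failure mode concrete, here is a deliberately naive sketch of the kind of keyword screen described above. The keyword list, threshold, and candidate data are hypothetical; real systems are more elaborate, but the core problem is the same: candidates are dropped silently, with no feedback on why.

```python
import re

REQUIRED_KEYWORDS = {"python", "kubernetes", "microservices"}  # hypothetical job keywords
MIN_MATCHES = 2                                                # hypothetical pass threshold

def keyword_score(resume_text: str) -> int:
    """Count how many required keywords appear anywhere in the resume."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return len(REQUIRED_KEYWORDS & words)

def screen(resumes: dict[str, str]) -> list[str]:
    """Return the names that pass the threshold; everyone else vanishes without explanation."""
    return [name for name, text in resumes.items() if keyword_score(text) >= MIN_MATCHES]

candidates = {
    "Alice": "Built Python microservices and deployed them on Kubernetes.",
    "Bob": "Wrote distributed container-orchestration tooling in Go.",  # same skills, different vocabulary
}
print(screen(candidates))  # ['Alice'] -- Bob is rejected and never learns why
```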

This issue extends beyond the initial sorting phase of candidates and even into the initial interview phases. Platforms like Mya aim to offer recruiting support at scale, but without a defined method for providing accurate and fair feedback to candidates, this trend toward automation through AI could have negative effects far into the future.

2. Social Media Used for Screening

Some in the recruitment industry are optimistic that examining a candidate's social profile through the lens of big data can reveal profound truths that AI is capable of uncovering. Not only is this ethically problematic – candidates should submit their social profiles for consideration of their own volition, and no individual should be reduced to their social media presence – it is also laden with the potential for error.

Let's say a recruiter would like to figure out which Facebook pages are most popular among intelligent candidates with a college education. This could be a useful piece of information and a good indicator of a candidate's personal traits.

They would discover that liking curly fries on Facebook is an indicator of high intelligence.

Homophily is a concept that essentially means "birds of a feather flock together". The person who started the Facebook page for curly fries happened to be a smart, college-educated individual, and their smart, college-educated friends liked the page too. The chain continued through similar social circles until liking curly fries on Facebook became a statistical indicator of intelligence – not because of anything about the fries themselves (delicious as they obviously are).

But this is problematic and could lead to homogeneity throughout organizations if the same criteria are used by AI systems. Certain groups would find themselves excluded simply because they didn't like the correct fry on Facebook or listened to Taylor Swift a few too many times.
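To illustrate the trap, here is a toy sketch of how a social-media proxy can distort a screen even when two candidates are equally skilled. Every name, feature, and number below is invented purely for illustration.

```python
candidates = [
    {"name": "A", "skill_score": 90, "likes_curly_fries": True},   # in the original social cluster
    {"name": "B", "skill_score": 90, "likes_curly_fries": False},  # equally skilled, different cluster
    {"name": "C", "skill_score": 55, "likes_curly_fries": True},
]

def proxy_rank(candidate: dict) -> int:
    """A screen that leans on the social-media proxy instead of the skill itself."""
    return candidate["skill_score"] + (20 if candidate["likes_curly_fries"] else 0)

for c in sorted(candidates, key=proxy_rank, reverse=True):
    print(c["name"], proxy_rank(c))
# A 110  <- boosted by the proxy
# B  90  <- identical skills, scored down for belonging to a different social circle
# C  75
```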

3. Shallow Data Pattern Matching

AI systems are built by humans, and they carry our fingerprints – along with the very biases we aim to eliminate.

Case in point: Amazon recently shut down an AI recruiting tool that had demonstrated a bias against women. The system had been seen as a holy grail, an attempt to extend the automation that made Amazon's marketplace so successful.

But because the system was built upon patterns found in the resumes submitted to the company over a 10-year period, it reflected the male dominance of the tech industry and unknowingly perpetuated it.

This shows that even though many believe machines are free from bias, they can be imbued with our own.
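A toy sketch makes the mechanism clear: when the training data consists of past hiring decisions that were themselves skewed, a model that simply learns the historical pattern reproduces the skew. The data below is entirely invented and grossly simplified.

```python
# Hypothetical historical outcomes: past decisions favoured group "M" regardless of skill.
past_hires = [
    # (skill, group, hired?)
    (0.9, "M", True), (0.6, "M", True),
    (0.9, "W", False), (0.7, "W", False),
]

def learned_hire_rate(group: str) -> float:
    """The 'pattern' a naive system extracts from history: the hire rate per group."""
    outcomes = [hired for _, g, hired in past_hires if g == group]
    return sum(outcomes) / len(outcomes)

print(learned_hire_rate("M"))  # 1.0 -- history says this group is always a 'fit'
print(learned_hire_rate("W"))  # 0.0 -- equally skilled candidates are scored down
```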

4. AI beyond Recruiter Understanding

The concept of technological singularity – the point at which artificial intelligence learns to modify itself – was reserved for the realm of science fiction until recently, but it is becoming increasingly easy to see our path towards the last thing we humans might ever invent.


The prospect of technological singularity has caused a stir among technologists and they’re looking for solutions to help humans retain autonomy. Take Elon Musk’s “Neuralink” solution, for example.

The point of Musk's "high bandwidth and safe brain-machine interfaces" is to help humans better compete with artificial intelligence. Without them, we will be at its mercy, with no way to understand the decisions it makes or the ways it chooses to evolve.

This is already a concern for AI used in recruitment. If decisions are being made by sophisticated systems that recruiters themselves do not understand, recruiters stand to lose control over those systems quickly and run the risk of being handed unsuitable candidates.
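One practical safeguard is to keep decisions inspectable: if the scoring model can explain which factors drove each recommendation, recruiters retain some oversight. Here is a minimal sketch for a simple linear scoring model; the feature names and weights are hypothetical, and real systems would need far richer explanations.

```python
WEIGHTS = {"code_assessment": 0.6, "scenario_assessment": 0.3, "years_experience": 0.1}  # hypothetical

def score_with_explanation(candidate: dict) -> tuple[float, dict]:
    """Return the overall score plus each feature's contribution, so a human can review it."""
    contributions = {feature: WEIGHTS[feature] * candidate[feature] for feature in WEIGHTS}
    return sum(contributions.values()), contributions

total, reasons = score_with_explanation(
    {"code_assessment": 0.8, "scenario_assessment": 0.5, "years_experience": 0.4}
)
print(f"score = {total:.2f}")
for feature, contribution in sorted(reasons.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {contribution:+.2f}")  # the recruiter sees *why*, not just a number
```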

How Code Pilot Uses AI

The Code Pilot platform has been designed to avoid these common pitfalls by training our AI against goals rather than human-annotated training sets.

We started by wiping the slate clean. Rather than marking up resumes for keywords, as many companies in this space that claim to use AI do, we took a behavioral-science approach: we use proprietary assessments, such as scenario assessments or code assessments, and extract telemetric signal data from the assessment output.

We then take similar outputs from the hiring manager's job profile and create a neural network whose goal is to find the shortest distance between the hiring manager's input and all the candidate models we have in our database.

This approach is referred to as reinforcement learning because the network learns through incentivization rather than from human-labelled data; together with the interaction signals collected from our applications, this helps ensure our models aren't shaped by human bias. It also means the neural network's goal is aligned with the hiring manager's, based on signal criteria that are behaviorally oriented rather than the traditional, largely personality-based attributes.
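Code Pilot's internal pipeline isn't reproduced here, but the matching idea described above can be sketched in a few lines, under the assumption that both the hiring manager's job profile and each candidate's assessment signals have already been encoded as fixed-length vectors (the encoder itself, and the reinforcement-learning training loop, are out of scope). The data and dimensions below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
candidate_vectors = rng.normal(size=(1000, 16))  # one 16-dimensional signal vector per candidate
job_profile = rng.normal(size=16)                # the hiring manager's encoded job profile

# "Shortest distance" matching: rank candidates by Euclidean distance to the job profile.
distances = np.linalg.norm(candidate_vectors - job_profile, axis=1)
best_matches = np.argsort(distances)[:5]

print("Closest candidate indices:", best_matches)
print("Their distances:", np.round(distances[best_matches], 3))
```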

The Future

So why don't more hiring platforms leverage this approach? For one, it's very new. Applied data scientists at large companies like Microsoft or Google have access to state-of-the-art algorithms and technologies, together with massive data sets like search indexes and content catalogs, and so are able to hone their abilities in this space. Outside of these companies, it's harder to build the experience and knowledge needed to meaningfully exploit deep learning capabilities.

I mean, let’s face it, most hiring websites struggle to maintain their CSS files let alone add anything remotely intelligent into their platforms.

Secondly, you have to design your platform with an AI-first mentality. As with security, user experience, and scale, bolting AI onto a system after it's built is near impossible. It has to be designed in upfront, from the data model, through the UX, and even down to the CI/CD process. Again, this is something that is difficult for entrenched companies to do, but much easier for a startup.

Over time we’ll hopefully see more of this approach to AI engineering make its way into products and platforms, but for now, it’s early days, so we should still practice bowing before our toaster every morning, just in case.

Are you a software engineer? You can build your portfolio free today.

Are you an employer? Click here to create your profile.