
The 4 Imperatives of Artificial Intelligence Assessments

How to make sense of the exciting, confusing, and sometimes unnerving world of artificial intelligence assessment technology.

The phrase Artificial Intelligence (AI) is thrown around everywhere. In the assessment space alone there are probably hundreds of companies talking about AI assessment capabilities, but what does that mean? How much of it is actually AI and how much of it is pure hype? Is AI a small piece of the offering, or have they fully given the robots control? It’s hard for me to tell, and I’ve worked in this industry my entire adult life.

There’s even more uncertainty among the general public. Look at the comments section on any article about AI-based hiring tools and you’ll find chaos and mass confusion. I bravely waded into the comments on one recent article reporting on how a company’s AI software is being used. Some highlights:

"How do they know the algorithm isn’t just selecting charming sociopaths? They often get flagged as potential leaders."

"How would this system handle someone with a speech impediment? Someone who is blind? What if I’m sick that day and a cold has my voice down an octave? What happens if my cat jumps up to add her thoughts?"

In short, the world of AI assessment is confusing, and there’s little written on how to make sense of it all. Are we all waiting for the first major court case to reach the U.S. Supreme Court so we have precedent to follow? How is the general public going to accept any AI assessment tools if we can’t all mutually agree upon what is off-limits when it comes to the technology?

I can’t solve this problem on my own, but I think it is time to establish a foundation we can all agree upon. I give you my proposed foundation: The 4 Imperatives for AI Assessments.

#1—No Black Box!

Sure, AI is hard to understand. There’s so much jargon and far too many acronyms to learn. But just because it is sophisticated doesn’t mean we can’t demand that anyone who creates an AI assessment be able to explain how it works, right? This isn’t the time to let the robots start running everything.

Nobody should build an AI assessment with scoring and features they don’t (or can’t) understand.


#2—No Purely ‘Atheoretical’ AI!

I know climate change is causing droughts, wildfires, and flooding, but even if the world becomes an actual dustbowl, we cannot resort to dustbowl empiricism. Theory matters. Sure, the power of AI and machine learning can tease out relationships and predictions that traditional analytical techniques cannot, but that is no excuse to revert to a second period of dustbowl empiricism. We all know that correlation is not causation, and while these new technologies let us build predictive models from datasets once thought too noisy to analyze, they can also lead us to make predictions that don’t hold up over time.

You may be able to throw all the data from your current workforce into the analytics ether and come up with some seemingly impressive predictive features. But without theory, there’s a good chance you’re exploiting a statistical artifact in your dataset that won’t hold up when you unleash the model on the general population. Does it matter if you can ‘prove’ that your algorithm reliably predicts that people who identify as Hufflepuffs are 37% more likely to stay on the job than those who identify with any other Hogwarts house? Do you want to go to court with that defense?
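To make that concrete, here is a minimal, entirely hypothetical Python sketch (random data only, no real assessment or SHL method involved): screen enough candidate features against a noisy outcome, and one of them will look impressively predictive by chance alone, then collapse back to coin-flip accuracy on fresh data.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def spurious_feature_demo(n_people=200, n_features=500):
    """Dustbowl empiricism in miniature: every outcome and every
    feature below is a pure coin flip, so NO feature carries a
    real signal -- yet the best of 500 features will still appear
    to 'predict' the training outcome."""
    train_outcome = [random.random() < 0.5 for _ in range(n_people)]
    test_outcome = [random.random() < 0.5 for _ in range(n_people)]

    def accuracy(feature, outcome):
        # fraction of people where the feature matches the outcome
        return sum(f == o for f, o in zip(feature, outcome)) / len(outcome)

    best_train_acc = 0.0
    best_test_acc = 0.0
    for _ in range(n_features):
        train_vals = [random.random() < 0.5 for _ in range(n_people)]
        test_vals = [random.random() < 0.5 for _ in range(n_people)]
        acc = accuracy(train_vals, train_outcome)
        if acc > best_train_acc:
            # keep the feature that looks best in-sample, and record
            # how that same feature fares on data it never saw
            best_train_acc = acc
            best_test_acc = accuracy(test_vals, test_outcome)
    return best_train_acc, best_test_acc

train_acc, test_acc = spurious_feature_demo()
print(f"best feature on training data: {train_acc:.0%}")
print(f"same feature on new data:      {test_acc:.0%}")
```

The in-sample “winner” lands well above 50% accuracy while its out-of-sample accuracy hovers around chance, which is exactly the artifact a theory-free model search can mistake for insight.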

All features and variables used by AI assessments should be relevant to the purpose for which they are being used.

#3—AI Needs Oversight

Let’s not forget, most AI assessments are trained on data from a single organization, and no organization in the world has made all of its employee decisions without some form of bias. If you simply turn an AI loose on such a dataset, odds are it will bake the existing bias into its own programming and double down on the biases that preceded the AI assessment’s implementation.

Now, instead of humans making biased decisions, you’ll have a computer doing it for them, and that’s not acceptable. We need experienced, ethical, and educated professionals closely supervising the creation of AI assessments. Trained practitioners should help select the variables and features an AI ultimately uses, to reduce the risk of bias against race and gender. Just because an algorithm shows no racial bias when run against data from the very organization used to create it doesn’t mean it won’t be biased when used outside that organization.
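One routine piece of that oversight is an adverse-impact audit. The Python sketch below uses made-up numbers and is not any vendor’s actual method: it computes each group’s selection rate and compares it to the highest-rate group, the screen behind the well-known EEOC “four-fifths rule,” under which a ratio below 0.80 is a signal to investigate further.

```python
from collections import defaultdict

def adverse_impact_ratios(groups, selected):
    """For each group, return (selection rate, impact ratio), where
    the impact ratio is that group's selection rate divided by the
    highest group's rate. Ratios under 0.80 fail the four-fifths
    screen and warrant closer review."""
    totals = defaultdict(int)
    hires = defaultdict(int)
    for group, was_selected in zip(groups, selected):
        totals[group] += 1
        hires[group] += int(was_selected)
    rates = {g: hires[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    return {g: (rates[g], rates[g] / top_rate) for g in rates}

# Hypothetical audit data: a group label per candidate, and whether
# the algorithm recommended that candidate.
groups   = ["A"] * 100 + ["B"] * 100
selected = [True] * 60 + [False] * 40 + [True] * 40 + [False] * 60

for group, (rate, ratio) in adverse_impact_ratios(groups, selected).items():
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"group {group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```

Here group B is selected at 40% versus group A’s 60%, an impact ratio of 0.67, so the audit flags the algorithm for review. Checks like this are a floor, not a ceiling: passing the four-fifths screen on one organization’s data is no guarantee of fairness elsewhere.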

All AI assessments must be developed under the close supervision of experienced, ethical, and educated professionals.


#4—Humans Are More Important than AI

At the end of the day, it is going to come down to the data scientists, I-O psychologists, behavioral economists, engineers, etc., to drive the ethical application of these new technologies. It is easy to lose sight of the fact that these tools have a big impact on the lives of millions of people. We talk about the statistical probabilities associated with increased prediction, but for the applicant on the other side of the screen – the woman who needs this job so she can feed her family – this is as high as the stakes get. We have a moral obligation to ensure that every candidate is given her or his best chance to perform.

We owe it to companies and candidates alike to ensure that all assessments used to determine someone’s chances for employment are fair, bias-free, job-related, and truly predictive of job performance.

The world of AI is exciting, confusing, and unnerving. Now is the time to take a stand, insist on the ethical application of AI in assessment, and agree on cultural norms that put good science and human impact at the center of our practices.

SHL has been using machine learning for over ten years, and we find that the most valid, useful, and unbiased technology is based on a foundation of testing and research from our 300-plus scientists. To learn more about SHL’s powerful science-backed, machine-learning driven tools – contact us today!



Lance Andrews

Lance is a former Head of Specialist Solutions for SHL's Americas business. A consultant at heart, Lance applied the experience and expertise gained from over 16 years in consulting and consulting leadership to provide strategic guidance to SHL's customers. He created greater value for customers by analyzing, diagnosing, and understanding their needs, and by ensuring SHL's products and services were positioned and optimally aligned to address those complex needs.

Explore SHL’s Wide Range of Solutions

With our platform of pre-configured talent acquisition and talent management solutions, maximize the potential of your company’s greatest asset—your people.

See Our Solutions