
Talent Assessment Validity Claims May Be Exaggerated

In light of new research, SHL provides guidance for users of talent assessments to better evaluate claims of validity made by vendors.

An article recently published in the Journal of Applied Psychology is expected to create quite a disruption in the world of talent assessment and employee selection. The article reviewed common meta-analytic practices and concluded that the strength of the relationship between assessment scores and job performance is often overestimated.

To help employees, practitioners, and customers unpack the findings, SHL has published a report called Guidance for the Interpretation of Validity Coefficients. In this blog, I talk about SHL’s approach to evaluating talent assessment validity and why generally accepted research findings need a new look. But first, let’s understand some terminology used in the talent assessment domain.

Terminology in talent assessment and employee selection

In the world of talent assessment for employee selection, the term validity refers to the accuracy of interpretations drawn from assessment scores (e.g., that assessment scores predict job performance). Validation is the process of establishing evidence that supports these interpretations.

In talent assessment, the strongest form of validation evidence is generally considered to come from criterion-related validation, which demonstrates that scores on an assessment (i.e., the predictor) are related to scores on a criterion measure of interest (most often job performance). Criterion-related validity evidence is usually presented in the form of a validity coefficient, or correlation (r), which can range from -1 to 1 (though validity coefficients are typically positive) and indicates the magnitude of the relationship between assessment and criterion scores.
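To make this concrete, here is a minimal sketch of how a validity coefficient is computed: the Pearson correlation between assessment scores and a criterion measure. All values are hypothetical illustrations, not SHL data.

```python
# Minimal sketch: a validity coefficient is the Pearson correlation (r)
# between assessment scores (predictor) and job performance (criterion).
# The values below are hypothetical, for illustration only.
import statistics

assessment_scores = [52, 61, 47, 70, 58, 66, 43, 74, 55, 63]              # predictor
performance_ratings = [3.1, 3.8, 2.9, 4.2, 3.4, 3.9, 2.7, 4.5, 3.2, 4.0]  # criterion

# statistics.correlation (Python 3.10+) returns Pearson's r
r = statistics.correlation(assessment_scores, performance_ratings)
print(f"Observed validity coefficient: r = {r:.2f}")
```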

Predictive validation studies collect predictor data (i.e., assessment scores) from job applicants prior to selection and criterion data (e.g., manager ratings) from those hired after they have been on the job for some time. Concurrent validation studies collect both predictor and criterion data from job incumbents close together in time.

Meta-analysis is a method of cumulating results across multiple validation studies to reduce the influence of sampling error and get a more accurate estimate of the true correlation between variables than is possible in a single study. Meta-analyses of the validity of selection procedures are very common in the academic literature and among assessment vendors and have been used to establish the expected mean level of validity for different types of assessments. Meta-analyses of selection procedures often apply statistical corrections to validity coefficients to account for limitations in validation studies that suppress validity estimates (e.g., restriction of range due to selection based on assessment scores). This is where errors are most commonly made because different kinds of validation studies require different kinds of corrections.
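To illustrate the mechanics, here is a bare-bones sketch of two steps that appear in validity meta-analyses: a sample-size-weighted mean of validity coefficients across studies, followed by the classic correction for direct range restriction (Thorndike's Case II). The study values and the standard deviation ratio are hypothetical, and real meta-analyses correct for additional artifacts (e.g., criterion unreliability).

```python
# Bare-bones sketch of (1) a sample-size-weighted mean validity and
# (2) the Thorndike Case II correction for direct range restriction.
# All study values are hypothetical, for illustration only.
import math

# (r, n) pairs from hypothetical primary validation studies
studies = [(0.22, 150), (0.31, 90), (0.18, 240), (0.27, 120)]

# (1) Sample-size-weighted mean observed validity
total_n = sum(n for _, n in studies)
mean_r = sum(r * n for r, n in studies) / total_n

# (2) Correct for direct range restriction. U is the ratio of the
# applicant-pool (unrestricted) SD to the incumbent (restricted) SD
# of assessment scores; U > 1 when selection has narrowed the range.
U = 1.4  # assumed here for illustration
corrected_r = (mean_r * U) / math.sqrt(1 + mean_r**2 * (U**2 - 1))

print(f"Weighted mean observed r: {mean_r:.3f}")
print(f"Range-restriction-corrected r: {corrected_r:.3f}")
```

The sketch also shows why the inputs matter: if the ratio U is estimated poorly, the corrected coefficient can be substantially overstated, which is exactly where the errors described above tend to arise.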


SHL’s approach and perspective

SHL has historically taken a conservative approach to correcting validity coefficients. We have typically not applied corrections for range restriction, primarily because most criterion-related validation studies are concurrent, which makes realistic estimates of the variables in the correction formulas difficult to obtain. As a result, the conclusions reached by the recent article mentioned at the beginning of this blog are unlikely to affect our own validation study results or meta-analyses. Our technical manuals contain all the information necessary to evaluate validity calculations and any statistical corrections. This may not be the case for validity claims made by other talent assessment vendors.

Because of (a) the impact the article is likely to have on both science and practice, (b) the likelihood that other talent assessment vendors make similar mistakes in their own meta-analyses, and (c) the complexity of the topic for those who are not immersed in the details of validation analyses, SHL has published a white paper called Guidance for the Interpretation of Validity Coefficients. This report has been developed to:

  1. Explain the issues associated with estimating validity coefficients in primary studies and in meta-analyses,

  2. Summarize how the level of validity for different types of selection procedures has changed, and

  3. Provide guidance for buyers and users of talent assessments to better evaluate claims of validity made by vendors.

The intended audience for this report includes current or potential users of talent assessments who want to be informed consumers when evaluating technical documentation provided by talent assessment vendors.

Read the report to learn more about our findings and some practical implications, and contact SHL to inquire about our validated talent assessment inventory.


Author

Jeff Johnson

Dr. Jeff Johnson is a Principal Research Scientist at SHL. On the Research and Development team, he focuses on designing and developing innovative products and solutions that support employee selection and development, particularly the identification and placement of current and future leaders. His research demonstrating the impact of context on diversity and the prediction of leader performance led to the development of SHL’s Leader Edge selection and development tool and earned the 2018 M. Scott Myers Award for Applied Research in the Workplace from the Society for Industrial and Organizational Psychology.
