New study provides updated validity estimates
Sometimes a study comes along that changes the perspective of the scientific community and/or of industry. Such a study came out recently, and it changes which meta-analysis we will reference in the future on the operational validity of selection procedures. At the same time, it is more of a beginning than an end.
Sackett et al. (2021) systematically revisited prior meta-analyses on the criterion-related validity of different selection procedures, that is, how well different selection procedures (e.g., GMA, the Big Five, interviews) predict job performance. Their results show that, for many selection procedures, validity has been substantially overestimated in earlier studies.
Our aim here is to give you an overview of the main findings (summarised below in a table and a figure) and offer our take on them, along with some interesting take-aways.
The main issue
The main issue raised by Sackett et al. (2021) is that a method for correcting range restriction in meta-analyses has often been applied inappropriately in the past. Range restriction arises because validity studies typically only observe job performance for people who were actually hired, so the correlation is computed on a sample with a narrower range of predictor scores than the full applicant pool. Corrections scale the observed correlation back up, and assuming more restriction than actually occurred inflates the estimate.
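To illustrate the mechanism, here is a minimal sketch in Python of the classic Thorndike Case II formula for direct range restriction (Sackett et al. (2021) deal with the more complicated indirect-restriction case, but the inflation mechanism is analogous). All numbers are hypothetical; the point is simply that assuming heavier restriction than actually occurred inflates the corrected validity.

```python
import math

def correct_for_range_restriction(r_observed: float, u: float) -> float:
    """Thorndike Case II correction for direct range restriction.

    r_observed: validity observed in the restricted (e.g., hired) sample.
    u: ratio of restricted to unrestricted predictor SD (0 < u <= 1);
       a smaller u means more assumed restriction and a larger correction.
    """
    return (r_observed / u) / math.sqrt(1 - r_observed**2 + (r_observed / u) ** 2)

r_obs = 0.30  # hypothetical observed validity

# Hypothetical heavy restriction (u = .60): the estimate is scaled up a lot.
print(round(correct_for_range_restriction(r_obs, 0.60), 2))  # 0.46

# Hypothetical mild restriction (u = .90): the correction is far smaller.
print(round(correct_for_range_restriction(r_obs, 0.90), 2))  # 0.33
```

If the data actually came from a setting with mild restriction, applying the heavy-restriction assumption would report .46 where .33 is appropriate, a gap of the same order as the average over-estimation found by Sackett et al. (2021).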
The new study revisits the prior meta-analytic conclusions summarised in Schmidt and Hunter (1998) and reports revised validity estimates in which the appropriate correction has been applied.
This work demanded a good deal of detective work on the prior studies included in the individual meta-analyses in Schmidt and Hunter (1998), and recalculation of validity estimates based on the available data.
Some of the old studies included in the individual meta-analyses in Schmidt and Hunter (1998) did not provide enough information for the calculations to be repeated, and their estimates have therefore not been updated. These include reference checks, graphology, age, peer ratings and years of education.
When interpreting the results, we find it important to keep the following points in mind:
- The methodological correction relates to meta-analytic conclusions and not to individual studies.
- The reported results are not all based on new data, but on recalculation of old data using the available information on individual studies within the different meta-analyses.
- There are many other factors relevant to evaluating and comparing validity estimates that this study does not address, including (but not limited to) different types of selection procedures, different measures of job performance (the criterion), the incremental validity of combining several selection procedures, and utility analysis.
- The revised results are NOT meant as a new go-to list for comparing validity estimates. The revised estimates reflect what is known now, and further refinement is still needed.
Main results
The main results show that over-estimation of validity is widespread in the literature. After the revision by Sackett et al. (2021), validity estimates are reduced by .10 to .20 correlation points on average.
For comparison, the table below lists the validity estimates summarised in Schmidt & Hunter (1998) alongside the revised estimates by Sackett et al. (2021). The figure also includes revised validity estimates by Sackett et al. (2021) that are not available in Schmidt & Hunter (1998).
Table 1. Summary of validity estimates reported in Schmidt & Hunter (1998) and the revised validity estimates by Sackett et al. (2021).
Figure 1. Summary of validity estimates reported in Sackett et al. (2021). The bars show the old validity estimates summarised in Schmidt & Hunter (1998) in red and all revised validity estimates (Sackett et al., 2021) in blue, by selection procedure.
Which selection procedures are the winners?
The top selection procedures in terms of validity build on comprehensive job analysis (structured employment interviews, job knowledge tests, empirically keyed biodata, work sample tests) and have relatively consistent validity estimates (low standard deviation).
Keep in mind that selection procedures based on a thorough job analysis are job-specific by design and are in that regard closely tied to the outcome measure of job performance.
In contrast, selection procedures that measure general psychological phenomena are job-independent and show comparatively lower associations with job performance.
However, among the job-independent selection procedures that measure general psychological phenomena, GMA and integrity tests come out on top, followed by personality-based tests of emotional intelligence. Compared with the selection procedures that build on comprehensive job analysis, these have relatively inconsistent validity estimates (high SD), meaning that the estimates vary more across individual studies. When this variance is taken into account, GMA comes out better than integrity tests (i.e., it has the lower SD).
Within the Big Five personality traits, conscientiousness is the best predictor of job performance. Contextualised tests (those using questions specific to a work situation rather than general ones) do better at predicting actual job performance.
Take-aways
The main take-away is that most selection procedures are still useful. Those that ranked high previously still rank high; they are just not as predictive of job performance as previously thought.
Results indicate that among job-independent psychological tests, GMA and integrity tests still come out on top, with less disparity in estimates for GMA. Therefore, when choosing between psychological tests for a recruitment process, GMA is a good predictor of future job performance. Examples of GMA tests are ACE and CORE.
GMA is closely followed by integrity tests. Integrity here refers to the tendency to be honest, trustworthy and dependable, and a lack of integrity is associated with counterproductive work behaviours. Examples of personality-oriented integrity measures include the dimensions Cooperation, Resilience, Compliance and Delivery in OPTO.
As noted above, conscientiousness is the best Big Five predictor of job performance. Examples of measures of conscientiousness are the aspects Dutifulness, Industriousness, Structure and Quality Assurance in OPTO.
The Sackett et al. (2021) study is a reminder that science (in general, but especially on predicting human behaviour) is a moving field, where our aim should be to use the latest and best evidence available. We say this because much of the literature tends to reference what is already known and already being cited. We do this as well, and therefore we also find it important to point it out when a study changes our perspective.
However, we must also keep in mind that Sackett et al. (2021) focused on revising the evidence with respect to one methodological issue, and that many other factors are not considered in this study. The study is therefore first and foremost an invitation for further research to close in on better validity estimates in meta-analyses of predictor-criterion relationships in personnel selection.
In terms of outlook, research still supports our recommendation to combine assessment procedures and to use selection procedures based on a thorough job analysis.
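As a back-of-the-envelope illustration of why combining procedures helps, the standard two-predictor multiple-correlation formula shows how a second, not-too-redundant predictor raises overall validity. The numbers below are hypothetical and purely illustrative, not figures from the paper.

```python
import math

def multiple_r(r1: float, r2: float, r12: float) -> float:
    """Multiple correlation of two predictors with one criterion.

    r1, r2: criterion validities of the two predictors.
    r12: intercorrelation between the two predictors.
    """
    r_squared = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return math.sqrt(r_squared)

# Hypothetical values: predictor A validity .31, predictor B validity .42,
# predictor intercorrelation .30 (illustrative only).
print(round(multiple_r(0.31, 0.42, 0.30), 2))  # 0.46 -- above either alone
```

The less the two procedures overlap (the lower their intercorrelation), the more validity the combination adds.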
References
Sackett, P. R., Zhang, C., Berry, C. M., & Lievens, F. (2021). Revisiting meta-analytic estimates of validity in personnel selection: Addressing systematic overcorrection for restriction of range. Journal of Applied Psychology. Advance online publication. https://doi.org/10.1037/apl0000994
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274.