The companies that make and sell employment algorithms now commonly used in hiring processes have been criticised by researchers for a lack of transparency and consistency.
A study by researchers from Cornell University's Faculty of Computing and Information Science found that the vast majority of vendors keep the details of their employment algorithms under wraps, undermining efforts to combat bias in automated hiring processes.
This is despite bias being an increasingly high-profile issue in algorithmic recruiting tools. Amazon, for example, faced considerable criticism for developing an unintentionally sexist recruiting tool, even though it abandoned the project before the tool was ever used to evaluate candidates.
The products assessed in this study, however, are being used in real-world hiring processes, and because employment algorithms are considered intellectual property, there is no legal requirement to disclose how they are developed or, therefore, what steps are taken to tackle bias.
And in the case of the vast majority of the 19 vendors assessed in the study, that information simply isn't disclosed.
Bias in the machine: The transparency challenge in employment algorithms
The researchers found that bias is often not mentioned at all by vendors of employment algorithms, something that they considered a cause for concern.
“Plenty of vendors make no mention of efforts to combat bias, which is particularly worrying since either they’re not thinking about it at all, or they’re not being transparent about their practices,” said study first author Manish Raghavan, a doctoral student in computer science.
Where vendors have addressed the bias issue, their commentary has often been vague, according to the researchers, using terms such as “fairness” without clarification of what this actually means.
“Calling an algorithm ‘fair’ appeals to our intuitive understanding of the term while only accomplishing a much narrower result than we might hope for,” said Raghavan.
The problem is, the researchers say, that there is no consistent concept of what a fair employment algorithm is – and vendors are ultimately deciding for themselves what defines a biased or non-biased algorithm without external oversight.
Waking up to bias
Despite the concerns about a lack of transparency or consistency among the creators of employment algorithms, the researchers do believe that attitudes towards the issue are beginning to change, and that the matter is starting to be taken seriously.
“I think we’re starting to see a growing recognition among creators of algorithmic decision-making tools that they need to be particularly cognizant of how their tools impact people,” said Raghavan.
The challenge now is developing a consistent approach to doing so.
“Many of the vendors we encountered in our work acknowledge this (impact) and they’re taking steps to address bias and discrimination,” he added.
“However, there’s a notable lack of consensus or direction on exactly how this should be done.”
The research paper, Mitigating Bias in Algorithmic Employment Screening: Evaluating Claims and Practices, will be presented in January at the Association for Computing Machinery Conference on Fairness, Accountability and Transparency.