AI for DEI Requires a Human Touch
Why Artificial Intelligence is a flawed solution to a systemic problem
Is it fair to market Artificial Intelligence (AI) tools as a viable solution to the lack of diversity in the workplace?
Can resume-reading machines, chatbots, video analysis, and speech-to-text transcription cancel out human bias, sexism, ableism, and racism?
According to Cambridge’s Centre for Gender Studies, the answer is no; its researchers describe reliance on AI in the hiring process as “little better than an automated pseudoscience.”
It is a mistake to lean on techno-solutionism for a process as nuanced as identifying, interviewing, and hiring ideal candidates without bias. There is no quick fix for incorporating diversity, equity and inclusion (DEI) into a company’s values and day-to-day operations.
Likewise, it takes more than a singular investment in machine learning technology to eradicate hiring discrimination. DEI requires a multi-layered long-term investment of time and expert consultation to facilitate perpetual growth and change within the culture of an organization.
Judging a book by its cover
Undergraduate researchers on the Cambridge team built an AI “Personality Machine,” modeled after AI hiring technology, in order to highlight the inherent flaws in its use. The results showed how subtleties like lighting, background, and even wardrobe choices and facial expressions yielded wildly variable personality readings for the same person. The AI drew correlations between personality and arbitrary properties of the video image, such as brightness. The baseless high and low scores that resulted could be the difference between advancing to the next round and being rejected from a hiring process.
AI technology lauded as a tool to facilitate diversity instead serves to perpetuate uniformity, with every candidate held up against a standard ideal-candidate composite. The candidate best able to win over the algorithm is the one who mirrors the outputs the AI has learned to identify. Because AI training is based on employment trends from the past, history will repeat itself, and AI will promote the white male candidates who most resemble the current employee pool.
An infamous case of AI discrimination
In 2014, Amazon machine learning specialists set out to build technology that would automate the process of resume reading. Reviewing tens of thousands of resumes is a costly and time-consuming endeavor for human eyes, and human resource teams are desperate for software to cut the costs of high-volume recruitment.
It made sense for the automation Amazon is famous for to extend into its hiring process. Its AI rating system, not unlike its product rating system, assigned one to five stars to each resume. Ideally, the tool would scan 100 resumes, select the five top scorers, and those five would be hired. Instead of rating candidates in a gender-neutral fashion, though, the machine revealed itself to be sexist.
Where did Amazon go wrong with AI to achieve DEI?
AI is only as smart as you train it to be. In Amazon’s case, the technology vetted potential new hires against criteria drawn from the previous 10 years of resumes. As a result, white male dominance in tech over that 10-year lookback manifested in the AI’s decision-making. Industry giant Amazon created an AI that penalized women because bias was inherent in the creation process.
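The mechanism is easy to demonstrate. The sketch below is purely illustrative: the resumes, words, and outcomes are invented, and Amazon never disclosed how its model actually worked. It shows how a naive scorer "trained" on a skewed hiring history learns to penalize a word associated with the rejected group.

```python
# Illustrative only: a toy scorer trained on synthetic "historical hiring"
# data. All data is made up; it simply shows how past bias leaks into
# learned scores when history is used as the training signal.

from collections import Counter

# Synthetic history: resumes (as word sets) with past hire (1) / reject (0)
# outcomes. Past hiring skewed against resumes mentioning "women's"
# (e.g. women's clubs or sports teams).
history = [
    ({"python", "sql", "chess"}, 1),
    ({"python", "java"}, 1),
    ({"python", "sql", "women's"}, 0),
    ({"java", "women's"}, 0),
    ({"sql", "chess"}, 1),
    ({"python", "women's"}, 0),
]

# "Train" by weighting each word: +1 for each hire containing it,
# -1 for each rejection containing it.
weights = Counter()
for words, hired in history:
    for w in words:
        weights[w] += 1 if hired else -1

def score(resume_words):
    """Score a new resume by summing its learned word weights."""
    return sum(weights[w] for w in resume_words)

# Two resumes identical except for one word the model learned to penalize:
print(score({"python", "sql"}))             # higher score
print(score({"python", "sql", "women's"}))  # lower score: bias reproduced
```

No word in the data says anything about ability, yet the model reproduces the historical skew; removing the one obvious word would not help, because any feature correlated with the rejected group can serve as a proxy.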
Where did Amazon go right?
Amazon observed the problem fairly quickly; however, attempts to build in more gender neutrality were ineffective. The machine kept finding other ways to discriminate in its effort to match candidates against the ideal examples it had been trained to uphold. By 2016, all hope was lost and the project was scrapped.
Amazon was observant enough to recognize that its AI was problematic. Unfortunately, other companies continue to build and release AI technology without any consideration of its level of bias. Sadly, there is little regulation of, or accountability for, how these technologies are tested, and much of how they operate is shrouded in mystery. Any company relying on these tools should acknowledge their shortcomings and offer alternative avenues for women, people of color, and disabled persons to gain employment.
Something to talk about – AI discrimination in speech
Video-analysis and speech-to-text transcription software has evidenced racial, cultural, and ability-based discrimination when used in the hiring process. One-way AI video interviews, which record an applicant’s responses to predetermined questions, can yield a higher word error rate (WER) for accented English speakers, dialect speakers, or applicants with speech impairments. If the WER value is a hiring consideration, this can result in the rejection of non-native speakers, speakers of non-standard English, or the speech impaired. Research is lacking in this particular domain, and it is vitally important as yet another example of AI-driven discriminatory practices.
Bias is bigger than data points
The lesson of these high-profile examples of AI’s threat to marginalized communities is clear: AI and hiring do not mix.
Reducing DEI down to programmable data points is not as achievable as these technologies might suggest.
Industry giants like Amazon have recognized the false hope behind these well-meaning advancements. Technology made by humans, trained by humans, and put to work in highly discriminatory industries cannot help but perpetuate the very biases they claim to eliminate.
Bias is built into the data used to teach the AI tool because bias is ingrained in the worldview of the designers, developers, and administrators that accept AI decisions as fair.
AI is here to stay
According to the U.S. Equal Employment Opportunity Commission, as of 2020, 55% of hiring managers were utilizing AI technology to guide their applicant selection process. Another 2020 study of 500 companies across a myriad of industries in five countries reported that 24% of businesses use AI for recruitment, with 56% planning to use it the following year. A poll of 334 human resources professionals in April 2020, early in the pandemic, found 86% of companies putting AI to work in their hiring and recruitment process.
In other words, the use of AI in hiring practices is growing in appeal and popularity. Addressing DEI shortcomings within an organization’s hiring practices cannot be managed by an AI tool alone. Nor can the work of removing bias within an AI tool be delegated to the data scientist and developer alone. Support, information, and insight must come from all directions. Machine learning specialists and engineers are key, but so are sociologists, DEI experts, civil rights attorneys, and risk managers.
Using AI in hiring practices is here to stay, at least for a while. It is an imperfect tool that, much like imperfect humans, is best used in conjunction with a diversified approach and a critical eye: one that dares to learn from the mistakes of the past, tear down outdated harmful practices, and strive for DEI-conscious outcomes.
AI technology reform is a multidisciplinary endeavor.
As conversations around AI use come up in your organization, it’s important to consider the following:
Government Regulation
- The US Equal Employment Opportunity Commission has launched an initiative to:
  - Monitor AI use in hiring practices.
  - Hold AI accountable to federal civil rights laws.
- The Algorithmic Accountability Act, if passed, would require “impact assessments” that scan for bias and evaluate effectiveness in AI use for employment, loan, and housing applications.
Technologists’ Responsibilities
- Make the details of AI use readily available to the public, and pull back the curtain on the proprietary black box.
- Describe how personal data will be used.
- Offer an opt-out.
Companies’ Efforts
- Acknowledge that the technology is flawed and bridge the gap with alternative hiring events or outreach for women, people of color, and people with disabilities.
- Educate hiring personnel about bias in humans and tech.
- Support an inclusive company culture steeped in the nuances of DEI, from the inside out and the outside in.
- Consult with professionals in the field of DEI. At Spectra Diversity our experienced consultants and facilitators work hard to identify the DEI needs of an organization and provide a customized approach to attaining DEI-sensitive outcomes.
- Seek out diverse perspectives in future hires who understand the problems and can deconstruct the barriers that impede fair, unbiased hiring.
Applicants and Employees can help too
- Blow the whistle on hiring discrimination.
- Hold employers accountable by keeping local news outlets and elected officials abreast of discriminatory practices.
Sources:
- ACLU, “How Artificial Intelligence Can Deepen Racial and Economic Inequities,” Olga Akselrod, July 13, 2021.
- CNN, “AI Can Be Racist, Sexist, and Creepy. What Should We Do About It?” Zachary Wolf, March 18, 2023.
- New America, “AI Discrimination in Hiring and What We Can Do About It,” Aditi Peyush, September 27, 2022.
- Pulitzer Center, “Are AI Hiring Tools Racist and Ableist?” Hilke Schellmann, February 8, 2023.
- Reuters, “Amazon Scraps Secret Recruiting Tool That Showed Bias Against Women,” Jeffrey Dastin, October 10, 2018.
- University of Cambridge, “Claims AI Can Boost Workplace Diversity are Spurious and Dangerous,” Drage & McInerney, October 10, 2022.