Algorithms, artificial intelligence (AI), “data scraping,” and other means of evaluating vast amounts of information about people have become common tools in the hiring toolbox. As predicted, the use and scope of big data have grown rapidly over the past several years and continue to influence employment and hiring decisions. We now operate in a world where automated algorithms make consequential decisions at a scale and speed no human process could match. As with any new technology, however, the legal landscape is changing rapidly, so it is critical to evaluate these tools closely before incorporating them into your hiring practices. Why? Because these tools may unintentionally discriminate against a protected group.
The challenge is straightforward: AI algorithms are built on datasets collected or selected by humans. Those datasets are therefore subject to intentional or unintentional bias, which can produce biased algorithmic models. Examples of algorithmic bias have already surfaced in the news. In 2018, for example, a large company scrapped its proprietary hiring algorithm after discovering that it was biased in favor of men: the algorithm had been trained on patterns from resumes received over the previous ten years, and those resumes came mostly from men because the tech industry skews male. Rather than removing the existing bias against women in technology, the company’s system amplified it.
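The mechanism is easy to demonstrate. Below is a minimal, self-contained sketch (not any vendor’s actual system) of a naive resume screener that scores terms by how often they appear in past hires. All resumes, labels, and terms are fabricated for illustration. Because the historical hires skew male, a gender-correlated term like “women’s” acquires a negative weight even though gender itself is never an input feature.

```python
import math
from collections import Counter

# Fabricated historical resumes labeled with past hiring outcomes (1 = hired).
history = [
    ("software engineer java men's soccer club", 1),
    ("software engineer python chess club", 1),
    ("data analyst java men's rugby captain", 1),
    ("software engineer python women's chess club", 0),
    ("data analyst java women's coding society", 0),
]

# Count how often each term appears in hired vs. rejected resumes.
hired = Counter(w for text, y in history if y == 1 for w in text.split())
rejected = Counter(w for text, y in history if y == 0 for w in text.split())

def term_weight(word: str) -> float:
    """Smoothed log-odds of a term appearing in hired vs. rejected resumes."""
    return math.log((hired[word] + 1) / (rejected[word] + 1))

def score(resume: str) -> float:
    """Sum of learned term weights; higher means 'more like past hires'."""
    return sum(term_weight(w) for w in resume.split())

# "women's" never appears in past hires, so the model penalizes it --
# reproducing historical bias without ever seeing a gender field.
print(round(term_weight("women's"), 2))   # -1.1
print(round(term_weight("men's"), 2))     #  1.1
```

Nothing in this toy model “knows” an applicant’s gender; the bias arrives entirely through the skewed training history, which is exactly why testing outputs, not just inputs, matters.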
How the EEOC is Handling Algorithmic Discrimination
As the use of algorithms broadens, the Equal Employment Opportunity Commission (EEOC), which enforces the federal laws that make it illegal to discriminate against job applicants or employees because of their membership in a protected class, has begun to challenge hiring and employment practices that have a statistically significant disparate impact on a particular group and cannot be justified as a business necessity. The EEOC expects companies that use algorithms and AI to take reasonable measures to test an algorithm’s functioning in real-world scenarios to ensure the results are not biased, and to repeat that testing regularly. The EEOC has also interpreted the protected category of “sex” to include sexual orientation and gender identity, and the number and types of individuals protected from discrimination may well continue to expand.
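To make the disparate-impact standard concrete, here is a minimal sketch of the “four-fifths” (80%) rule of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a group’s selection rate below 80% of the highest group’s rate is conventionally treated as evidence of adverse impact. The group labels and counts below are hypothetical, and a real analysis would also test statistical significance rather than rely on the rule of thumb alone.

```python
# A minimal sketch of the EEOC's "four-fifths" (80%) rule of thumb.
# Group names and applicant counts are hypothetical illustrations.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    groups maps a group label to (selected, applicants). An impact ratio
    below 0.8 is the conventional red flag for adverse impact.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical applicant-flow data for one screening step.
impact_ratios = four_fifths_check({
    "group_a": (48, 120),   # 40% selected
    "group_b": (33, 150),   # 22% selected
})
for group, ratio in impact_ratios.items():
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

Here group_b’s impact ratio of roughly 0.55 falls well below 0.8, which is the kind of result that would prompt scrutiny of whether the practice is justified by business necessity.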
How Businesses Are Mitigating Risk
Absent concrete laws or guidelines, how can businesses mitigate the risks of algorithmic hiring systems? The key is vigilance and strong contracting practices whenever your business relies on AI in recruiting and selecting candidates, including when it relies on third-party vendors. Companies are responsible for ongoing assessment and regular auditing of their own algorithms and hiring practices (a minimal sketch of such an audit appears below), and if a third party provides or manages the algorithms used to make hiring decisions, it is still up to the employer to scrutinize validation claims and results before acting on them. It is also wise to include indemnification and hold-harmless clauses, along with appropriate disclaimers, in any vendor agreements. The Octillo Emerging Technologies team is ready to assess how your business can use algorithms in its hiring practices effectively and responsibly, and to help clients deploying AI-driven services and products with compliance with laws and regulations, data privacy issues, and AI governance and ethics.
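As one illustration of what ongoing auditing might look like in practice, the sketch below reuses the hypothetical four_fifths_check from the earlier example to log impact ratios for each hiring stage over time. The stage names, counts, and log file are assumptions for illustration; none of this substitutes for a validated audit program or legal review.

```python
# Recurring audit log built on the hypothetical four_fifths_check above.
# Stages, data, and the log file are illustrative assumptions only.
import datetime
import json

def run_audit(stage: str, groups: dict[str, tuple[int, int]]) -> dict:
    """Record impact ratios for one hiring stage, with a timestamp."""
    record = {
        "stage": stage,
        "run_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "impact_ratios": four_fifths_check(groups),  # defined in the sketch above
    }
    # Append each run to a log so trends and regressions stay reviewable.
    with open("hiring_audit_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return record

# Audit every screening stage, not just final offers (counts are hypothetical).
run_audit("resume_screen", {"group_a": (48, 120), "group_b": (33, 150)})
run_audit("structured_interview", {"group_a": (20, 48), "group_b": (14, 33)})
```

Auditing each stage separately matters because a model that looks fair at the offer stage can still embed bias earlier in the funnel, and a written log of repeated tests is exactly the kind of “reasonable measure” regulators expect to see.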
*Attorney Advertising. Prior Results Do Not Guarantee A Similar Outcome.