Is artificial intelligence creating a new age of discrimination?

Welcome to the first of our series on artificial intelligence and its impact on the Diversity, Equity, and Inclusion (DEI) industry. This week we’ll focus on the recent advances in AI and how they have played into larger discussions about bias, race, and discrimination. Next week, we’ll look more specifically at how AI is disrupting the DEI industry.

We’ve all been hearing a ton about AI over the past year. With the release of OpenAI’s ChatGPT and the wave of competing models that followed, new questions have arisen over how we can responsibly employ these technologies across a range of industries.

The complexity of the jobs we ask AI to do has increased exponentially in the past few years. And ChatGPT is only one—admittedly very dazzling—application of the technology. Industries of all kinds have been using AI for years to analyze large data sets, make recommendations, and troubleshoot performance. Automation is the central promise of AI, and it’s an attractive sell to most individuals and organizations in any industry.

The other major benefit is the potential reduction, or even elimination, of human error. IBM Watson, another famous AI system, was at one point touted as a future super-powered analysis tool for healthcare data. The idea was that Watson would compare healthcare data from around the world in fields like oncology and make recommendations to doctors based on data sets that would have taken humans years to analyze.

This dream eventually crumbled when it became clear that Watson couldn’t simply look at data from patients in one part of the world and yield recommendations that were relevant to patients in healthcare systems anywhere else.

And this points to one of the biggest caveats of artificial intelligence, machine learning, and the “eliminating human error” promise that both make: AI is only as good as the data we feed it.

IBM acquired companies with healthcare data on millions of cancer patients in the U.S., but that didn’t mean Watson could make recommendations for patients in China. While it could analyze those data sets to reveal unseen patterns and insights, it couldn’t safely apply them to patients in hospitals with different treatment protocols or in regions where different medicines were available.

For IBM, this was a chilling and humbling conclusion to years and billions of dollars of investment. As one commentator noted, the Watson team mistakenly “led with marketing first, product second.” They saw massive potential for AI in healthcare and made ambitious claims about the technology’s ability to reduce human error, detect unforeseen patterns, and accelerate diagnoses. But over the next eight years, they learned there is much more to medicine than processing and analyzing data. Today, IBM and Watson have fallen behind their competitors and become a cautionary tale about “technological hype and hubris around AI.”

But Watson only represents one case in which input data has led to faulty or irrelevant results from artificial intelligence.

As we progress into the next age of AI, and all the promises of technological wizardry it brings, we have to ask ourselves: who is responsible for the design of these systems? What data are they using, and is it representative of the human beings they claim to serve? And if AI is only ever as good as its inputs, what are we doing to ensure those inputs reflect the future we want to live in, and not the past we need to leave behind?

Race, bias, and artificial intelligence

Many of us have probably heard about AI’s potential for repeating and propagating human biases. Most of these failures come down to the same issue: biased data sets.

For example, in an online project titled Beauty.AI, which touted itself as the first beauty pageant judged by machines, sophisticated algorithms were designed to judge the beauty of entrants based on “objective” criteria.

But the designers soon discovered that the algorithms had a preference for people with lighter skin. The data sets they used to help the algorithm learn what was “beautiful” all severely underrepresented people of color. So when the results were announced, only one person out of 44 winners had “dark” skin.
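
A representation audit of the training data is the kind of step that could have flagged this before the contest ran. Below is a minimal sketch; the group labels and the 15% threshold are illustrative assumptions, not details from the Beauty.AI project.

```python
from collections import Counter

def audit_representation(samples, group_key, min_share=0.15):
    """Flag demographic groups that fall below a minimum share of the training set.

    `samples` is a list of dicts describing training examples; `group_key` names
    the demographic attribute to audit. The 15% threshold is an arbitrary
    illustration, not an established standard.
    """
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    report = {}
    for group, count in counts.items():
        share = count / total
        report[group] = (share, "UNDERREPRESENTED" if share < min_share else "ok")
    return report

# Toy training set, heavily skewed toward lighter skin tones,
# mirroring the kind of imbalance described above.
training_set = (
    [{"skin_tone": "light"}] * 900
    + [{"skin_tone": "medium"}] * 80
    + [{"skin_tone": "dark"}] * 20
)

for group, (share, status) in audit_representation(training_set, "skin_tone").items():
    print(f"{group:>6}: {share:5.1%}  {status}")
```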

The idea that AI can judge beauty objectively, and thereby reduce subjective bias, is a corollary of the view that machines eliminate human error. By constructing bias as an individual fault—a “user error”—this view obscures the systemic ways in which bias is embedded in our societies and cultures.

As several academics explained in the Beauty.AI case, the belief that AI technology is impartial tends to transfer over to its products. The result is that we trust the outputs of machine learning and AI to be “neutral and scientific,” when they are necessarily contingent on the data that produced them.

Facial recognition software is one of the biggest areas in which these biases have been shown to rear their heads. In “AI, Ain’t I A Woman?” poet and coder Joy Buolamwini exposed the inability of various AI programs to correctly judge the gender of famous Black women like Oprah, Michelle Obama, and Serena Williams. Buolamwini was inspired to do the project while studying facial recognition software at MIT, where she realized the software only recognized her face when she wore a white mask.
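
The auditing approach Buolamwini helped popularize is, at its core, disaggregated evaluation: rather than reporting one overall accuracy number, you compute error rates separately for each demographic group. Here is a small sketch with invented numbers that loosely echo the kinds of gaps her research documented.

```python
def error_rates_by_group(records):
    """Compute the misclassification rate for each demographic group.

    Each record is (group, true_label, predicted_label). The data here is
    hypothetical, illustrating how a single aggregate accuracy can hide
    large gaps between groups.
    """
    totals, errors = {}, {}
    for group, truth, pred in records:
        totals[group] = totals.get(group, 0) + 1
        if truth != pred:
            errors[group] = errors.get(group, 0) + 1
    return {group: errors.get(group, 0) / totals[group] for group in totals}

# Toy predictions from a hypothetical gender classifier.
results = (
    [("lighter-skinned men", "man", "man")] * 98
    + [("lighter-skinned men", "man", "woman")] * 2
    + [("darker-skinned women", "woman", "woman")] * 65
    + [("darker-skinned women", "woman", "man")] * 35
)

for group, rate in error_rates_by_group(results).items():
    print(f"{group}: {rate:.0%} error rate")
```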

Part of the reason these oversights keep happening may be that Black and Latinx professionals remain underrepresented in the STEM roles that traditionally build AI: engineering, data science, computer science, and software development. But those numbers alone can’t explain why systemic bias and discrimination continue to creep into algorithms.

Instead, we should look to the constantly repeated, severely misinformed belief that AI exists to replace human intelligence rather than enhance it, by virtue of error reduction, automation, and resource savings. The belief that AI is a superpowered fix for the woes of human error and subjectivity is misguided, and it allows the brands that serve us to falsely market their products as neutral, unbiased, and devoid of flawed human judgment.

The problem with AI and HR

This belief has been particularly pernicious in the field of HR, where the race to automate has led to several well-known scandals in hiring and employment discrimination.

Sorting and qualifying résumés is one of the more onerous tasks in the hiring process, and AI engineers began tackling it over a decade ago, promising to suss out the right talent for the right jobs in a fraction of the time.

But because many of these algorithms are trained on data sets that reflect our systemically unequal workforces, they often end up making false statistical associations between identity markers and job performance. For example, one algorithm designed by a résumé-screening company several years ago found two factors on résumés to be most indicative of job performance: being named Jared and playing high school lacrosse.
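
It’s not hard to see how a pattern like that arises. If the historical data used for training happens to contain a handful of well-rated employees who share an irrelevant trait, a screening model will treat that trait as signal. A toy illustration with entirely invented records:

```python
# Hypothetical historical records: (first_name, played_lacrosse, rated_high_performer)
history = [
    ("Jared", True, True), ("Jared", True, True), ("Jared", False, True),
    ("Maria", False, True), ("Devon", False, False), ("Priya", False, False),
    ("Aisha", False, False), ("Chen", True, True), ("Sam", False, False),
]

def success_rate(records, predicate):
    """Share of matching records that were rated high performers."""
    matching = [record for record in records if predicate(record)]
    return sum(1 for record in matching if record[2]) / len(matching)

# A naive screening model would latch onto these gaps as "signal,"
# even though neither trait has anything to do with job performance.
print("high-performer rate, played lacrosse:", success_rate(history, lambda r: r[1]))
print("high-performer rate, did not play:   ", success_rate(history, lambda r: not r[1]))
print("high-performer rate, named Jared:    ", success_rate(history, lambda r: r[0] == "Jared"))
```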

Another high-profile story broke when Amazon announced it was retiring its own in-house, AI-powered résumé-screening tool. The tool had been designed not only to identify promising candidates but to score them, almost like a five-star rating on an Amazon product.

An internal study eventually found that candidates whose résumés mentioned the word “women” were being assigned lower scores. These résumés were penalized for mentioning women’s sports teams or colleges and downgraded in the ranking system, simply because past successful hires at the company had been largely male, and that was the only data the tool had to learn from.
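
Mechanically, the failure is easy to reproduce in miniature: train a text classifier on past hiring decisions that skew male, and tokens associated with women’s résumés pick up negative weight. Below is a hedged sketch using scikit-learn on fabricated data; it is not Amazon’s system, just the general mechanism.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated résumé snippets with past hiring outcomes (1 = hired).
# Because past hires skew male, "women" only ever appears alongside rejections.
resumes = [
    ("captain of the men's chess club, python developer", 1),
    ("men's rugby team, experienced java engineer", 1),
    ("software engineer, hackathon winner", 1),
    ("python developer, open source contributor", 1),
    ("captain of the women's chess club, python developer", 0),
    ("women's rugby team, experienced java engineer", 0),
    ("data analyst, statistics degree", 0),
    ("junior developer, bootcamp graduate", 0),
]

texts = [text for text, _ in resumes]
labels = [label for _, label in resumes]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# Inspect the learned weight on the token "women": it comes out negative,
# purely because of who happened to be hired in the past.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("weight for 'women':", round(weights["women"], 3))
print("weight for 'men':  ", round(weights["men"], 3))
```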

And it’s not only at the résumé-screening stage that AI can have harmful consequences in the hiring process. Jobs at all levels are advertised on platforms like Facebook and LinkedIn, which use sophisticated analytics to deliver ads to target audiences. These advertising platforms have to make “intelligent” predictions about which users will click on the ads they serve, and studies have shown that broadly targeted ads for some positions end up delivered overwhelmingly to particular gender and race demographics. In one study of Facebook job ads, for example, listings to work at taxi companies went to an audience that was approximately 75% Black.
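
Quantifying that kind of skew is straightforward: compare the demographic breakdown of who actually received an ad with each group’s share of the eligible audience. A small sketch with illustrative numbers, loosely echoing the taxi-ad finding:

```python
def delivery_skew(delivered_counts, baseline_shares):
    """Compare each group's share of ad impressions with its share of the
    eligible audience. Ratios far from 1.0 indicate skewed delivery."""
    total = sum(delivered_counts.values())
    skew = {}
    for group, count in delivered_counts.items():
        delivered_share = count / total
        skew[group] = delivered_share / baseline_shares[group]
    return skew

# Illustrative numbers for a broadly targeted taxi-driver listing.
impressions = {"Black": 7_500, "white": 1_800, "other": 700}
eligible_audience = {"Black": 0.27, "white": 0.55, "other": 0.18}

for group, ratio in delivery_skew(impressions, eligible_audience).items():
    print(f"{group}: delivered at {ratio:.1f}x the group's share of the audience")
```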

AI-powered predictions can only use the data that’s already out there. If we want to close racial gaps in employment, wealth, and income, then we need to acknowledge that AI alone is not going to get us there.

A brighter future for AI and talent acquisition

Allowing our past biases to reproduce in the real world through these technologies is unacceptable. Thankfully, there are people who are developing tools that use AI more responsibly in talent acquisition.

Plum, for example, is a startup that uses surveys, drills, and games to test hiring candidates for different attributes like attention to detail and risk aversion. This allows recruiters to judge candidates by their skills and “work personality traits,” rather than their adherence to a “successful hire” profile in a data set.

Pymetrics.ai is another company that allows recruiters to design custom assessment algorithms and audit them for bias before using them on live candidates. Like Plum, it also offers AI-powered analytics tools that score candidates on their “soft skills,” so recruiters are not forced to rely on résumés alone.

Both of these approaches help recruiters sidestep the résumé as the be-all and end-all standard by which they rate candidates, which in turn keeps AI from making faulty recommendations based on implicitly biased résumé data.

AI can also be used to ensure job postings use balanced, gender-neutral language. Other tools have been programmed to disregard all extraneous information on résumés besides hard skill sets, which allows companies to hire people from unconventional backgrounds (like the Syrian refugee dentist who knew how to code and got hired at a tech company in Texas).
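
The language check in particular is easy to picture: at its simplest, it is a scan of the posting against lists of gender-coded words. A minimal sketch, with a deliberately tiny and purely illustrative word list (real tools draw on much larger, research-backed lexicons):

```python
import re

# Tiny illustrative lexicon; production tools draw on published research
# into masculine- and feminine-coded language in job ads.
MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "aggressive"}
FEMININE_CODED = {"nurturing", "supportive", "collaborative", "empathetic"}

def flag_gender_coded_words(posting):
    """Return the gender-coded words found in a job posting."""
    words = set(re.findall(r"[a-z']+", posting.lower()))
    return {
        "masculine-coded": sorted(words & MASCULINE_CODED),
        "feminine-coded": sorted(words & FEMININE_CODED),
    }

posting = "We need a competitive, dominant rockstar engineer to join our team."
print(flag_gender_coded_words(posting))
```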

Other companies are designing AI-powered systems that ensure real human beings look at résumés thoroughly before they’re discarded. One tool routes every résumé disqualified by one recruiter to another recruiter for vetting, so final rejections only happen after a résumé has been given a fair shot with different people.
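
The routing logic behind that safeguard is simple to sketch. The reviewer objects and data shapes below are assumptions for illustration, not any vendor’s actual system.

```python
import random

class Reviewer:
    """Stand-in for a human recruiter; verdicts here are random for demonstration."""
    def __init__(self, name):
        self.name = name

    def review(self, resume):
        return random.choice(["advance", "reject"])

def final_decision(resume, reviewers):
    """Only discard a résumé after two different reviewers have rejected it."""
    first = random.choice(reviewers)
    if first.review(resume) == "advance":
        return f"advanced by {first.name}"
    # Route the initial rejection to a second, different reviewer for vetting.
    second = random.choice([r for r in reviewers if r is not first])
    if second.review(resume) == "advance":
        return f"rescued by {second.name} after {first.name} passed"
    return f"rejected by both {first.name} and {second.name}"

reviewers = [Reviewer("Ana"), Reviewer("Bo"), Reviewer("Chris")]
print(final_decision({"candidate": "example"}, reviewers))
```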

Which brings us back to the original point: AI exists to enhance human judgment, not replace it. When we’re looking at the potential of these new technologies to save costs and time through automation, we have to also consider the extent to which they might erase the potential for human corrective action.

AI is never infallible, because it is designed by highly fallible humans and can only learn from existing data sets. It will help us create a better future, one that yields more desirable and equitable data sets, but only if humans are there to analyze its outputs and help shape the direction in which they guide us.

And for the less technical among us, it is our collective responsibility to understand the role of AI in creating the products we use, the same way we demand to understand the ethical and safe sourcing of our clothes, food, and water. When it comes to intersections between race or identity and technology, we have to be particularly vigilant to ensure that the beautiful future AI promises isn’t reduced to a reiteration of past discrimination and outdated bias.

Porter Braswell is the Cofounder and Executive Chairman of Jopwell, Founder of Diversity Explained, author of Let Them See You, and host of the podcast Race at Work. Subscribe to his weekly content pieces at Diversity Explained to stay up to date on all new content.
