AI use continues its exponential growth across the globe. People are drawn to the technology’s promise and ease of use. However, flashing red warning lights abound, as the following dramatic story illustrates. As the Boston Globe reports, Rona Wang, an Asian-American MIT computer science student, uploaded a photo of herself to Playground AI and asked the application to create “a professional LinkedIn profile photo” of her.
According to the Globe:
“In just a few seconds, it produced an image that was nearly identical to her original selfie — except Wang’s appearance had been changed. It made her complexion appear lighter and her eyes blue, “features that made me look Caucasian,” she said.
“I was like, ‘Wow, does this thing think I should become white to become more professional?’” said Wang, who is Asian-American.
The photo, which gained traction online after Wang shared it on Twitter, has sparked a conversation about the shortcomings of artificial intelligence tools when it comes to race. It even caught the attention of the company’s founder, who said he hoped to solve the problem.
Now, [Wang] thinks her experience with AI could be a cautionary tale for others using similar technology or pursuing careers in the field.
Wang’s viral tweet came amid a recent TikTok trend where people have been using AI products to spiff up their LinkedIn profile photos, creating images that put them in professional attire and corporate-friendly settings with good lighting.”
Wang’s experience aligns with warnings published by the U.S. Department of Health and Human Services (HHS) in 2022 about AI’s potential discriminatory effects.
As HHS noted:
“Research suggests that overly relying upon any clinical algorithm, particularly without understanding the effects of its uses, may amplify and perpetuate racial and other biases.
Accordingly, the Department strongly cautions covered entities against overly relying upon a clinical algorithm, for example, by replacing or substituting the individual clinical judgment of providers with clinical algorithms.”
The American Medical Association and the American Academy of Family Physicians agree, taking the following respective positions:
“[H]ealth care AI should be a “tool to augment professional clinical judgment, not a technology to replace or override it,” and that organizations that implement AI systems “should vigilantly monitor [the systems] to identify and address adverse consequences.”
“AI-based technology is meant to augment decisions made by the user, not replace their clinical judgement or shared decision making… we recognize the limitations and pitfalls of this technology. It is critically important that AI-based solutions do not exacerbate racial and other inequities pervasive in our health care system. We strongly believe systematic approaches must be implemented to evaluate the development and implementation of AI/ML [machine learning] solutions into health care.”
Wang was taken aback by what AI did to her picture. While she is concerned about possible biases inherent in AI, she is not yet prepared to declare AI fundamentally and irretrievably racist in its use and application.
According to the Boston Globe:
“Wang admits that, when she tried using this particular AI, at first she had to laugh at the results. “It was kind of funny,” she said.
But it also spoke to a problem she’s seen repeatedly with AI tools, which can sometimes produce troubling results when users experiment with them.
To be clear, Wang said, that doesn’t mean the AI technology is malicious. “It’s kind of offensive,” she said, “but at the same time I don’t want to jump to conclusions that this AI must be racist…
Wang… said her widely shared photo may have just been a blip, and it’s possible the program randomly generated the facial features of a white woman. Or, she said, it may have been trained using a batch of photos in which a majority of people depicted on LinkedIn or in “professional” scenes were white.
It has made her think about the possible consequences of a similar misstep in a higher-stakes scenario, like if a company used an AI tool to select the most “professional” candidates for a job, and if it would lean toward people who appeared white.
“I definitely think it’s a problem,” Wang said. “I hope people who are making software are aware of these biases and thinking about ways to mitigate them.”
The people responsible for the program were quick to respond.
Just two hours after she tweeted her photo, Playground AI founder Suhail Doshi replied directly to Wang on Twitter. “The models aren’t instructable like that so it’ll pick any generic thing based on the prompt. Unfortunately, they’re not smart enough,” he wrote in response to Wang’s tweet.
In additional tweets, Doshi said Playground AI doesn’t “support the use-case of AI photo avatars” and that it “definitely can’t preserve identity of a face and restylize it or fit it into another scene like” Wang had hoped.
Reached by email, Doshi declined to be interviewed.
Instead, he replied to a list of questions with a question of his own: “If I roll a dice just once and get the number 1, does that mean I will always get the number one? Should I conclude based on a single observation that the dice is biased to the number 1 and was trained to be predisposed to rolling a 1?””
The objective reality remains that AI can produce biased, discriminatory results, reflecting the fact that AI is created by human beings, each of whom carries implicit biases. Unless those biases are accounted for during the technology’s development, AI will inevitably display the biases of its creators and developers.
Myriad studies and analyses, including several from MIT, confirm what experts have long said: AI can harbor biases that lie beneath the surface and are not readily perceivable until a user gets a result like Wang’s. This phenomenon has been observed for many years, the Globe reported: “The troves of data used to deliver results may not always accurately reflect various racial and ethnic groups, or may reproduce existing racial biases.”
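The data-skew mechanism the Globe describes can be illustrated with a minimal sketch. The data below is entirely hypothetical, and the “model” is just a majority-class baseline, but it shows how a system trained on an unrepresentative sample regresses toward the dominant group no matter who the individual input depicts:

```python
from collections import Counter

# Hypothetical, illustrative training set: a skewed sample of
# "professional" photos in which one group heavily dominates.
training_labels = ["white"] * 90 + ["asian"] * 5 + ["black"] * 5

def majority_baseline(labels):
    """Return the most common label in the training data --
    the default a naively trained model drifts toward."""
    return Counter(labels).most_common(1)[0][0]

# Whatever the user actually looks like, the skewed prior dominates
# the output, mirroring the "professional photo" result Wang received.
print(majority_baseline(training_labels))
```

This is not how Playground AI’s image model actually works internally; it is simply a compressed analogy for how unbalanced training data, left unexamined, becomes the system’s default answer.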
Examples of such studies can be found at the following links:
https://www.media.mit.edu/posts/how-i-m-fighting-bias-in-algorithms/
https://www.bostonglobe.com/ideas/2017/07/07/why-artificial-intelligence-far-too-human/jvG77QR5xPbpwBL2ApAFAN/story.html?p1=Article_Inline_Text_Link
https://news.mit.edu/2023/large-language-models-are-biased-can-logic-help-save-them-0303
Wang certainly learned from her AI experience. She wants people to realize that AI, despite its burgeoning popularity, remains a work in progress:
“Wang said she hopes her experience serves as a reminder that even though AI tools are becoming increasingly popular, it would be wise for people to tread carefully when using them.
“There is a culture of some people really putting a lot of trust in AI and relying on it,” she said. “So I think it’s great to get people thinking about this, especially people who might have thought AI bias was a thing of the past.””
Research — including at MIT — has found so-called AI bias in language models that associate certain genders with certain careers, or in oversights that cause facial recognition tools to malfunction for people with dark skin.
© Bruce L. Adelson 2023. All Rights Reserved The material herein is educational and informational only. No legal advice is intended or conveyed.
Bruce L. Adelson, Esq., is nationally recognized for his compliance expertise. Mr. Adelson is a former U.S. Department of Justice Civil Rights Division Senior Trial Attorney. Mr. Adelson is a faculty member at the Georgetown University School of Medicine and the University of Pittsburgh School of Law, where he teaches organizational culture, implicit bias, and cultural and civil rights awareness.
Mr. Adelson’s blogs are a Bromberg exclusive.