This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like it in your inbox first, sign up here.
My social media feeds this week have been dominated by two hot topics: OpenAI’s latest chatbot, ChatGPT, and the viral AI avatar app Lensa. I love playing around with new technology, so I gave Lensa a go.
I hoped to get results similar to my colleagues at MIT Technology Review. The app generated realistic and flattering avatars for them: think astronauts, warriors, and electronic music album covers.
Instead, I got tons of nudes. Out of 100 avatars I generated, 16 were topless, and another 14 had me in extremely skimpy clothes and overtly sexualized poses. You can read my story here.
Lensa creates its avatars using Stable Diffusion, an open-source AI model that generates images based on text prompts. Stable Diffusion is trained on LAION-5B, a massive open-source data set that was compiled by scraping images from the internet.
And because the internet is overflowing with images of naked or barely dressed women, and with pictures reflecting sexist and racist stereotypes, the data set is skewed toward these kinds of images.
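For the technically curious, the core prompt-to-image step looks roughly like the sketch below. It uses the open-source diffusers library and a public Stable Diffusion checkpoint; the model ID, prompt, and settings are my own illustration, since Lensa’s actual avatar pipeline and fine-tuning setup are not public.

    # Minimal sketch of text-to-image generation with Stable Diffusion via
    # Hugging Face's diffusers library. Illustrative only: Lensa's real
    # pipeline (fine-tuning on selfies, prompt templates, filters) isn't public.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # a public checkpoint, assumed here
        torch_dtype=torch.float16,
    ).to("cuda")

    prompt = "portrait of a person as an astronaut, detailed digital art"
    image = pipe(prompt).images[0]          # whatever biases the training data
    image.save("avatar.png")                # carries come along for free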
As an Asian woman, I thought I’d seen it all. I’ve felt icky after realizing a former date only dated Asian women. I’ve been in fights with men who think Asian women make great housewives. I’ve heard crude comments about my genitals. I’ve been mixed up with the other Asian person in the room.
Being sexualized by an AI was not something I expected, although it’s not surprising. Frankly, it was crushingly disappointing. My colleagues and friends got the privilege of being stylized into artful representations of themselves. They were recognizable in their avatars! I was not. I got images of generic Asian women clearly modeled on anime characters or video games.
Funnily enough, I found more realistic portrayals of myself when I told the app I was male. This probably applied a different set of prompts to the images. The differences are stark. In the images generated using male filters, I have clothes on, I look assertive, and, most important, I can recognize myself in the pictures.
“Women are associated with sexual content, whereas men are associated with professional, career-related content in any significant domain such as medicine, science, business, and so on,” says Aylin Caliskan, an assistant professor at the University of Washington who studies biases and representation in AI systems.
This kind of stereotyping is easy to observe with a new tool built by researcher Sasha Luccioni, who works at the AI startup Hugging Face, which lets anyone explore the different biases in Stable Diffusion.
The tool shows how the AI model depicts white men as doctors, architects, and designers, while women are depicted as hairdressers and maids.
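If you want a rough feel for what the tool surfaces, you can probe the model yourself by generating a handful of images per occupation prompt and comparing them side by side. The sketch below is my own illustration of that idea, not the code behind Luccioni’s tool; the prompts and sample counts are arbitrary.

    # Crude occupation-bias probe: generate a few samples per prompt and
    # eyeball who the model pictures in each job. Illustrative only; this is
    # not the implementation of the Hugging Face bias-explorer tool.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    for job in ["doctor", "architect", "hairdresser", "maid"]:
        for i in range(4):                        # a few samples per occupation
            img = pipe(f"a photo of a {job}").images[0]
            img.save(f"{job}_{i:02d}.png")        # compare the sets by eye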
But it’s not just the training data that’s to blame. The companies developing these models and apps make active choices about how they use that data, says Ryan Steed, a PhD student at Carnegie Mellon University who has studied biases in image-generation algorithms.
“Someone has to choose the training data, decide to build the model, decide to take certain steps to mitigate those biases or not,” he says.
Prisma Labs, the company behind Lensa, says all genders face “sporadic sexualization.” But to me, that’s not good enough. Somebody made the conscious decision to apply certain color schemes and scenarios and to highlight certain body parts.
In the short term, some obvious harms could result from these decisions, such as easy access to deepfake generators that create nonconsensual nude images of women or children.
But Aylin Caliskan sees even bigger long-term problems ahead. As AI-generated images, with their embedded biases, flood the internet, they will eventually become training data for future AI models. “Are we going to create a future where we keep amplifying these biases and marginalizing populations?” she says.
That’s a really scary thought, and I for one hope we give these issues due time and consideration before the problem gets even bigger and more embedded.
Deeper Learning
How US police use counterterrorism money to buy spy tech
Grant money meant to help cities prepare for terror attacks is being spent on “massive purchases of surveillance technology” for US police departments, shows a new report by the advocacy organizations Action Center on Race and the Economy (ACRE), LittleSis, MediaJustice, and the Immigrant Defense Project.
Shopping for AI-powered spy tech: For example, the Los Angeles Police Department used funding intended for counterterrorism to buy automated license plate readers worth at least $1.27 million, radio equipment worth upwards of $24 million, Palantir data fusion platforms (often used for AI-powered predictive policing), and social media surveillance software.
Why this matters: For various reasons, a lot of problematic tech ends up in high-stakes sectors such as policing with little to no oversight. For example, the facial recognition company Clearview AI offers “free trials” of its tech to police departments, which lets them use it without a purchasing agreement or budget approval. Federal counterterrorism grants don’t require as much public transparency and oversight. The report’s findings are yet another example of a growing pattern in which citizens are increasingly kept in the dark about police tech procurement. Read more from Tate Ryan-Mosley here.
Bits and Bytes
ChatGPT, Galactica, and the progress trap
AI researchers Abeba Birhane and Deborah Raji write that the “lackadaisical approaches to model release” (as seen with Meta’s Galactica) and the extremely defensive responses to critical feedback constitute a “deeply concerning” trend in AI right now. They argue that when models don’t “meet the expectations of those most likely to be harmed by them,” then “their products are not ready to serve these communities and do not deserve widespread release.” (Wired)
The new chatbots could change the world. Can you trust them?
People have been blown away by how coherent ChatGPT is. The trouble is, a significant amount of what it spews is nonsense. Large language models are no more than confident bullshitters, and we’d be wise to approach them with that in mind. (The New York Times)
Stumbling with their words, some people let AI do the talking
Despite the tech’s flaws, some people, such as those with learning difficulties, are still finding large language models useful as a way to help express themselves. (The Washington Post)
EU countries’ stance on AI rules draws criticism from lawmakers and activists
The EU’s AI law, the AI Act, is edging closer to being finalized. EU countries have approved their position on what the regulation should look like, but critics say many important issues, such as the use of facial recognition by companies in public places, were not addressed, and many safeguards were watered down. (Reuters)
Investors seek to profit from generative-AI startups
It’s not just you. Venture capitalists also think generative-AI startups such as Stability.AI, which created the popular text-to-image model Stable Diffusion, are the hottest things in tech right now. And they’re throwing stacks of money at them. (The Financial Times)