Ghost in the Machine’s Valerie Veatch isn’t drinking the AI Kool-Aid

Like many people, director Valerie Veatch was intrigued when OpenAI first released its Sora text-to-video generative AI model to the public in 2024. Though she didn’t fully understand the technology, she was curious about what it could do, and she saw that other artists were building online communities to share their new AI creations. The hope of connecting with people drew Veatch into the AI space, but once she was there, she was shocked to see how often the technology would generate images dripping with racism and sexism.

Veatch was even more unsettled by the way her new AI-enthusiast peers did not seem to care that the machine they rallied around spewed out hateful, bigoted garbage without being explicitly prompted to do so. The bizarre situation drove Veatch away from her early experimentation with gen AI. But it also inspired her to make Ghost in the Machine, a new documentary about the technologies and schools of thought that laid the groundwork for gen AI’s existence.

Instead of focusing on the potential (if highly improbable) benefits to society that gen AI accelerationists swear are just around the corner, Ghost in the Machine explores the technology’s history to explain why it works the way it does now. When I recently spoke with Veatch about the film, she told me that she wanted to chronicle gen AI’s genesis to give people a clear view of the very intense cycle of industry hype we’re currently living through. First, however, she had to cut through AI firms’ purposeful obfuscation of the entire concept.

“In order to use the phrase ‘artificial intelligence,’ we have to know what the fuck that phrase means,” Veatch told me over a video call. “The truth is, it doesn’t mean anything; it’s a marketing term and always has been.
It’s a completely misleading, stupid phrase that has taken on its own cultural meaning, and I think being really clear about the words we use and the meaning of those words is essential.”

As Ghost in the Machine repeatedly stresses, “artificial intelligence” was originally coined in 1956 by computer scientist John McCarthy when he was trying to secure more funding for his projects. But the documentary presents the term’s coinage as just one of many important points on a timeline that actually begins in Victorian-era England with the birth of eugenics. In addition to being Charles Darwin’s cousin, Francis Galton was the originator of eugenics — the racist and discredited belief that humanity can be improved through the systemic elimination of “inferior” (read: non-white) races.

While Galton made some genuinely useful contributions to academia, Veatch explained in our interview that it's important not to minimize how his deeply held white supremacist beliefs informed the era's social sciences. Galton and his fellow eugenicist and protégé Karl Pearson were not directly involved in the development of early computational machines. But Galton's foundational work with multidimensional modeling — a technique he used while measuring the attractiveness of African and European women — shaped Pearson's thinking as he developed statistical tools like logistic regression, one of the fundamental components of modern machine learning.

“Am I going to hug Sam Altman on camera? Is that a truthful film about this technology? That’s propaganda.”

Galton and Pearson helped normalize the idea that people of various races were fundamentally different in quantifiable ways. This kind of racist thinking led Galton and his peers to believe that human intelligence could be measured, and that human brains function very much like machines. That jump, Veatch says, played a major role in selling the public on the fantastical idea of artificial intelligence.

“What was really surprising to me during my initial dive into all of this was how, when you look at the question of superintelligence as a documentarian or journalist, it doesn’t take long before you smack your forehead into the low doorframe of race science, because it’s baked into this technology,” Veatch said, explaining that these concepts are “soaked” in eugenic thinking.

Rather than dwelling on the idea that gen AI models produce hateful ideology because they've been trained on it (a concept commonly known as "garbage in, garbage out," or GIGO), Ghost in the Machine uses its historical analysis to explain why the companies building this technology seem so uninterested in addressing its present-day issues. This historical context helped Veatch make sense of some of her own troubling experiences with gen AI, back when she was playing around with an early version of Sora in an artists' Slack. Veatch remembers the group as being a friendly, welcoming place right up until another member — a woman of color — began voicing concerns about the way the model whitewashed her every time she prompted it to generate images based on photos of herself.

“It kept her braids and it kept her fashion, but she was prompting herself into an art gallery,
which the program understood to be a ‘white space,’” Veatch explained. “My reaction was ‘what the fuck,’ and I tried explaining to the group how this was really a problem with the software itself.” No one else in the group engaged with her post. “This was a Slack where, normally, there are always like dozens of screaming koala emoji reactions on every post. But this time, there was nothing.”

Image: Independent Lens

Veatch took it upon herself to get in contact with OpenAI directly to alert the company about “how racist, sexist, and misogynistic the outputs [she] was seeing were — outputs where women would start growing extra tits and twerking after like two rounds of generating a scene.” Veatch thought OpenAI would see this as a critical bug worth fixing before encouraging more people to adopt Sora into their lives; instead the company brushed her concerns aside.

“The feedback I got was basically, ‘This is very cringe to be bringing up; there’s nothing we can do to change it,’” Veatch recalled.

That situation lit a fire within Veatch to learn why so many different generative AI models consistently behave in such ugly, troublesome ways. At first, she didn't think that having Zoom calls with the authors of white papers about the technology could be turned into a compelling documentary, but that changed as she began to see a clear line from Galton's eugenic statistics work to modern gen AI outfits.

The voices featured in Ghost in the Machine — a blend of AI researchers, historians, and critical theorists — make a compelling case that basically every facet of the AI space has been profoundly influenced by its historical connections to fields of science built to support discriminatory world views. When I asked Veatch if she had ever been interested in speaking directly with the heads of the companies Ghost in the Machine takes to task, she laughed. Getting that kind of access, she said, would require her to go through all kinds of ideological gymnastics and make compromises that would make her film complicit in gen AI’s harms.

“There’s the idea, you know, these people won’t trust just anyone,” Veatch said. “Yeah, no shit, and I certainly hope they wouldn’t trust me. I don’t want them in the film and they already speak enough to the media. Am I going to hug Sam Altman on camera? Is that a truthful film about this technology? That’s propaganda.”

Ghost in the Machine will be available to stream via Kinema from March 26th to March 28th before it airs on PBS sometime this fall.
