From Silicon Valley to Remote Villages: Human Consequences of AI
Given that hordes are already accessing ChatGPT, or AI in some form or other, Madhumita Murgia’s book on the human consequences of an “intelligence” that is being rapidly unfurled across domains – from buttressing legal arguments to diagnosing medical conditions, from creating artworks to predicting crop yields – is required reading. As the AI editor at the Financial Times, Murgia is neither a technophobe nor a Luddite. But neither does she subscribe to the fanaticism of the technocracy.
As a reflective, wide-thinking journalist digging into stories, lives and people, listening especially to voices on the margins, she unearths the unseen. For those who fear the nebulous, future effects of AI, she compels us to attend to the present: to the ways in which seismic changes in Silicon Valley or Seattle or Bangalore eddy into Nairobi or into tribal villages in Nandurbar, Maharashtra.
Cookie Trails to Surveillance Capitalism
Like many investigative journeys, this one too started by chance. Ten years ago, Murgia set out to explore “cookies,” only to discover the grim and somewhat sinister world of data brokers, of companies that monetize our online trails. Opting to profile the person she knew best – herself – she was taken aback by the report generated. While the profile wasn’t nuanced, it was shockingly accurate: “My life – and yours – is being converted into such a data package that is then sold on. Ultimately, we are the products.”
AI is the logical stepchild. Shoshana Zuboff, an American social psychologist and philosopher, calls this “surveillance capitalism”. As Murgia puts it, “It is altering the very experience of being human.”
Invisible Hands, Visible Impact
Take the non-profit Sama, which hires 2,800 people to draw shapes around images, sometimes adding labels. “Room after room is filled with twenty-somethings, young women and men, clicking, drawing, tapping. It requires precision and focus, but is repetitive, a game of shape-sorting, word-labelling and button-clicking.”
Ian and Benja are among the data workers Murgia interviews. They used to be rabble-rousers, pelting stones or stirring up disturbances for politicians. They now live in Kibera, a slum in central Nairobi, marked by open ditches and hoardings for M-PESA (a mobile money service), with boda-bodas running through the streets. “A pungent scent of slick garbage, heat and humans sits heavily in the still air.”
Sama laid fiber optic cables inside the slum, so that workers could work from home. Work starts at 7 am and lasts for eight hours. Each team has about 20 agents. They label toxic content for ChatGPT, so that OpenAI can block the corrupting, dirty stuff.
Murgia watches through glass doors as workers involved in “content moderation” sieve toxic content. Some later suffered nightmares and were put on antidepressants. In 2023, after facing lawsuits, Sama exited the content moderation business. Milagros Miceli, who researches this sector, notes that beyond the human rights issues, many of the workers have limited English and the instructions are translated into their languages, which can lead to more errors in AI models.
Empowering Refugees With Jobs
It’s not all bleak for data workers. Murgia also meets Hiba, an Iraqi refugee now settled in Bulgaria, who supports her family with her data work: flexible work she can handle from home, for an organization founded to help migrant and refugee families. Hiba, who tags impurities in images of oil samples, loves her job. It allows her to be a working mother while doing her household chores.
Murgia acknowledges both sides. She observes that working conditions for data labelers are not uniformly bad or toxic; the work has also lifted many workers and their families out of poverty. But if they display any agency or resistance, they are likely to land in trouble. Overall, she argues that these workers deserve more equitable pay, better working conditions and closer interaction with clients.
Deepfake Dread: Pixels of Pain
Helen Mort is a novelist, poet and memoirist who lives in Sheffield. Having grown up near the hills, she has long been a climber, morphing from a nervy adolescent into an equally tremulous parent. She talks about how women’s bodies have always been surveilled, both from outside and from within. In 2015, she deleted her Facebook account.
One morning, after dropping her toddler off at nursery, she sat down to clear some space and start working. Her husband, a professor, worked in their basement. Just then, a stranger knocked on her door and asked to talk to her inside, “for privacy.” She was afraid something had befallen her child. Instead, he told her that he had been browsing some porn sites, and that her pictures had been used on them.
At first, Helen was relieved: at least her child was okay. Then she was startled. She had never sent revealing photographs of herself to anyone. The stranger was clearly trying to help, and he directed her to a “revenge porn” website where she could lodge a complaint. That day, when she went to pick up her son, she burnt with unwarranted shame. She would write later: “Since the pictures, you have learned to retreat without moving a muscle.”
Later, she asked her husband to look at the website that carried her images. Her Facebook and Instagram photographs had been scraped, and her face affixed to other bodies, showing her in horrific situations and positions, including one of her being violently gang-raped. A comment was even attached: “this is my girlfriend Helen, I want to see her humiliated, used and abused, and here are some ideas.”
The images seeped into her, becoming pictures she could no longer “unsee.” They were deepfakes, created using AI from real photographs. The technology, however, is not gender neutral. According to a study by Sensity AI, 96% of deepfake videos depicting people in pornography without their consent were of women. As Murgia notes: “AI image tools are also being co-opted as weapons of misogyny.”
Laws, in most places, are yet to catch up with this new crime. Helen compares what happened to her with mountain climbing, with how glaciers stand upright even though they will eventually crash. She even wrote a poem about it.
She asked the website to take down the pictures, but they didn’t respond. She was baffled that it wasn’t illegal. For a while, she grew suspicious of all her male friends, wondering which one of them had done this to her, assuming, of course, that it was a man. “The encounter poisoned Helen’s relationships with men in general, and dimmed her outlook on the world for a long while.” She added tattoos to her body, to separate and distance herself from the deepfake versions of herself.
Deepfake Abuse and Digital Harassment
In 2019, an app called DeepNude morphed images of women – partially clothed – into completely naked versions. Though that app was temporarily suspended, the technology has since proliferated. The owner of a similar website added features, including the ability to add custom-sized breasts and other private parts. Murgia notes of that website’s owner: “Deepfakes, he promised, would move from passive entertainment into a participative sport: choose a woman, any woman, strip her down, transmute her, and put her on display. Rinse and repeat.”
Carrie Goldberg is a lawyer in Brooklyn, New York, who fights cases on behalf of girls and women who have been harassed in this way. Goldberg was herself once blackmailed by an ex-boyfriend, and she plays the role of confidante, friend, mother and therapist to her clients. Once, to support a law professor who was fighting online porn and getting viciously trolled, Goldberg sent her a “Mac lipstick in the fiery shade Lady Danger,” with a card that read: “This is what I wear when I want to feel like a warrior.”
Goldberg refuses to distinguish between online and offline incidents: the crimes and their aftermath are felt by real women. She avoids terms like “cyberstalking” or “online harassment.” “…I say no, it’s just harassment, it’s just stalking.” She is especially bothered that Big Tech companies like Google, Facebook and Instagram face no liability for the content posted on their platforms.
Even the Metaverse, as journalist Yinka Bokinni discovered, can be an unsafe and abusive space for women. At one point after entering the Metaverse, she was surrounded by seven male avatars who asked her to strip off her safety shield so that they could target her body. “It was the virtual equivalent of sexual assault,” she wrote in a Guardian article.
Baking Biases into AI
Karl Ricanek was used to being pulled over by cops for “driving while black.” He grew up as one of five kids in a pocket of Washington DC. He studied engineering and later became an academic at the University of North Carolina, Wilmington, where he trained computers to recognize faces, an ability that became exponentially stronger after deep learning became the technology of the day.
Ricanek, in particular, was focused on early disease detection. Small neuromuscular changes, for instance, could signal Parkinson’s at an early stage; the idea was to build a mirror that could warn viewers well before the disease became apparent. Gradually, however, the technology expanded in ways that Ricanek could not have foreseen: the terrorist-detection systems after 9/11 that relied on biometrics captured from millions of Iraqis and Afghans, or crime detection by police, or even clamping down on civil protests. Soon, the technology was being used on Black Lives Matter activists and on Afro-Caribbean people who took part in the annual Notting Hill Carnival.
At first Ricanek had dismissed the critics of such uses as “social justice warriors” – folks with no direct knowledge of the technology. But the tech already had inherent racial limitations that would prove problematic. It was far better at scanning white male faces – with an error rate below 1% – than the faces of darker-skinned women, for which error rates ratcheted up to 35%. False arrests were being made on the basis of faulty identifications by facial recognition systems, and most of those falsely humiliated in front of their families, or in public spaces, were darker-skinned men and women.
Knowing that black folks were going to suffer more from misidentifications, Ricanek felt that he could no longer play a distanced, behind-the-scenes role. “Now I have to ponder and think much deeper about what it is that I’m doing.”
Unveiling the Digital Dystopia
Murgia’s book does highlight the occasional bright spot, but in general it strikes an ominous note. It compels us to peer behind and below the “black box,” into Kafkaesque snares that might eventually trap us all. She quotes novelist Rana Dasgupta on what it would mean when everything about everyone was always known: “When every single truth was known – fully and simultaneously to everyone, everywhere – the lie disappeared from society. The consequence was that society also disappeared.”
References
Madhumita Murgia, Code Dependent: Living in the Shadow of AI, Picador India, 2024