Everything you need to know about gender inequality, all in one place.
Sponsored by Sage
Welcome to The Evidence, a supplement of the Impact newsletter designed to help you understand gender inequality – and show how we might fix it. I’m Josephine Lethbridge, a journalist from London dedicated to explaining the intersections of today’s crises – from the planetary to the personal – and empowering people to find the agency to act on them. Every month in The Evidence, I will draw on the latest research into gender inequality from the world of social sciences and make that knowledge accessible to you, whether you’re trying to change your community, your workplace or the laws of your country.

AI is sexist – can we change that?
by Josephine Lethbridge

It was back in 2016 that María Pérez-Ortiz realised how dangerous artificial intelligence could be. She’d been working with Spanish hospitals since 2011 to produce a machine learning model to help doctors decide which patients to prioritise on the liver transplant list. The idea was to build a more comprehensive model that not only took into account doctors’ existing decision-making process, but also better predicted compatibility between organ and patient, to lessen the likelihood of the patient’s body rejecting the liver. In 2014 the project finished and doctors began to use the model in hospitals. It was much more accurate at predicting compatibility than previous systems and the team were confident it would lead to fewer failed transplants, saving more lives. Pleased, Pérez-Ortiz and her colleagues went on to new roles. But two years on, the doctors came back to them, confused. They had realised that the model was assigning hardly any transplants to women. “This was shocking to me, having participated in building that system,” she said. Her team looked into it and realised that the data they had used to build the model was itself significantly biased against women. Scientific literature going back decades hadn’t spotted that biomarkers and other physiological factors involved in organ transplantation were different for women and men. This is unsurprising in the context of the male-centric history of medical research. “I realised then that AI is simply a microcosm that reflects the world,” Pérez-Ortiz said. “In this case, it replicated, even exacerbated, the bias that was already there … My education had never prepared me for this.” She went on to dedicate her career to AI ethics and sustainability. Today, the idea that AI replicates and often amplifies inequality of all kinds is far more prevalent. And in just the last few months, there has been a flurry of new research into the issue.
Here’s The Evidence

Widely used large language models, including OpenAI’s GPT-3.5 and Meta’s Llama 2, demonstrate “unequivocal evidence of bias against women”, according to a recent UNESCO report on which Pérez-Ortiz was a co-author. Female names were found to be associated with “home”, “family”, “children” and “marriage”, while male names were associated with “business”, “executive”, “salary” and “career”. Other research has found similarly strong biases in AI-generated reference letters, news stories and healthcare advice. The same problem is evident in models that create AI-generated images from written prompts. Prompt an AI to create images of a person cleaning and you are far more likely to get back pictures of a woman than of a man. And of course this is not unique to sexism – research has also highlighted rampant racism, ableism and other forms of discrimination in AI. AI models are being trained on biased data – whether that’s text or images scraped from the internet, or peer-reviewed research that has historically treated white men as default (for more on this, read Caroline Criado Perez’s excellent book, Invisible Women). This is problematic in itself, but to make things worse, the AIs often amplify this bias. The effects could be enormous: AI systems are already being used to write news articles and recommendation letters, to decide who gets hired and who sees which advertisements, and to help make life-and-death decisions within medical and justice systems. These systems do have the potential to do good – but the power they have to deeply entrench inequality is a terrifying prospect. And while companies are working to fix these issues, progress is slow and mired in controversy. Google’s recent attempt to fix bias in its Gemini image generator offered up images of racially diverse Nazis – probably the result of applying a one-size-fits-all approach to a very complex problem.
And in Kenya, the human moderators hired to review content used by OpenAI have complained of psychological trauma and poor working conditions. As well as interrogating the nature of the data that trains these models, we also need to think about the power of the assumptions and unconscious bias of the people building them. It’s crucial, then, that those designing, building, deploying and regulating AI are representative of the wider population. So it’s concerning that many of the biggest tech companies are cutting diversity, equality and inclusion programmes. In 2018, just 22% of those working in AI were women. Women also experience a lack of recognition and career progression relative to men in the field. As things stand, many more women than men will lose work due to AI. In high-income countries, the share of women whose jobs could be automated (7.8%) is more than double that of men (2.9%). Recent research by the Institute for Public Policy Research has found that up to eight million UK jobs could be at risk, with far more women than men affected. This doesn’t even take into account the more insidious effects of AI-driven recruitment models disadvantaging women. Other studies have raised worries about women falling behind their male peers at work because they aren’t using AI as much in their roles. (Perhaps in part because they mistrust the technology, which seems a cruel twist of fate.) Looking for some positives in the face of all this, I spoke to six experts to see what hope they have that we can swerve a world in which AI makes gender inequality even worse.

“This all depends on us”

Everyone I spoke to emphasised that the most important thing to understand is that behind the chatbots, image generators and machine learning systems lie human beings. AI is trained, fed and designed by people. The technology (like all others) is therefore not neutral, nor is it without politics.
As such, all agreed that if carefully designed and deployed, AI has great potential as a tool for empowerment: reducing drudgery and time poverty; improving health outcomes for all; tackling climate change and environmental destruction. But is this the current trajectory? Bhargav Srinivasa Desikan, one of the researchers who looked at AI-driven job losses, told me: “If we treat AI as we’ve treated all major technologies for the last century, then I am not too optimistic. Just consider what happened with social media. That being said, the future is not being written without our control.” Revi Sterling, Senior Technical Director for Digital Inclusion at CARE, expressed similar misgivings. The problem is that “most of the AI out there isn’t designed for end users or women,” she told me. “They are commercial, they are designed to save 1% in manufacturing and banking and commerce.” “In the world of AI, whoever owns the data has the power,” said Kutoma Wakunuma, Co-Director at the Centre for Computing and Social Responsibility at De Montfort University. “Data is the new oil. And all of it is concentrated in a few hands: big tech corporations in the Global North.” The problem is therefore predominantly political and economic, not technological. Recognising this is crucial – and potentially empowering. Many of the experts I spoke to were encouraged by the discussions we are already having about bias in AI. “Recognizing problems is the first step,” said Yixin Wan, a PhD researcher who has published on the topic of bias in text-to-image and large language models. “But I do think we need to extend the scope of analysis to a broader definition of gender, including queer groups, and to improving the geocultural understanding of models.
Because right now they are predominantly Western.” Erin Young, who published research on the lack of investment in female-led start-ups, said: “As long as we – and by we, I mean everyone – are attentive to this, we can work towards ways to mitigate these encoded biases.” Pérez-Ortiz agrees: “This all depends on us. It’s humans who dictate what AI models are built and what topics of research are explored in the field.”

Research Round-up

Here’s what else is making the news in gender inequality research:
Get in touch

We’d love to know what you think about The Evidence. Do you have any suggestions about format or content? What topics relating to research into gender equity are you particularly interested in hearing about? What insights would be particularly useful to you?

About The Evidence

The Evidence is a brand new supplement to the Impact newsletter designed to help you understand gender inequality – and show how we might fix it. Impact is a weekly newsletter of feminist journalism, dedicated to the rights of women and gender-diverse people worldwide. This is the English version of our newsletter; you can read the French one here.