University of Cambridge - Kanta Dihal /taxonomy/people/kanta-dihal en Cinema has helped ‘entrench’ gender inequality in AI /stories/whomakesAI <div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Study finds that just 8% of all depictions of AI professionals from a century of film are women – and half of these are shown as subordinate to men.</p> </p></div></div></div> Mon, 13 Feb 2023 10:17:06 +0000 fpjl2 236801 at Cambridge awarded €1.9m to stop AI undermining ‘core human values’ /research/news/cambridge-awarded-eu1-9m-to-stop-ai-undermining-core-human-values <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/aistorythis.jpg?itok=IDNjdEhP" alt="Artificial intelligence " title="Artificial intelligence , Credit: Getty Images" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Artificial intelligence is transforming society as algorithms increasingly dictate access to jobs, insurance, justice and medical treatment, as well as shaping our daily interactions with friends and family. </p>&#13; &#13; <p>As these technologies race ahead, we are starting to see unintended social consequences: algorithms that promote everything from racial bias in healthcare to the misinformation eroding faith in democracies. </p>&#13; &#13; <p>Researchers at the University of Cambridge’s <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence</a> (LCFI) have now been awarded nearly two million euros to build a better understanding of how AI can undermine “core human values”.</p>&#13; &#13; <p>The grant will allow LCFI and its partners to work with the AI industry to develop anti-discriminatory design principles that put ethics at the heart of technological progress. </p>&#13; &#13; <p>The LCFI team will create toolkits and training for AI developers to prevent existing structural inequalities – from gender to class and race – from becoming embedded into emerging technology and sending such social injustices into hyperdrive. </p>&#13; &#13; <p>The donation, from the German philanthropic foundation Stiftung Mercator, is part of a package of close to €4 million that will see the Cambridge team – including social scientists and philosophers as well as technology designers – working with the University of Bonn. </p>&#13; &#13; <p>The new research project, “Desirable Digitalisation: Rethinking AI for Just and Sustainable Futures”, comes as the European Commission negotiates its Artificial Intelligence Act, which has ambitions to ensure AI becomes more “trustworthy” and “human-centric”. The Act will require AI systems to be assessed for their impact on fundamental rights and values. </p>&#13; &#13; <p>“There is a huge knowledge gap,” said Dr Stephen Cave, Director of LCFI. 
“No one currently knows what the impact of these new systems will be on core values, from democratic rights to the rights of minorities, or what measures will help address such threats.” </p>&#13; &#13; <p>“Understanding the potential impact of algorithms on human dignity will mean going beyond the code and drawing on lessons from history and political science,” Cave said.</p>&#13; &#13; <p>LCFI made headlines last year when it launched the world’s only <a href="https://www.lcfi.ac.uk/master-ai-ethics/">Master’s programme</a> dedicated to teaching AI ethics to industry professionals. This grant will allow it to develop new research strands, such as investigations of human dignity in the “digital age”. “AI technologies are leaving the door open for dangerous and long-discredited pseudoscience,” said Cave. </p>&#13; &#13; <p>He points to facial recognition software that claims to identify “criminal faces”, arguing that such assertions are akin to Victorian ideas of phrenology – the notion that a person’s character could be detected by skull shape – and its associated scientific racism. </p>&#13; &#13; <p>Dr Kanta Dihal, who will co-lead the project, is to investigate whose voices actually shape society’s visions of a future with AI. “Currently our ideas of AI around the world are conjured by Hollywood and a small rich elite,” she said. </p>&#13; &#13; <p>The LCFI team will include Cambridge researchers Dr Kerry Mackereth and Dr Eleanor Drage, co-hosts of the podcast “<a href="https://www.gender.cam.ac.uk/technology-gender-and-intersectionality-research-project/the-good-robot-podcast">The Good Robot</a>”, which explores whether we can have ‘good’ technology and why feminism matters in the tech space. </p>&#13; &#13; <p>Mackereth will be working on a project that explores the relationship between anti-Asian racism and AI, while Drage will be looking at the use of AI for recruitment and workforce management. </p>&#13; &#13; <p>“AI tools are going to revolutionize hiring and shape the future of work in the 21st century. Now that millions of workers are exposed to these tools, we need to make sure that they do justice to each candidate, and don’t perpetuate the racist pseudoscience of 19th-century hiring practices,” said Drage. </p>&#13; &#13; <p>“It’s great that governments are now taking action to ensure AI is developed responsibly,” said Cave. “But legislation won’t mean much unless we really understand how these technologies are impacting on fundamental human rights and values.”</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Work at the Leverhulme Centre for the Future of Intelligence will aim to prevent the embedding of existing inequalities – from gender to class and race – in emerging technologies.  
</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">AI technologies are leaving the door open for dangerous and long-discredited pseudoscience</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Stephen Cave</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/" target="_blank">Getty Images</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Artificial intelligence </div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Wed, 09 Feb 2022 08:52:03 +0000 fpjl2 229781 at Whiteness of AI erases people of colour from our ‘imagined futures’, researchers argue /research/news/whiteness-of-ai-erases-people-of-colour-from-our-imagined-futures-researchers-argue <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/34327888294c17b7bd3833k.jpg?itok=rNwtPbuT" alt="Sophia, Hanson Robotics Ltd. speaking at the AI for GOOD Global Summit, Geneva" title="Sophia, Hanson Robotics Ltd. 
speaking at the AI for GOOD Global Summit, Geneva, Credit: ITU/R.Farrell" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>This is according to experts at the University of Cambridge, who suggest that current portrayals and stereotypes about AI risk creating a “racially homogeneous” workforce of aspiring technologists, building machines with bias baked into their algorithms.</p>&#13; &#13; <p>They say that cultural depictions of AI as White need to be challenged, as they do not offer a “post-racial” future but rather one from which people of colour are simply erased.</p>&#13; &#13; <p>The researchers, from Cambridge’s <a href="https://www.lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence (CFI)</a>, say that AI, like other science fiction tropes, has always reflected the racial thinking in our society.</p>&#13; &#13; <p>They argue that there is a long tradition of crude racial stereotypes when it comes to extraterrestrials – from the “orientalised” alien of Ming the Merciless to the Caribbean caricature of Jar Jar Binks.</p>&#13; &#13; <p>But artificial intelligence is portrayed as White because, unlike species from other planets, AI has attributes used to “justify colonialism and segregation” in the past: superior intelligence, professionalism and power.</p>&#13; &#13; <p>“Given that society has, for centuries, promoted the association of intelligence with White Europeans, it is to be expected that when this culture is asked to imagine an intelligent machine it imagines a White machine,” said Dr Kanta Dihal, who leads CFI’s ‘<a href="https://www.lcfi.ac.uk/research/project/decolonising-ai">Decolonising AI</a>’ initiative.</p>&#13; &#13; <p>“People trust AI to make decisions. Cultural depictions foster the idea that AI is less fallible than humans. In cases where these systems are racialised as White, that could have dangerous consequences for humans that are not,” she said.</p>&#13; &#13; <p>Together with her colleague Dr Stephen Cave, Dihal is the author of a new paper on the case for decolonising AI, published today in the journal <a href="https://dx.doi.org/10.1007/s13347-020-00415-6"><em>Philosophy &amp; Technology</em></a>.</p>&#13; &#13; <p>The paper brings together recent research from a range of fields, including Human-Computer Interaction and Critical Race Theory, to demonstrate that machines can be racialised, and that this perpetuates “real world” racial biases.</p>&#13; &#13; <p>This includes work on how robots are seen to have distinct racial identities, with Black robots receiving more online abuse, and a study showing that people feel closer to virtual agents when they perceive shared racial identity. </p>&#13; &#13; <p>“One of the most common interactions with AI technology is through virtual assistants in devices such as smartphones, which talk in standard White middle-class English,” said Dihal. “Ideas of adding Black dialects have been dismissed as too controversial or outside the target market.”</p>&#13; &#13; <p>The researchers conducted their own investigation into search engines and found that all non-abstract results for AI had either Caucasian features or were literally the colour white.</p>&#13; &#13; <p>A typical example of AI imagery adorning book covers and mainstream media articles is Sophia: the hyper-Caucasian humanoid declared an “innovation champion” by the UN Development Programme. 
But this is just a recent iteration, say researchers.</p>&#13; &#13; <p>“Stock imagery for AI distills the visualizations of intelligent machines in western popular culture as it has developed over decades,” said Cave, Executive Director of CFI.</p>&#13; &#13; <p>“From Terminator to Blade Runner, Metropolis to Ex Machina, all are played by White actors or are visibly White onscreen. Androids of metal or plastic are given white features, such as in I, Robot. Even disembodied AI – from HAL-9000 to Samantha in Her – have White voices. Only very recently have a few TV shows, such as Westworld, used AI characters with a mix of skin tones.”</p>&#13; &#13; <p>Cave and Dihal point out that even works clearly based on slave rebellion, such as Blade Runner, depict their AIs as White. “AI is often depicted as outsmarting and surpassing humanity,” said Dihal. “White culture can’t imagine being taken over by superior beings resembling races it has historically framed as inferior.”</p>&#13; &#13; <p>“Images of AI are not generic representations of human-like machines: their Whiteness is a proxy for their status and potential,” added Dihal.</p>&#13; &#13; <p>“Portrayals of AI as White situate machines in a power hierarchy above currently marginalized groups, and relegate people of colour to positions below that of machines. As machines become increasingly central to automated decision-making in areas such as employment and criminal justice, this could be highly consequential.”</p>&#13; &#13; <p>“The perceived Whiteness of AI will make it more difficult for people of colour to advance in the field. If the developer demographic does not diversify, AI stands to exacerbate racial inequality.”</p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>The overwhelming ‘Whiteness’ of artificial intelligence – from stock images and cinematic robots to the dialects of virtual assistants – removes people of colour from humanity’s visions of its high-tech future.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">If the developer demographic does not diversify, AI stands to exacerbate racial inequality</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Kanta Dihal</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.flickr.com/photos/itupictures/34327888294" target="_blank"> ITU/R.Farrell</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Sophia, Hanson Robotics Ltd. 
speaking at the AI for GOOD Global Summit, Geneva</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution">Attribution</a></div></div></div> Thu, 06 Aug 2020 07:08:06 +0000 fpjl2 216922 at