University of Cambridge – Stefanie Ullmann (/taxonomy/people/stefanie-ullmann)

The codemakers (/stories/darwin-lectures-2025)
The 2025 Darwin Lecture series looks at codes, be they computational, mathematical, biological, linguistic or even musical. (24 January 2025)

Part IV: Celebrating the Cambridge Women Changing the World (/stories/celebrating-cambridge-women-part-four)
Part IV – The Finale: To mark International Women's Day and Women's History Month, the University is delighted to shine a light on some of the incredible women living and working here at Cambridge. (30 March 2023)

Online hate speech could be contained like a computer virus, say researchers (/research/news/online-hate-speech-could-be-contained-like-a-computer-virus-say-researchers)
17 December 2019

Artificial intelligence is being developed that will allow advisory 'quarantining' of hate speech in a manner akin to malware filters – offering users a way to control exposure to hateful content without resorting to censorship.

[Image: screenshot of the system]

The spread of hate speech via social media could be tackled using the same 'quarantine' approach deployed to combat malicious software, according to University of Cambridge researchers.

Definitions of hate speech vary depending on nation, law and platform, and simply blocking keywords is ineffectual: graphic descriptions of violence need not contain obvious ethnic slurs to constitute racist death threats, for example.

As such, hate speech is difficult to detect automatically. It has to be reported by those exposed to it, after the intended "psychological harm" has been inflicted, with armies of moderators required to judge every case.

This is the new front line of an ancient debate: freedom of speech versus poisonous language.

Now, an engineer and a linguist have published a proposal in the journal Ethics and Information Technology (https://link.springer.com/article/10.1007/s10676-019-09516-z) that harnesses cyber security techniques to give control to those targeted, without resorting to censorship.

Cambridge language and machine learning experts are using databases of threats and violent insults to build algorithms that can score the likelihood of an online message containing forms of hate speech.

As these algorithms are refined, potential hate speech could be identified and "quarantined". Users would receive a warning alert with a "Hate O'Meter" – the hate speech severity score – the sender's name, and an option to view the content or delete it unseen.

This approach is akin to spam and malware filters, and researchers from the 'Giving Voice to Digital Democracies' project believe it could dramatically reduce the amount of hate speech people are forced to experience. They are aiming to have a prototype ready in early 2020.
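The article does not publish an implementation, but the quarantine workflow it describes is easy to sketch. Below is a minimal, hypothetical Python sketch: the function and field names are invented for illustration, and hate_score is a crude stand-in for the researchers' trained classifier, not their actual model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreenedMessage:
    delivered: bool
    hate_score: float      # the "Hate O'Meter" reading: 0.0 (benign) to 1.0 (severe)
    sender: str
    body: Optional[str]    # None while the message is held in quarantine, unseen

def hate_score(text: str) -> float:
    """Stand-in for the trained classifier described in the article.
    This keyword heuristic only illustrates the interface (text in,
    severity score out); it is not how the real system would work."""
    flagged = {"placeholder_slur"}  # hypothetical vocabulary, not real training data
    words = text.lower().split()
    hits = sum(word.strip(".,!?") in flagged for word in words)
    return min(1.0, 5.0 * hits / max(len(words), 1))

def screen_message(sender: str, text: str, sensitivity: float = 0.6) -> ScreenedMessage:
    """Quarantine step: a message scoring above the recipient's sensitivity
    threshold is held back, with only the score and the sender shown, until
    the recipient chooses to view it or delete it unseen."""
    score = hate_score(text)
    if score >= sensitivity:
        return ScreenedMessage(delivered=False, hate_score=score, sender=sender, body=None)
    return ScreenedMessage(delivered=True, hate_score=score, sender=sender, body=text)
```

The sensitivity parameter mirrors the 'dial' mentioned later in the piece: lowering it quarantines more aggressively, raising it lets more through, and the final decision always stays with the recipient.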
"Hate speech is a form of intentional online harm, like malware, and can therefore be handled by means of quarantining," said co-author and linguist Dr Stefanie Ullmann. "In fact, a lot of hate speech is actually generated by software such as Twitter bots."

"Companies like Facebook, Twitter and Google generally respond reactively to hate speech," said co-author and engineer Dr Marcus Tomalin. "This may be okay for those who don't encounter it often. For others it's too little, too late."

"Many women and people from minority groups in the public eye receive anonymous hate speech for daring to have an online presence. We are seeing this deter people from entering or continuing in public life, often those from groups in need of greater representation," he said.

Former US Secretary of State Hillary Clinton recently told a UK audience (https://www.youtube.com/watch?v=Sz7eDCDpw-Y&feature=youtu.be) that hate speech posed a "threat to democracies", in the wake of many women MPs citing online abuse (https://www.theguardian.com/politics/2019/oct/31/alarm-over-number-female-mps-stepping-down-after-abuse) as part of the reason they will no longer stand for election.

Meanwhile, in a Georgetown University address (https://about.fb.com/news/2019/10/mark-zuckerberg-stands-for-voice-and-free-expression/), Facebook CEO Mark Zuckerberg spoke of "broad disagreements over what qualifies as hate" and argued: "we should err on the side of greater expression".

The researchers say their proposal is not a magic bullet, but it does sit between the "extreme libertarian and authoritarian approaches" of either entirely permitting or prohibiting certain language online.

Importantly, the user becomes the arbiter. "Many people don't like the idea of an unelected corporation or micromanaging government deciding what we can and can't say to each other," said Tomalin.

"Our system will flag when you should be careful, but it's always your call. It doesn't stop people posting or viewing what they like, but it gives much-needed control to those being inundated with hate."

In the paper, the researchers refer to detection algorithms achieving 60% accuracy – not much better than chance. Tomalin's machine learning lab has since raised this to 80%, and he anticipates continued improvement of the mathematical modelling.

Meanwhile, Ullmann gathers more 'training data': verified hate speech from which the algorithms can learn. This helps refine the 'confidence scores' that determine a quarantine and the subsequent Hate O'Meter read-out, which could be set like a sensitivity dial depending on user preference.

A basic example might involve a word like 'bitch': a misogynistic slur, but also a legitimate term in contexts such as dog breeding. It is the algorithmic analysis of where such a word sits syntactically – the types of surrounding words and the semantic relations between them – that informs the hate speech score.

"Identifying individual keywords isn't enough; we are looking at entire sentence structures and far beyond. Sociolinguistic information in user profiles and posting histories can all help improve the classification process," said Ullmann.
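To make the context point concrete, here is a toy illustration in Python using scikit-learn. Word n-grams are only a crude proxy for the syntactic and semantic analysis the researchers describe, and the six training sentences are invented; the point is simply that a context-aware model can score the same keyword differently in different settings.

```python
# Toy illustration only: n-gram features let a linear classifier weigh the
# words around a term, so one keyword can score differently by context.
# This is NOT the project's model, which trains on verified hate speech
# corpora and uses richer syntactic/semantic analysis.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented mini-corpus; a real classifier would learn from large databases
# of verified threats and insults, as the article describes.
texts = [
    "the bitch whelped six healthy puppies",         # dog breeding: benign
    "the breeder registered the bitch last spring",  # benign
    "shut up you stupid bitch",                      # abusive
    "nobody cares what you think bitch",             # abusive
    "have a lovely day everyone",                    # benign
    "you worthless idiot i will find you",           # abusive
]
labels = [0, 0, 1, 1, 0, 1]  # 1 = hateful/abusive

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# The same word in two contexts; the classifier also sees its neighbours.
for sentence in ["the bitch won best in show", "you absolute bitch"]:
    print(f"{sentence!r} -> P(hateful) = {model.predict_proba([sentence])[0][1]:.2f}")
```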
Added Tomalin: "Through automated quarantines that provide guidance on the strength of hateful content, we can empower those at the receiving end of the hate speech poisoning our online discourses."

However, the researchers, who work in Cambridge's Centre for Research into Arts, Humanities and Social Sciences (CRASSH) (https://www.crassh.cam.ac.uk/), say that – as with computer viruses – there will always be an arms race between hate speech and the systems for limiting it.

The project has also begun to look at "counter-speech": the ways people respond to hate speech. The researchers intend to feed into debates around how virtual assistants such as 'Siri' should respond to threats and intimidation.

The work has been funded by the International Foundation for the Humanities and Social Change (https://hscif.org/).