AI tool designed to identify coercive language patterns receives Home Office funding
A seed project that could help police identify patterns of coercive control facilitated by technology has received a cash boost of £115,000, courtesy of the UK Home Office.
The project, for which De Montfort University Leicester (DMU) is a consultant, aims to create an artificial intelligence (AI) tool that can assess the risks of potential perpetrators of domestic violence by analyzing the language they use with their partners on text messages and social media.
Professor Vanessa Bettinson with Nonso Nnamoko
It is hoped that the AI tool, currently under development, will help police quickly scan conversations for red flags using keywords and patterns, without manually reading a victim's phone.
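To illustrate the kind of keyword-and-pattern scan described, here is a minimal, purely illustrative sketch in Python. The patterns, messages, and function names below are invented for illustration; the project's actual model and its red-flag criteria have not been published.

```python
import re

# Hypothetical red-flag patterns a first-pass screen might search for.
# Real criteria would be derived from the project's training data, not hand-written.
RED_FLAG_PATTERNS = [
    r"\bwho (were|are) you with\b",
    r"\bsend me your password\b",
    r"\byou('re| are) not allowed\b",
]

def flag_messages(messages):
    """Return only the messages matching any red-flag pattern."""
    flagged = []
    for msg in messages:
        if any(re.search(p, msg, flags=re.IGNORECASE) for p in RED_FLAG_PATTERNS):
            flagged.append(msg)
    return flagged

conversation = [
    "Who were you with last night?",
    "Dinner at 7 works for me.",
    "You're not allowed to go out without telling me.",
]
print(flag_messages(conversation))  # only the first and third messages are flagged
```

A screen like this lets an officer review a handful of flagged messages rather than an entire conversation log, which is the privacy benefit the researchers describe.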
Professor Vanessa Bettinson, who teaches criminal law at DMU and is one of the project’s lead consultants, believes the tool could encourage more victims of domestic violence and coercive control to come forward.
She said: “As technology has evolved, so has abusive and coercive behavior. Abusers can now control lighting around the house and monitor door and central heating systems, leaving victims vulnerable to abusive behavior even when home alone.
“However, we still believe that language holds the key to identifying those at risk of coercion or abuse. This funding from the Home Office allows us to put that theory to the test and see whether an AI analysis tool is feasible.
“The police forces we interviewed as part of our research are very supportive of the tool’s development and want to see this type of technology introduced, in the hope that it can increase the chances of identifying more perpetrators of domestic violence at an early stage.
“While we are only at the beginning of the project, I am convinced that what we are developing will protect the privacy of the victim. Having the AI tool analyze conversations instead of an individual may encourage more victims to come forward, as their private conversations will not be manually scrutinized.”
Nonso Nnamoko, who is a lecturer in computer science at Edge Hill University and leads the development of the AI tool, explained that the datasets were taken from court transcripts and from social media posts collected online. The datasets will be used to train the AI to identify language that meets the legal threshold for abuse in text messages or social media posts.
He said: “The AI algorithm will give victims greater privacy and it will also be a much faster process. A survivor will provide their mobile phone, which will give us their conversation log, and the AI will be able to analyze it. The AI model is 88% accurate so far, and we hope to reach a threshold of 96% to 98%.”
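A headline figure like "88% accurate" typically means the fraction of held-out, human-labelled messages the model classifies correctly. The sketch below shows that calculation only; the prediction and ground-truth labels are invented, not the project's data.

```python
def accuracy(predicted, actual):
    """Fraction of predictions that match the human-annotated labels."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Hypothetical model outputs vs. ground truth (1 = flagged as abusive, 0 = not).
predicted = [1, 0, 1, 1, 0, 0, 1, 0]
actual    = [1, 0, 1, 0, 0, 0, 1, 0]
print(accuracy(predicted, actual))  # 7 of 8 correct
```

Raising this figure from 88% toward the 96–98% target would mean misclassifying far fewer messages, which matters when each false positive or false negative concerns a potential abuse case.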
London South Bank University will undertake the feasibility test and document its findings in the research team’s Home Office report.
Once the feasibility study is completed, the coalition of universities hopes to secure additional funding to conduct more testing, eventually reaching a point where police forces can regularly offer this program to victims of domestic violence.
Posted on Friday, May 6, 2022