Alana Daly Mulligan looks at the use of Artificial Intelligence by police and asks: is our privacy worth our protection, and how protected are we?

Content Warning: This article contains a discussion of image-based sexual abuse, sexual violence & paedophilia which some readers may find upsetting. 

 

Whether or not you agree with police, prisons, or the rest of our often-confusing justice system, some people do bad things that threaten and undermine our right to live safely in society. The law is in place to make sure that people's dignity and existence are protected, no matter how big or small the crime. From the introduction of fingerprinting to more advanced means of matching DNA from crime scenes, the way we understand conviction, law, and order has shifted dramatically thanks to the development of science, and in particular the exponential growth of Artificial Intelligence over the last three decades. You can unlock your phone with your face and ask it to play music or turn on the lights; it's all very Back to the Future II. Facial-recognition software, in particular, is a controversial development, one that comes with many caveats and questions about what right our police have to this data, and how responsibly they will actually use it.

One of the main problems that has arisen with facial-recognition software is its often racist results. We've seen this in commercial releases: systems failing to distinguish non-white faces from one another show that this technology is trained on white-washed biases rather than being a product of diversity. The products being sold to police departments are not 100% free of these biases either, even though they should be.

In the United States during the Black Lives Matter protests in Summer 2020, facial-recognition software was used by a number of police forces. No longer limited to Police Department or FBI image databases, officers can access billions of images thanks to companies like Amazon and Clearview AI (which you can be horrified about further in the New York Times article on the company) selling these packages. If you set aside the invasion of privacy (which it absolutely is), the major issues with racial prejudice, and the potential for police to misuse this science, it is a useful tool: solving cold cases and scanning not only national policing image databases (which have existed in digital form for more than twenty years) but an international repository of images often taken from personal profiles. It makes finding matches for crimes easier and faster. Yet the way these services sift through social media profiles violates many platforms' terms of service; while companies have come out and condemned such actions, no legal case has yet been taken to stop it.

Perhaps a complication to the argument against this AI is the use of image-based machine learning to stop sexual abusers, namely in the case of paedophiles. Dame Professor Sue Black, a forensic anthropologist best known for her work with the UN in Kosovo during the 1990s, now leads the H-Unique project at Lancaster University. The €2.4 million project arose from Black being asked to assist in a case where a young girl came forward claiming to have been raped and recorded by her father, a case which, unfortunately, did not end in a conviction. Despite that, Professor Black's expert knowledge of human anatomy led her to conclude that hands are a very reliable route to convicting perpetrators. Hands are among the most unique facets of a human being, from vein patterns to the individualistic nature of scars and nail blemishes, and Professor Black aims to create a searchable database of them: in other words, the hands in an abusive image or video could be cross-referenced instantly with police records of hands around the world, a way of effectively targeting this "global crime" and catching paedophiles. Her work using this research has resulted in 28 life sentences and an 82% rate of pleas changed to guilty. That is a powerful result in the United Kingdom, where one year saw 47,000 instances of child sexual abuse: 130 cases per day.

This work, which at present is done like a spot-the-difference exercise, is laborious and time-consuming, with single collections sometimes containing millions of graphic images. This challenges police and investigative teams working against the clock to secure a confession or a conviction. Further, the work can be deeply distressing for law-enforcement officials, who have to wade through mentally disturbing material in the hope of finding a match. Professor Black believes the development of this technology could end up becoming as valuable as fingerprinting in gathering evidence.

 

What could AI do for victims of image-based sexual abuse? The answer, in short, is make the unruly wild-west world of the internet safer. Data is key, though there are certainly questions about how much of it we should have to provide, what rights we retain over it, and the architecture of consent that needs constant re-examination. There is also the question of whether we delay this technology until it is 100% bias-proof and proper training can be provided, or unleash it now, knowing the good it has already done. The power of these image-profiling developments is scary in the wrong hands, and technology, being the exciting medium it is, develops with gusto, but often not in tandem with the laws and regulations needed to keep people safe.
