May 29, 2019, by: Hajo Greif

AI Ethics in Predictive Policing – DML@MCTS Public Lecture by Peter Asaro

In predictive policing, Artificial Intelligence (AI) systems use historic crime data to identify individuals or geographic areas that bear increased risks for the commission of crimes. The most typical response derived from these statistical models is more aggressive supervision and policing of the individuals and areas identified. There is little evidence, though, that these algorithmically powerful but otherwise unsophisticated measures succeed in bringing down crime rates. They might even lose out against technologically much simpler but socially and ethically more nuanced methods of getting individuals at risk out of potential trouble.

In his talk, Peter Asaro described the approaches and results of two field trials in Chicago, Illinois, that targeted at-risk youths but widely diverged in the predictive methods and preventive measures chosen. One of them, called the “Strategic Subjects List” (SSL), used risk scores based on data on the individuals’ past involvement in violent crime. Notably, even having been the victim of such crimes would increase a person’s risk score if other indicators were in place, as victimisation was taken as an indicator of potential retribution attacks. In terms of crime prevention, the individuals identified as “strategic subjects” were approached by police, who told them that they were under increased surveillance – which largely exhausted the preventive measures on offer. No decrease in rates of violent crime was reported, but as a side effect, strategic subjects were more frequently arrested for any crimes in which they might have been involved. Most notably, this approach counteracted the Chicago Police Department’s own policies of trust-building in heavily crime-affected communities. In the other field trial, called “One Summer”, at-risk youths were identified by approaching schools in crime-ridden city districts and asking them to identify students with a record of indicative behaviour.
In a controlled study, a selection of these students were directly and personally invited to participate in a scheme of subsidised summer jobs, mostly of the community service kind. The plan was to get at-risk youths off the streets and provide them and their families with a modest income. As accompanying measures, training and conflict management sessions were offered. Under this approach, a 51 per cent reduction in rates of violent crime was reported for the subject population, as compared to the control group. The results of this study were encouraging enough to get the programme adopted by the Chicago city council as a social policy and crime prevention measure.

Where the SSL approach could be characterised as “statistics on steroids” – using any and all data available to model threats in terms of so-called “precrime” scenarios while being largely unsuccessful in actually preventing crime – a much more ‘low-tech’ approach based on an ethics of care proved far more successful. In an intriguing way, Peter Asaro concluded, an AI system failed to provide an intelligent solution to a social problem that could be addressed by much more modest technological means. The intelligence, or lack thereof, lies in the ethical values that go into the solution’s design, not in its computational power and sophistication.

Tags:  AI, Philosophy, Law, Machine Learning
Categories:  Updates, Blog, Science, Events, Research, Theory