In 2004 the movie I, Robot portrayed a civilization assisted by humanoid robots in which some people (no spoilers, but you should have seen it already) are sceptical of the reliability of these forms of artificial intelligence. As of 2022, neither the threats of AI nor the use we make of robots follows a Hollywood script, yet some real potential issues must be addressed.
The impact of predictions and decisions assisted by AI/ML varies depending on the field. Applications range from facial recognition software to medical diagnosis, from marketing analytics to assisting judges in court. It is precisely this last type of application that this brief enquiry focuses on: the COMPAS case as a lens for discussing the broader implications of AI-based decision systems.
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a software package for recidivism risk assessment. Its aim is to estimate a defendant’s probability of reoffending, thus supporting judges in their decisions concerning whether a defendant should be detained before trial or might be released on bail, whether she should go to prison or might be eligible for probation, or whether she should stay in jail or might be a candidate for parole. The model uses the answers to a 137-item questionnaire to compute a risk score.
It has been in use in several US courts since at least 2015. Even though it is by now an “old” tool, the use of AI instruments to aid human decision-making keeps growing, which makes COMPAS an excellent case for discussing some ethical matters.
More specifically, a simple confusion matrix was enough to point out an alarming fact: white defendants were statistically more likely to be falsely classified by the algorithm as “not prone to recidivism”, while black defendants were more likely to be falsely classified as likely to reoffend if released.
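To make the point concrete, here is a minimal sketch, in Python and on made-up toy data (not the real COMPAS records), of the kind of group-wise check that reveals such a gap: the confusion matrix is computed separately for each group and the false positive and false negative rates are compared.

```python
# Minimal sketch on synthetic toy data (not the real COMPAS records):
# compute a confusion matrix per group and compare error rates.
from collections import Counter

# Each record: (group, actually_reoffended, predicted_high_risk)
records = [
    ("white", 0, 0), ("white", 0, 0), ("white", 1, 0), ("white", 1, 1),
    ("black", 0, 1), ("black", 0, 0), ("black", 1, 1), ("black", 1, 1),
    # a real audit would of course use thousands of rows
]

def error_rates(rows):
    counts = Counter((actual, pred) for _, actual, pred in rows)
    tn, fp = counts[(0, 0)], counts[(0, 1)]   # non-reoffenders: cleared / flagged
    fn, tp = counts[(1, 0)], counts[(1, 1)]   # reoffenders: cleared / flagged
    fpr = fp / (fp + tn) if fp + tn else float("nan")  # falsely labelled high-risk
    fnr = fn / (fn + tp) if fn + tp else float("nan")  # falsely labelled low-risk
    return fpr, fnr

for group in ("white", "black"):
    fpr, fnr = error_rates([r for r in records if r[0] == group])
    print(f"{group}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

On the real data, it was essentially this kind of group-by-group comparison that exposed the asymmetry described above.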
This is an issue of statistical fairness: a form of algorithmic discrimination that makes the expected probability of recidivism, and the errors made in estimating it, differ for people of different ethnicities. From here two new questions arise:
Why did it happen?
How can we make it fair?
“Race” is not a variable of the model, so why do we see discrimination? In the training set there may be a significant correlation between race and the target variable, not because criminality rates are actually higher in the black population, but because of the systemic racism that, ceteris paribus, punishes black people more harshly than white people. Other input features correlated with race then act as proxies, so even a model that never sees “race” can reproduce this pattern. Hence the dataset used to train the model is biased, and the model’s predictions cannot help but reflect that bias.
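The mechanism can be illustrated with a small simulation. The sketch below is a hypothetical construction on synthetic data, not anything drawn from COMPAS: it builds a world where underlying behaviour is identical across two groups, but heavier policing of group 1 inflates both its recorded prior arrests and its recorded re-arrests; a model trained on those records, without ever seeing the group label, still assigns group 1 a higher average risk.

```python
# Hypothetical simulation (synthetic data, illustrative numbers only):
# a proxy feature plus biased labels reproduce group differences even
# though the sensitive attribute is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)        # two demographic groups (never shown to the model)
# Recorded prior arrests: inflated for group 1 by heavier policing -> a proxy for group.
prior_arrests = rng.poisson(1 + 2 * group)
# Underlying behaviour is identical across groups by construction...
offend = rng.binomial(1, 0.3, n)
# ...but the *recorded* re-arrest label is more likely for group 1,
# because the same behaviour is policed and punished more heavily.
rearrested = offend * rng.binomial(1, 0.5 + 0.4 * group)

X = prior_arrests.reshape(-1, 1)     # note: `group` is NOT an input feature
model = LogisticRegression().fit(X, rearrested)
risk = model.predict_proba(X)[:, 1]

print("mean predicted risk, group 0:", round(risk[group == 0].mean(), 3))
print("mean predicted risk, group 1:", round(risk[group == 1].mean(), 3))
```

The simulated model discriminates without ever touching a “race” column, simply because both the training labels and the proxy feature carry the imprint of past enforcement practices.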
Now that we have an answer to the first question, how can we fix the problem? Should we take a compensatory approach? Should we ask the model to account for the different rates between groups? It all depends on how we intend to measure fairness. A thorough discussion suggests that the predicted rates should be made equal, or at least more equal than they are now. This must be achieved in parallel with some affirmative action to counterbalance the effects of past wrongs, in the conscientious sense of not letting past disadvantages determine the future fates of people belonging to discriminated groups.
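Different fairness measures can pull in different directions on the very same predictions. As a quick, hypothetical illustration (the confusion-matrix counts below are invented, not taken from any study), one can compare the share of people flagged as high-risk with the two error rates for each group:

```python
# Hypothetical confusion-matrix counts for two groups (invented numbers),
# used to compare fairness criteria that need not agree with each other.
def summarise(tp, fp, fn, tn):
    flagged = (tp + fp) / (tp + fp + fn + tn)   # share labelled high-risk (demographic parity)
    fpr = fp / (fp + tn)                        # false positive rate  } equalised
    fnr = fn / (fn + tp)                        # false negative rate  } odds
    return flagged, fpr, fnr

groups = {"A": summarise(tp=30, fp=10, fn=20, tn=40),
          "B": summarise(tp=45, fp=25, fn=10, tn=20)}

for name, (flagged, fpr, fnr) in groups.items():
    print(f"group {name}: flagged {flagged:.2f}, FPR {fpr:.2f}, FNR {fnr:.2f}")
```

Equalising one of these quantities across groups will, in general, move the others, so deciding which rates must be made equal is itself a normative choice rather than a purely technical one.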
Thanks to the article Two Kinds of Discrimination in AI-Based Penal Decision-Making by Dietmar Hübner, we learned that before worrying about AI taking control of civilization, we have to be extremely careful not to let it perpetuate the injustices our society enacted in the past.
Bibliography:
Hübner, D. (2021) Two Kinds of Discrimination in AI-Based Penal Decision-Making. SIGKDD Explorations, 23(1), 4–13.