The most controversial issue is the introduction of artificial intelligence into the judicial system


The controversy continues: the introduction of artificial intelligence into the current judicial system is a double-edged sword.


[CNMO] At the Data for Black Lives conference held last weekend, technical experts, legal experts, and community activists — some of the most seasoned people in the field — discussed the criminal justice system in the United States and put forward their views.

The United States has more prisoners than any other country in the world. By the end of 2016, nearly 2.2 million adults were held in prisons and jails, and another 4.5 million were in other correctional facilities. In other words, one in 38 adult Americans was under some form of correctional supervision.

Introducing artificial intelligence into the judicial system

Under great pressure to reduce the prison population without increasing the risk of crime, courts across the United States have turned to automated tools. Police departments use predictive algorithms to formulate strategies, and law enforcement agencies use face recognition systems to identify suspects. These practices have faced repeated scrutiny over whether they actually improve safety or merely perpetuate existing inequities. For example, researchers and civil rights advocates have repeatedly shown that face recognition systems can fail, especially for people with dark skin — one system even mistook members of Congress for criminals.

But by far the most controversial tool is the crime risk assessment algorithm used after police arrest a suspect.

A risk assessment tool aims to do one thing: take the details of a defendant's profile and produce a recidivism score — a single number estimating the likelihood that he or she will reoffend. A judge then factors that score into a variety of decisions, including what kind of treatment the defendant should receive, whether he or she should be held in jail before trial, and how severe the sentence should be.
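As an illustration of what "take a profile, produce a score" can mean in practice, here is a minimal sketch of a logistic scoring model. Everything in it — the feature names, the weights, the decile-style 1–10 output — is an invented assumption for illustration; real deployed tools keep their models and weights proprietary.

```python
import math

# Hypothetical feature weights — invented for illustration, not from any real tool.
WEIGHTS = {"age": -0.04, "prior_arrests": 0.35, "employed": -0.6}
BIAS = -1.0

def recidivism_score(profile):
    """Map a defendant profile to a 1-10 risk score via a logistic model."""
    z = BIAS + sum(WEIGHTS[k] * profile[k] for k in WEIGHTS)
    p = 1.0 / (1.0 + math.exp(-z))         # modeled probability of reoffending
    return max(1, min(10, round(p * 10)))  # clamp to a decile-style 1-10 score

profile = {"age": 23, "prior_arrests": 4, "employed": 0}
print(recidivism_score(profile))  # prints 4
```

The point of the sketch is that the score is just a weighted sum of correlated attributes pushed through a curve — which is exactly why the correlation-versus-causation problem discussed below matters.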

The logic of using such algorithmic tools is that if you can accurately predict criminal behavior, you can allocate resources accordingly, whether toward rehabilitation or toward imprisonment. In theory, it also reduces bias, because judges make decisions based on data rather than on intuition.

But modern risk assessment tools are usually trained on historical crime data. Machine learning algorithms use statistics to find patterns in data, so if you feed them historical crime data, they will pick out the patterns associated with crime. These patterns are statistical correlations, not causes. For example, if an algorithm finds that low income is correlated with high recidivism, it cannot tell you whether low income actually causes crime.

The populations historically targeted by law enforcement — especially low-income and minority groups — are thus the ones the algorithm flags as high recidivism risks. The algorithm can amplify this bias and generate yet more bias-contaminated data, creating a vicious circle.
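The vicious circle described above can be sketched as a toy simulation: two groups offend at the same true rate, but one starts out more heavily policed, so more of its offenses enter the record; "risk scores" trained on that record then steer policing back toward it. The group names, rates, and update rule below are all toy assumptions, not a model of any real jurisdiction.

```python
# Toy feedback loop: two groups with the SAME true offense rate,
# but group B starts out more heavily policed.
true_rate = {"A": 0.10, "B": 0.10}
policing = {"A": 0.2, "B": 0.5}   # fraction of offenses that get recorded

recorded = {"A": 0.0, "B": 0.0}
for _ in range(5):
    for g in true_rate:
        # Recorded crime reflects policing intensity, not just true offending.
        recorded[g] += true_rate[g] * policing[g]
    total = recorded["A"] + recorded["B"]
    for g in policing:
        # "Risk scores" trained on the records redirect policing toward
        # whichever group has more recorded crime.
        policing[g] = recorded[g] / total

print(recorded)  # group B accumulates more recorded crime despite an equal true rate
```

Even in this tiny sketch, the initial disparity never washes out: the record keeps "confirming" that group B is higher risk, which is the statistical shape of the vicious circle.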

The debate about these tools continues. Last July, more than 100 civil rights and community organizations, including the ACLU and the NAACP, signed a statement urging that risk assessments not be used. Yet at the same time, more and more jurisdictions and states, straining under the burden of overcrowded prisons, have turned to these very tools.
