Criminals sentenced by artificial intelligence would face fairer and more consistent punishments than those handed out by human judges, two Swinburne University researchers argue.

In a paper for the Criminal Law Journal, law school dean Professor Dan Hunter and research professor Mirko Bagaric argue that artificial intelligence would remove emotional bias and human error from the sentencing process.
They say the technology can better "identify, sort and calibrate all the variables associated with sentencing" to arrive at a fairer and more accurate conclusion than a human.
Sentencing decisions are "often influenced by more than 200 considerations, many of which are variables [like education, criminal history, emotional motivations, and drug/alcohol use] which have been established prior to court hearings", the pair said.
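The paper does not set out an implementation, but the notion of calibrating sentencing variables can be sketched in a few lines of code. Everything below is a hypothetical illustration: the factors, the weights and the scoring rule are invented assumptions, not the researchers' actual method.

```python
# A minimal, hypothetical sketch of a rule that maps case variables to a
# sentence. All factors and weights here are illustrative inventions.
from dataclasses import dataclass

@dataclass
class CaseFactors:
    prior_convictions: int   # criminal history
    education_years: int     # education level
    substance_abuse: bool    # drug/alcohol use
    offence_severity: float  # 0.0 (minor) to 1.0 (most serious)

def sentence_months(case: CaseFactors) -> float:
    """Combine case factors using fixed, auditable weights."""
    months = 60.0 * case.offence_severity            # severity sets the baseline
    months += 3.0 * min(case.prior_convictions, 10)  # capped uplift for history
    months -= 0.5 * case.education_years             # mitigating factor
    if case.substance_abuse:
        months += 2.0                                # aggravating factor
    return max(months, 0.0)

# Identical inputs always produce identical outputs.
print(sentence_months(CaseFactors(2, 12, False, 0.4)))  # 24.0
```

Because the weights are fixed and inspectable, two identical cases can never receive different sentences, which is the consistency the researchers point to.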
Human judges subconsciously hand down harsher penalties to offenders from particular backgrounds or racial groups, Bagaric argued.
AI, however, could process many variables without emotional bias or prejudice to deliver a fair sentence, he said.
Artificial intelligence would also deliver more consistency in sentencing than a human, the pair argued, given the technology does not rely on human intuition.
“Currently, sentencing is a discretionary process, which means it is not a checklisted, methodical approach,” Bagaric said.
“Instead, it is an intuitive process. This leads to patent inconsistency because different judges have different intuitions, thereby resulting in judges prone to harsh or soft sentencing.”
However, AI systems have in some cases been found to contain just as much bias as the humans who trained them.
Some recent examples include an automated system that unfairly ranked US teachers, and racial bias in commercial AI software.
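It is easy to see how that happens: a model fitted to historical decisions reproduces whatever disparities those decisions contain. The sketch below uses invented figures and a deliberately trivial "model" (a per-group average) purely to show the mechanism.

```python
# Hypothetical illustration of training-data bias propagating into a model.
# The figures are invented; only the mechanism matters.
historical_cases = [
    # (group, offence_severity, sentence_months): group A was historically
    # sentenced more harshly than group B for identical conduct.
    ("A", 0.5, 36), ("A", 0.5, 40),
    ("B", 0.5, 24), ("B", 0.5, 26),
]

def train_group_means(cases):
    """Fit a trivial 'model': the mean historical sentence per group."""
    totals, counts = {}, {}
    for group, _severity, months in cases:
        totals[group] = totals.get(group, 0) + months
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

model = train_group_means(historical_cases)
# Identical offences, different predicted sentences: the bias survives training.
print(model["A"], model["B"])  # 38.0 25.0
```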
Additionally, the software vendors behind these AI systems generally refuse to disclose the underlying code, making it extremely difficult for an individual to appeal an AI-generated decision, as US man Eric Loomis found.
Even Google's head of AI has warned about the danger of intelligent systems taking on human prejudices.
The issue of bias in AI is expected to become more serious as the technology is further adopted within areas like law and medicine.
Groups like the Algorithmic Justice League are studying bias in AI and the impact of using the technology in criminal justice.
The Swinburne researchers acknowledged that sentencing by AI is far from being publicly accepted.
“People being judged by machines feels very Orwellian, or Terminator-like,” Hunter conceded.
They instead suggested using artificial intelligence as an assistant to human judges in the interim.
“We need to demonstrate, through empirical research and extensive pilot programs, that this is a feasible, fair, and efficient way of sentencing that can help everyone,” Hunter said.