The US-based Pew Research Center has found the American public is growing increasingly distrustful of the use of computer algorithms in a variety of sectors, including finance, media and the justice system.
It's a sign that for all the trumpeting of AI, automation and machine learning, many people are increasingly suspicious of where technology is headed.
A report released over the weekend found that a broad section of those surveyed feel that computer programs will always reflect some level of human bias, that they might violate privacy, fail to capture the nuance of human complexity or simply be unfair.
The findings back up musings from leading tech and ethics figures at the Human Rights and Technology Conference, held by the Human Rights Commission in Sydney earlier this year.
The Pew report centres on four common scenarios in which algorithms and machine learning are being applied to streamline workflows and remove the risk of human error:
- Criminal risk assessment for people up for parole.
- Automated resume screening of job applicants.
- Automated video analysis of job interviews.
- Personal finance scores using many types of consumer data.
More than half of people surveyed consider the use of algorithms in each of these decision-making scenarios to be unfair to users or consumers.
Specifically, the top concern for people who found the personal finance score concept unfair was the potential violation of privacy - contrasting with the finance industry’s attitude that any and all data that can be dug up on users is fair game for use in its algorithms.
The issue of fairness also cropped up across multiple scenarios, including the finance score, job interview video analysis and automated CV screening.
Of those who found the CV screening concerning, the top issue (mentioned by 36 percent of respondents) was that an algorithm removes the human element from important decisions.
The overriding consensus is that “humans are complex, and these systems are incapable of capturing nuance”.
However, Pew notes that the results were at times highly context-dependent.
Of those surveyed, African-American respondents were more likely to rate the personal finance score and criminal risk assessments as biased - which isn’t surprising given the growing evidence that the Compas risk assessment algorithm used in several US states is biased against black people.
All up, 61 percent of black people surveyed thought the criminal risk score concept would not be fair for people up for parole, compared to 49 percent of white people surveyed.
However, only 25 percent of white people thought an algorithm-based personal finance score would be fair to consumers, compared to 45 percent of black people.
The report also delves into how context affects the respondents’ attitudes towards the collection of data used to inform the algorithms.
“A 75 percent majority of social media users say they would be comfortable sharing their data with those sites if it were used to recommend events they might like to attend,” Pew said.
“But that share falls to just 37% if their data are being used to deliver messages from political campaigns.”
The report concludes that, "by and large, the public views these examples of algorithmic decision-making as unfair to the people the computer-based systems are evaluating".