The Department of Health and Aged Care has warned that AI algorithms can inherit biases if they are trained on or developed with limited or biased data sets.

In its submission to the Senate Select Committee on Adopting Artificial Intelligence (AI), the department said bias can occur throughout an AI system.
The inquiry opened in March this year and seeks to understand and report on the opportunities and impacts arising from the uptake of AI technologies in Australia.
“AI algorithms can inherit biases if they are trained on or developed with limited or biased data sets. Bias can also occur through the intrinsic design or configuration of the AI system itself,” the department’s submission stated.
“Errors or misleading results can arise where the data AI is trained on draws from a population that is different to the group it is then applied to.
“In healthcare settings, biased algorithms can lead to the exacerbation of existing social inequities and disparities in patient care, especially in underrepresented populations.”
One example cited by the health department is a machine learning algorithm that was “found to be less accurate at the detection of melanoma in darker skinned individuals, as it had mainly been trained on fair skinned patients.”
“AI may also predict a greater likelihood of disease because of gender or race when those are not causal factors.
“Therefore, as data diversity is a main enabler for better accuracy and more equitable performance, it is critical that where data limitations exist in an AI solution, these gaps are understood and made available to healthcare providers who are using AI for clinical purposes,” the submission said.
In the submission, the department noted that “humans are also susceptible to cognitive biases; an evidence base is building which shows how AI can improve clinical decision making and accuracy in patient care.”
The submission also said the use of AI across the healthcare sector “presents heightened ethical, legal, safety, security and regulatory risks” given its “direct effect on patient safety and the use of sensitive health data for algorithmic development.”
“AI technologies that use sensitive health data to train and refine models require explicit consent from a consumer in line with the Privacy Act 1988,” the submission said, adding that a breach of this requirement could lead to distrust.
Transparency and explainability also have a role to play, with the submission noting that “data standardisation, stewardship and interoperability are important steps in optimising data quality for trusted AI outputs.”
The submission also cautioned that the “accuracy of an AI output is dependent on the quality of data input. In some cases, outputs can be entirely wrong, commonly referred to as hallucinations.”
“This may pose serious patient safety risks when AI software is used to give clinical decision-making support, for example differential diagnosis or disease screening tools.”
Over-reliance on AI is another risk, as “there is a well-documented tendency for humans to over-rely on, and delegate responsibility to, decision support systems, rather than continuing to be vigilant, known as automation bias.”