Telstra is reviewing arrangements with third-party suppliers after developing a new set of ethical standards to guide the procurement and in-house development of artificial intelligence systems.
The policy was developed as part of the Australian AI ethics principles pilot, which saw the telco test the government’s current thinking around AI in real-world scenarios alongside five other companies.
The eight principles were developed to inform the design, development and use of AI systems, including around human, social and environmental wellbeing; accountability; and fairness.
In a case study published by the federal government this week, Telstra said the pilot helped it to “articulate some minimum standards it expects of suppliers” providing third-party AI systems.
The standards involve asking suppliers questions around how a system was built, including what data it was trained or retrained on and whether the system has been regularly assessed for bias.
Telstra has since formalised the standards into a ‘responsible AI policy’ that outlines “standards and expectations for due diligence when acquiring, using and selling AI”.
The policy, complete with “detailed guidance to implement” the standards, applies to both “in-house developed AI and AI purchased from third parties” across the Telstra Group.
“When Telstra purchases third-party systems with embedded AI, it remains responsible for their performance,” the telco said.
“Telstra takes steps to ensure these purchased AI technologies are working in line with its ethical principles.
“Navigating how to share accountability with suppliers is not a simple exercise and Telstra continues to refine its approach to this.”
The pilot also helped the telco “identify a person who is ultimately accountable for each decision to purchase, deploy or on-sell a third-party AI system”.
In light of the new policy, Telstra is now “reviewing supplier governance processes to ensure that any third-party suppliers of AI solutions are meeting internal requirements”.
It is also “setting up role-based responsible AI training for employees and contractors involved in the deployment or procurement of AI systems”.
Noting that discrimination can arise if AI goes unchecked, Telstra has similarly broadened the remit of its existing Risk Council for AI and Data (RCAID) to cover AI use cases.
“A cross-functional body must approve AI systems (including third-party systems) that inform decisions with significant impacts on people,” the telco said.
Telstra has previously used AI to predict equipment failures on its network, within its virtual assistant Codi and to vet job seekers vying for temporary contact centre roles.