Business leaders implementing responsible AI practices first need to understand that responsible AI begins with the responsible use of their customers' data.

Lisa Green, data and AI solutions executive at Telstra, spoke on a panel at SXSW Sydney, explaining the steps business leaders need to take to ensure they are using AI responsibly.
“It starts with the data; we are custodians of our customers’ and our employees’ data. We don't own it, it is not ours, and that's a big responsibility,” she explained.
Green said the first factors organisations need to think about when using AI are what data they capture, and how they capture, use, store and access it.
“It's effectively our ticket to play; if you can’t do that you don't deserve to be in the game. That's my view,” she said.
“Having all the right controls in place and ensuring that we're keeping the data safe and secure and private is the first step.”
Green said it is also important for business leaders to embed and build policy frameworks into an organisation.
“We've seen other companies globally do that well. The government has developed principles and we have adopted those principles into our frameworks and policies and then processes around those to ensure that we're actually adhering to those standards,” she said.
One of the hardest but most important tasks for organisations embedding responsible AI practices is ensuring they have a diverse workforce, Green explained.
“Ensuring they've got the right training and the right mindset in approaching this because it's a values-led type approach to technology, that's important,” she said.
“It's one of the things that companies underestimate: the importance of the people. It's huge, and the mindset shift is important.”
Green said business leaders also need to understand how to build responsible AI practices into their everyday work.
“It can't be something that every month you go back and you have a look and see if you're doing it right. It has to be what you do every day,” she said.
One of Green’s current aims is to put in place technology that makes it easy for people to do the right thing and hard for them to do the wrong thing.
“Having the alarms and the alerts to say ‘something's not quite right here, you've got to go and have a look’ is important,” she explained.
“But also ensuring that you're assessing the risks around developing and deploying this type of capability, and whether or not those risks are things that you're willing to weigh up or if they match your ethical standards.”