Ask tech providers if your data is training their AI solutions, says Arinco's James Westall


IP ownership remains "the largest concern of many organisations" using AI solutions, in his view.

Australian organisations should understand if and how technology solution providers are using their customer data to build AI solutions, according to James Westall of ANZ consulting firm Arinco.

James Westall, Arinco

With AI use by organisations in Australia compounding privacy, legal and other risks, iTnews’ sister publication techpartner.news invited comment from technology partners about what organisations should be doing to manage risks stemming from AI use.

Westall, who is Arinco’s Victorian sales director and a Microsoft MVP for Azure AI Platform, provided the following views.

Q. What is one aspect of AI risk your clients are finding very difficult, or are particularly concerned about, or you are seeing often unaddressed?

James Westall, Arinco: Overwhelmingly, ANZ businesses are most concerned about data security and sovereignty when adopting AI services. With many AI capabilities now embedded directly into collaboration tools, the risk of accidental corporate data leakage is a common concern and often the decisive factor in whether organisations build solutions internally or purchase enterprise-grade offerings.

Those with mature information architecture, well-defined taxonomies and robust data loss prevention controls are better equipped to address this risk. Others are confronting years of underinvestment in these areas. Critically, remediating poor data hygiene is not a quick fix; it requires sustained business adoption efforts and often extends AI implementation timelines.

Q. Are you seeing significant organisational tensions or failings around AI now (e.g. between innovation teams or other business employees and risk/compliance)? Can you touch on best practices to address this friction?

James Westall, Arinco: The most common tension we see is business impatience with technology teams to deliver scalable AI solutions. The availability of highly capable, off-the-shelf AI tools can create a perception that implementation is simple. In reality, technology teams must navigate unique risks and prerequisites such as data security, identity integration and service monitoring, while also addressing emerging AI-specific risks. This gap in understanding often results in the business purchasing an AI solution and handing it to IT with the expectation to simply “make it work”.

Best practice is to establish a shared delivery function between innovation, business and risk early. This should include jointly defining security, compliance and operational readiness criteria before procurement, along with clear ownership for AI governance, ongoing monitoring and capability uplift.

Q. Is there a specific legal risk organisations should consider when contracting tech products or services underpinned by AI – what should they be asking tech vendors or providers?

James Westall, Arinco: Intellectual property ownership remains the largest concern of many organisations and is a key risk to consider when using AI solutions. One interesting trend relevant to this risk is technology providers considering the use of customer data to build differentiating AI solutions into their product offering. Organisations should be working with providers to understand:

  • Do you use our data with any AI solution?
  • Do you use our data for training of any AI solutions?
  • If you are using our data with AI solutions, what are they?
  • For each of these solutions, what controls are in place?

Where AI is in use, it is important to determine what contractual terms can be enforced to limit risk. Be prepared to walk away in high-risk scenarios, and in cases where negotiation is not possible, organisations should prioritise internal education so that staff understand what information can and cannot be shared with the provider.

Q. Is there a key question organisations should ask about storage and processing of data used and generated by AI systems?

James Westall, Arinco: Understanding the geographic location of data storage and processing is critical, as it determines which privacy, security and regulatory frameworks apply. Organisations should confirm whether data is kept within their preferred jurisdiction, whether it is replicated or processed in other regions, and what security controls are in place during transit and at rest.

Q. Is there a significant risk you think organisations should be looking at in terms of their external ICT services providers’ use of AI?

James Westall, Arinco: The risk is not unique to ICT providers, but applies to all external providers. Understanding where and how external partners use AI services is critical, as the same overarching risks that apply internally can be present, and are often less controlled, in third-party environments. In the tech sector, adoption tends to move faster because technically adept teams integrate AI into their workflows quickly.

As an example, developer tools such as GitHub Copilot, Cursor and AWS Kiro/Q can improve delivery speed and reduce cost, but they also introduce potential risks if not governed properly. Technology providers should be able to clearly explain where, how and when they use AI. If they cannot, this should be treated as a red flag.

Q. Is there an approach or framework you recommend companies use to reduce the risks that come with AI use?

James Westall, Arinco: Australia’s Voluntary AI Safety Standard is an incredibly valuable resource that should be reviewed by all businesses. The site contains practical examples which you can compare to your own situation, enabling you to understand how you might implement the suggested guardrails.

Q. What in your view is the best practice approach for small businesses that don’t have legal or risk teams to reduce risks from their use of AI?

James Westall, Arinco: AI should be treated like any other technology in your business, with risks assessed on a service-by-service basis. A simple but effective step for small businesses is to maintain a technology risk register and review the terms of use for each AI service. This helps identify potential issues early and ensures informed adoption. There are also free, easy-to-follow resources that provide guidance for employees.
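As a rough illustration of the risk-register idea above, the sketch below shows one way a small business might record AI services and flag the ones needing review. The field names, risk ratings and example entries are this article's editorial invention, not anything prescribed by Westall or Arinco; a spreadsheet would serve the same purpose.

```python
from dataclasses import dataclass

@dataclass
class AIServiceEntry:
    """One row in a simple technology risk register (illustrative fields only)."""
    service: str              # name of the AI tool in use
    vendor: str
    data_shared: str          # what corporate data the service can see
    trains_on_our_data: bool  # per the vendor's terms of use
    risk_rating: str          # e.g. "low" / "medium" / "high"
    mitigation: str = "none recorded"

def needs_review(register: list[AIServiceEntry]) -> list[AIServiceEntry]:
    """Flag entries rated high risk, or whose terms allow training on company data."""
    return [e for e in register if e.risk_rating == "high" or e.trains_on_our_data]

# Hypothetical entries for illustration — not real vendor assessments.
register = [
    AIServiceEntry("General-purpose chat assistant", "Vendor A",
                   data_shared="ad-hoc staff prompts",
                   trains_on_our_data=True, risk_rating="medium",
                   mitigation="staff guidance on what can be shared"),
    AIServiceEntry("Enterprise copilot", "Vendor B",
                   data_shared="tenant documents and mail",
                   trains_on_our_data=False, risk_rating="low"),
]

for entry in needs_review(register):
    print(f"Review: {entry.service} ({entry.vendor}) - mitigation: {entry.mitigation}")
```

Keeping the register as structured data rather than prose makes the review step repeatable: each new AI service gets the same questions (what data it sees, whether the terms permit training on it) before adoption.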

Q. In your view, should Australian organisations be preparing for more regulation of their use of AI – and how?

James Westall, Arinco: Yes. While often deliberate in its pace, the Australian government has shown a clear willingness to legislate on digital matters. At a minimum, we can expect amendments to existing laws such as the Privacy Act to close potential loopholes, and it is likely that a mandatory AI standard will follow the current voluntary guidelines in the coming months. Organisations can get ahead by assessing their current maturity and beginning to align with the voluntary standard now.

Q. In your view, what should be the most important consideration for Australian organisations when looking for a partner to help mitigate AI risk in 2025? 

James Westall, Arinco: Look for partners that combine applied AI experience with deep security and risk expertise. Many providers can identify and assess risks, but the most valuable partners are those who also know how to remediate, implement and deploy AI safely in real-world environments. This combination enables pragmatic, actionable conversations about AI risk.

Disclaimer: The views expressed in this Q&A are those of the individual contributors and do not necessarily reflect the views of iTnews or techpartner.news. The content is provided for general informational purposes only and does not constitute legal, financial or professional advice.


Copyright © iTnews.com.au. All rights reserved.