AI exposing pre-existing risks, warns Flame Tree Cyber's Kat McCrabb

Data governance and third-party risk among key considerations.

With AI use by organisations in Australia compounding privacy, legal and other risks, iTnews’ sister publication techpartner.news invited opinions from technology partners about what organisations should be doing now to manage risks stemming from AI use.


Kat McCrabb, managing director of Queensland-based firm Flame Tree Cyber, provided the following views, touching on poor decision making due to overreliance on AI, questions about where data used by AI will be stored and processed, and third-party risk, among other topics.

Q. What is one aspect of AI risk your clients are finding very difficult, or are particularly concerned about, or you are seeing often unaddressed?

Kat McCrabb, Flame Tree Cyber: AI often leads to the realisation of pre-existing risks. These can include inappropriate access to data and systems arising from poor access management, or reduced quality of decisions arising from low data quality.

We also see leadership teams concerned about their teams becoming over-reliant on AI and not applying critical thinking to the advice it provides. This can result in poor decision making or behaviour that contradicts organisational policies.

Q. Are you seeing significant organisational tensions or failings around AI now (e.g. between innovation teams or other business employees and risk/compliance)? Can you touch on best practices to address this friction?

Kat McCrabb, Flame Tree Cyber: Boards and end users are eager to see productivity gains from AI. Technology teams want to implement strong foundations and guardrails to implement responsible AI. We see a layer of leadership in the middle who want responsible AI and productivity gains.

We advise starting with user education, both because it takes time to become effective and because it can help reduce risks around shadow AI. Concurrently, we build strategy and governance, assess threats and risks, and design technical controls.

Q. Is there a specific legal risk organisations should consider when contracting tech products or services underpinned by AI – what should they be asking tech vendors or providers?

Kat McCrabb, Flame Tree Cyber: There is a key question organisations should ask about the storage and processing of data used and generated by AI systems.

Where will the data be stored, destroyed and processed, and under which jurisdiction’s laws and security controls will it fall? If your provider can’t answer this then it’s hard to have confidence in their use of responsible AI.

Q. Is there a significant risk you think organisations should be looking at in terms of their external ICT services providers’ use of AI?

Kat McCrabb, Flame Tree Cyber: Third-party supplier risks are always topical. Start with asset management: know what AI tools or systems your suppliers are using, what data they are feeding into them, and what processes they’re supporting.

Once you understand the risk, check how data will be stored, how it will be protected, and whether the supplier is working within any governance frameworks or standards.

This should be captured in contracts.

Q. Is there an approach or framework you recommend companies use to reduce the risks that come with AI use?

Kat McCrabb, Flame Tree Cyber: I advocate for ISO 42001, as it allows organisations to adopt a risk-based approach to responsible AI with appropriate governance. There’s also the Voluntary AI Safety Standard, which introduces 10 guardrails.

I’m also helping organisations use MITRE ATLAS for threat analysis, along with the MIT AI Risk Repository. It’s quite fun to map the threat model and kill chain for AI and demonstrate a layered defence model against it.

Q. What in your view is the best practice approach for small businesses that don’t have legal or risk teams to reduce risks from their use of AI?

Kat McCrabb, Flame Tree Cyber: Engage external expertise to ensure you’re operating legally and getting the best value for money from risk reduction activities. If you’re unable to do that then consider the 10 guardrails from the Voluntary AI Safety Standard.

Q. In your view, should Australian organisations be preparing for more regulation of their use of AI – and how?

Kat McCrabb, Flame Tree Cyber: The regulation to ensure protection of data and disclosure of automated decision making is in place, and I don’t think we’ll see an expansion on that. The majority of businesses in Australia are small business and additional regulation could disadvantage them, particularly when competing in a global economy.

I’d like to see increased enforcement when breaches are clear. We have an opportunity to ‘get it right’ from the start and I’d hate to see that squandered.

Q. In your view, what should be the most important consideration for Australian organisations when looking for a partner to help mitigate AI risk in 2025? 

Kat McCrabb, Flame Tree Cyber: Ask potential partners about their team’s credentials, their commitment to ongoing staff education, involvement in research, client testimonials, and proven capability. I’d also be interested in what they offer to support your own workforce, e.g. will they help train and support your workforce or take a 'dump and run' approach?

Q. Any other comments about this?

AI adoption data from the Department of Industry, Science and Resources shows that SME AI adoption is shifting from ‘broad use’ to limited and cautious uptake. But individual use of AI is rapidly increasing. When we do discovery in organisations, we find that LLMs are used more frequently than Google and Bing combined.

Our workforce expects to use AI with the speed and ease with which we use it at home. While SMEs may not have current use cases for AI, they should be spending time preparing: implement guardrails, design an AI strategy and policy, and prepare technology so the opportunities can be leveraged.

Flame Tree Cyber partners with mission-driven organisations to manage digital risk, meet compliance requirements and build long-term resilience. It specialises in cybersecurity, AI governance and risk management, and delivers tailored solutions supported by advisory services.

Disclaimer: The views expressed in this Q&A are those of the individual contributors and do not necessarily reflect the views of iTnews or techpartner.news. The content is provided for general informational purposes only and does not constitute legal, financial or professional advice.
