ACSC’s new guidance for engaging with AI

The Australian Cyber Security Centre (ACSC) has published guidelines to help medium to large businesses and organisations engage with artificial intelligence (AI). The ACSC led a collaboration with 11 international organisations to prepare the joint guidance.

Defining AI

Artificial intelligence refers to computer systems which are capable of performing tasks which would typically require human intelligence. The ACSC, splits AI into three sub-categories as follows:

  • Machine learning: the components used by computers to adapt to patterns in data without needing to have programmed responses or decisions.

  • Natural language: programs that process and mimic natural language through sources such as speech, video and image.

  • Generative AI: systems which generate new content such as audio, code, text and images through the use of data models.

AI is already highly integrated in typical internet and satellite programs, and while its usage within business operations is likely to grow due to its low cost and high efficiency, it is important to be mindful of various threats.

Potential Threats posed by AI

Some common threats include:

  • Input manipulation attacks: this is where a malicious actor inputs hidden commands within an AI program to manipulate and hijack it. This is typically done to bypass various restrictions which originally were meant to dictate the functionality of the AI system.

  • Generative AI inaccuracy: organisations which may seek to rely on AI generated content must be aware that it may not always be factually correct or accurate.

  • Privacy and intellectual property risks: information given to an AI may be used as part of the systems training data to generate new content, which may be an issue if the information is private in nature, e.g. customer details.

  • ‘Data poisoning’: when a generative AI’s training data is intentionally manipulated, causing the AI to learn the wrong information and generate adverse functions. This can be done through the insertion of new data, or the modification of pre-existing data.

In light of these risks it is important that businesses critically evaluate their need for a new AI system, the functions they intend it to carry out, and the results they expect from it.

Six key considerations for Australian businesses

Businesses planning to adopt AI may need to alter their Cyber Risk Management (CRM) strategies to ensure that they adapt to new risks when integrating emerging AI technologies.

Depending on the role of an AI system within business operations, it is important to take heed of the following considerations:

Source Data

What data the AI program will source from, and the potential privacy and security controls which may need to be in place to protect the transfer of private and confidential data. Special consideration must be given if the AI is a third-party system; it is important to know if your data will be used to retrain the AI, or whether your data will continued to be stored in the system in the event of a termination of the commercial agreement.

Explore a Trial First

Consider whether it is possible to trial the AI system. This will help ensure that any bugs or core issues with the system can be amended prior to commercial use.

Tracking AI Systems

Introducing logging and monitoring of the AI systems to identify any faults or changes in performance which could lead to broader security issues.

Backup Procedures

Implementing a procedure for backups which would safeguard original data in the event of a malicious attack or system failure. Offline backups are an important tool to protecting data from the risk of a cyber network attack.

Adequate Staff Training

Human error is a significant contributor to cyber infiltrations exposing organizations to significant risks. You should ensure that staff have suitable training to set-up, operate and maintain the AI system.

Implement Basic Cyber Security

Ensure that common cyber security measures are in place, such as multi-factor authentication to access your AI systems, and the ‘principle-of-least-privilege’ to limit the number of staff with access to the system.

AI and Insurance

AI presents opportunities for more precise underwriting, coverage and pricing of insurance cover. There are indeed concerns that have been voiced across the insurance market about the impact of AI, both positive and negative. Some of these concerns are set out in our article here to which we presented publicly last year.

Insurers are some of the owners of the greatest sources of population data and trends. AI’s potential in insurance lies in the ability to extrapolate insights from ever growing and more complex datasets.

A McKinsey study concluded that there were 4 AI-related trends shaping insurance:

  1. The explosion of data from connected devices

  2. Increased prevalence of physical robotics

  3. Open-source and data ecosystems

  4. Advances in cognitive technologies

The work of underwriters will need to expand to assess the governance in place when considering insuring organisations that have or intend to adopt AI. Businesses will need to articulate the intended use of AI, the data relied upon by it and the outcomes produced before insurers can begin to properly underwrite risks. Insurers are likely to look to the ASCS guidance and seek from insureds illustration of compliance with the guidance and regulatory recommendations.

AI presents both opportunities and threats for organisations. Before implementing and adopting AI Into your organisations, boards should take heed of the threats posed by the AI system to be used for the organisation and its stakeholders to ensure that systems are ready and appropriate security controls are implemented.

Bellrock’s approach to cyber risk is to have an independent assessment performed by a cyber expert. The review intends to ensure cyber maturity and includes assessing adequate of security, sufficient incident response and continuity plans. Even for clients who have already undertaken an audit, a refresher audit is recommended at the outset of or during the implementation of AI systems.

Stay informed with our latest articles

* indicates required