At a glance
- Businesses face mounting pressure to balance rapid AI innovation with ethical, legal and operational accountability.
- Australian companies are currently using voluntary standards and existing legislation to guide AI use.
- Accountants and finance professionals are emerging as key players in AI governance.
AI adoption is surging. The CPA Australia Business Technology Report 2025 shows AI use in business rose from 69 per cent in 2024 to 89 per cent in 2025, with the percentage of businesses using it “all the time” doubling in that period.
Accounting and finance teams are reaping the benefits by using AI to enhance data analytics and insights, support research and communication, and streamline management accounting tasks. Yet as adoption grows, so do questions about whether the technology is being used responsibly.
Global ethicist Dr Catriona Wallace defines responsible AI as technology that does not cause harm or unintended consequences. “That’s the baseline,” she says. “But who is responsible for that? That’s where we still see some disconnect. From investors to boards and executive teams, right down to frontline employees, there needs to be a company-wide understanding of what ethical AI is and a commitment to create a responsible AI-first strategy, which is needed for sustainable innovation.”
The difficulty for businesses is knowing how to build such a strategy.
AI governance in Australia
While the Australian Government introduced a framework in 2024 to steer the safe and responsible use of AI in the public sector, no such AI technology-specific laws exist for the private sector. Currently, voluntary guidelines and existing laws are the main drivers of responsible AI in private business.
The AI Ethics Principles, released in 2019, outline eight voluntary principles for responsible AI design and use. In 2024, the National AI Centre (NAIC) Voluntary AI Safety Standard added 10 guardrails covering areas such as testing, transparency and accountability.
That same year, the government proposed mandatory guardrails for AI in high-risk settings, though it is yet to confirm whether these will be enacted.
In August 2025, the Productivity Commission recommended that AI regulation be built on existing legal frameworks, rather than new AI-specific laws.
“Over-regulation of AI will likely stifle innovation and investment into it,” says Gavan Ord, business investment and international lead at CPA Australia.
“While AI is evolving faster than regulation, many existing laws already address inappropriate uses. CPA Australia supports implementing risk-based AI regulations only where material gaps in existing legislation are identified.”
In an October 2025 speech, Senator the Hon Tim Ayres, Minister for Industry and Innovation, and Minister for Science, reaffirmed that Australia already has “strong, adaptive laws” that support consumer rights, privacy and fair competition.
These include the Australian Consumer Law, as well as the Privacy Act 1988, plus criminal, anti-discrimination, online safety, defamation and intellectual property laws. He added that good AI adoption requires thoughtful conversations in each workplace, rather than “new national bureaucracies that duplicate what we already have”.
“It is up to businesses to ensure AI is used ethically, effectively and democratically in workplaces, and responsibly in their goods and services,” he said.
Innovation tightrope
AI is built on data and deployed through software systems, making it vulnerable to risks common to other digital technologies, including cybersecurity breaches, privacy violations and data misuse.
Other hazards include algorithmic bias, discrimination and non-compliance with existing laws. Businesses also face risks from AI performance or system failures, such as hallucinations or adversarial attacks.
CPA Australia’s business technology report found the most frequently cited negative impacts of AI were data/privacy concerns and dependence on AI with reduced oversight. “Human oversight remains essential, especially in finance functions requiring accuracy, assurance and verification,” says Ord.
He points to several frameworks that can help leaders ensure ethical, secure and compliant use of AI. Further guidance comes from the Australian Securities and Investments Commission (ASIC), which has issued AI governance considerations for Australian financial services (AFS) and credit licensees, and from the Australian Institute of Company Directors, whose A Director’s Guide to AI Governance offers practical advice for boards using or planning to deploy AI within their organisations.
Meanwhile, the NAIC’s 2025 Guidance for AI adoption recommends six practices for responsible AI governance and adoption. The Office of the Australian Information Commissioner also provides guidance on privacy and generative AI models.
Although not mandatory, these and other frameworks can help businesses mitigate risk and maintain trust. “Misuse not only carries legal consequences under existing law,” Ord says, “but also risks reputational damage and loss of public trust.”
APAC lessons in AI governance
Across the Asia-Pacific (APAC) region, governments are experimenting with different models of AI regulation. Singapore has been a leader since launching the world’s first Model AI Governance Framework in 2019.
This accountability-based framework advises organisations in areas spanning internal governance structures and measures, human oversight, operations management, and stakeholder interaction and communication.
The country takes a sectoral approach to AI regulation, with individual ministries, authorities and commissions responsible for producing regulatory compliance guidelines and regulations.
The Monetary Authority of Singapore, for example, released the Veritas Toolkit to help financial institutions assess AI for fairness, ethics, accountability and transparency.
Collin Jin FCPA, Deloitte Asia Pacific’s audit and assurance digital enablement and AI leader in Shanghai, and president of CPA Australia’s East and Central China Committee, says that “if the goal is scalable, export-ready governance that still lets startups move fast, Singapore’s risk-based disclosure toolkit and South Korea’s tiered statute are the models most copied inside APAC right now”.
Japan, India, Hong Kong, Thailand and Vietnam are taking similar paths to Australia, Jin says, relying on non-binding principles and existing laws. Other jurisdictions, including Mainland China and South Korea, are introducing AI-specific regulations.
Mainland China, where the CPA Australia report shows AI is more widely used across business operations, has established a multilayered regulatory framework that combines laws, administrative measures and ethical guidelines. “These rules emphasise state oversight, data security and social stability, while also promoting innovation,” Jin explains.
“While China has been leading in many areas of AI innovation and adoption, it is interesting to see China currently tends to take a tight and product-level licensing model in governance, which is undeniably ‘leading’ in stringency.”
The accountant's role
As AI use expands and emerging technologies such as quantum computing and the metaverse develop, accounting professionals are becoming central to effective governance.
“Accountants are no longer just ‘guardians of the books’,” says Jin. “They are becoming the architects of the control environment in which AI systems are allowed to operate.”
This evolving role includes translating AI risk registers into financial terms that boards can understand, he says, while also embedding AI within established internal control frameworks such as COSO ERM or COBIT, and making explainability part of the audit process.
“Accountants can also run AI cost–benefit and risk dashboards alongside traditional KPIs, and champion third-party assurance before regulators ask for it.”
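Jin’s point about translating AI risk into financial terms can be made concrete. The sketch below is purely illustrative: the Python structure, system names and dollar figures are assumptions made for this example, not drawn from any framework cited in this article. It expresses each risk register entry as an expected annual loss, a figure a board can weigh directly against AI’s projected benefits.

```python
from dataclasses import dataclass

@dataclass
class AIRiskItem:
    """One line of a hypothetical AI risk register."""
    system: str        # the AI system or use case
    risk: str          # e.g. hallucination, bias, data leakage
    likelihood: float  # estimated annual probability (0 to 1)
    impact_aud: float  # estimated financial impact if the risk is realised

    @property
    def expected_loss(self) -> float:
        """Expected annual loss: the 'financial terms' a board reads."""
        return self.likelihood * self.impact_aud

# Illustrative entries only; real figures would come from an
# organisation's own impact assessments.
register = [
    AIRiskItem("invoice-coding model", "misclassification", 0.20, 150_000),
    AIRiskItem("customer chatbot", "hallucinated advice", 0.05, 900_000),
]

for item in register:
    print(f"{item.system:<22} {item.risk:<20} "
          f"expected loss ~A${item.expected_loss:,.0f}/yr")
```

Reporting those expected losses alongside traditional KPIs, as Jin suggests, then becomes a dashboarding exercise rather than a new discipline.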
Ord agrees that accountants play a vital role in data governance and the validation of AI outputs, while auditors assess AI-generated information that impacts financial reporting.
“Without such activity, the benefits of AI, especially improved accuracy of repetitive tasks, are less likely to occur,” he says.
Wallace emphasises that technical oversight must be matched by strong ethical foundations. She urges organisations to embed ethical principles into their culture and train staff accordingly. When done well, this combination of robust governance and ethical practice can enable AI to deliver significant benefits while minimising harm.
AI governance in the 2020s, Wallace says, should be viewed much like financial governance in the 1980s. “We can see we’ve benefited from the introduction of strong financial governance. We now need the same for AI.”
Supportive frameworks
Gavan Ord, business investment and international lead at CPA Australia, points to several frameworks that can help leaders ensure ethical, secure and compliant use of AI:
- Voluntary AI Safety Standard, Australian Government Department of Industry, Science and Resources
- ISO/IEC TR 5469:2024 Artificial Intelligence — Functional safety and AI systems, covering AI use in safety-related functions
- ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system, providing a framework for AI governance and risk management
- APES 110 Code of Ethics for Professional Accountants, which includes the fundamental principles of ethics: integrity, objectivity, professional competence and due care, confidentiality and professional behaviour.
Simple rules for responsible AI use
The Australian Government Department of Industry, Science and Resources’ Guidance for AI adoption outlines six key steps for organisations starting their AI journey or using AI in low-risk settings.
- Decide who is accountable: Assign a senior leader to oversee AI use and create an AI policy.
- Understand impacts and plan accordingly: Conduct stakeholder impact assessments and create contestability channels.
- Measure and manage risks: Implement a risk-screening process to identify and address potential issues.
- Share essential information: Develop and maintain an AI register and ensure transparent disclosure of AI use (see the sketch after this list).
- Test and monitor: Test systems before deployment, set up monitoring processes to detect changes in performance and behaviour, and extend existing data governance and cybersecurity practices to AI systems.
- Maintain human control: Ensure appropriate human oversight based on the system’s level of autonomy and risk, and incorporate human override points.
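To make these steps less abstract, here is a minimal sketch in Python of how an AI register entry and a simple risk screen might be recorded. Every name, field and scoring rule below is a hypothetical illustration, not part of the department’s guidance.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRegisterEntry:
    """One hypothetical entry in an organisation's AI register
    ('Share essential information')."""
    system_name: str
    owner: str            # accountable senior leader ('Decide who is accountable')
    purpose: str
    risk_rating: str      # output of the risk screen below
    human_override: bool  # 'Maintain human control'
    last_reviewed: date = field(default_factory=date.today)

def screen_risk(handles_personal_data: bool,
                affects_individuals: bool,
                makes_autonomous_decisions: bool) -> str:
    """Toy risk screen ('Measure and manage risks'): counts risk factors.
    A real process would follow the department's published guidance."""
    score = sum([handles_personal_data, affects_individuals,
                 makes_autonomous_decisions])
    return ["low", "medium", "high", "high"][score]

entry = AIRegisterEntry(
    system_name="expense-claim classifier",
    owner="CFO",
    purpose="pre-code expense claims for human review",
    risk_rating=screen_risk(True, False, False),
    human_override=True,
)
print(entry)
```

In practice such a register would sit alongside an organisation’s other compliance records; the point is simply that the six steps translate into concrete, auditable artefacts.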
Further information and resources are available on the department’s website.