At a glance
- AI is rapidly transforming financial workflows, but biased training data can embed unfair or inaccurate outcomes into critical decisions.
- Mitigating algorithmic bias requires continual oversight, diverse data, transparent models and strong, organisation-wide governance structures.
- Effective AI risk management depends on human judgement, governance and growing AI literacy among finance professionals.
By Rosalyn Page
Accounting and financial services firms are incorporating artificial intelligence (AI) into their operations to improve organisational efficiency and accuracy. CPA Australia’s Business Technology Report 2025 found that 89 per cent of businesses used AI in 2025. However, it also found that only 16 per cent of businesses use AI widely across their organisations.
AI has many applications including data analytics and insights, fraud detection, risk assessment and financial reporting. However, it does not come without risks — and one of the major issues is algorithmic bias.
Research shows the data used to train AI models can contain biases or patterns that affect results. Unfair or inequitable outcomes can show up in risk assessments, credit evaluations or financial policies. Furthermore, the decision-making of AI models is not always easily explainable, raising concerns about transparency and trust in their conclusions.
To address these risks, organisations increasingly need strategies that mitigate algorithmic bias through careful data management, model oversight, governance and AI education. The aim of such strategies is fairness, accountability and compliance with ethical standards.
How bias gets in
Bias enters AI systems in various ways, such as through skewed or unrepresentative data, or through the design and configuration of the system itself.
As advanced pattern recognition systems, AI tools can learn unwanted bias from the input data, which can then be transferred into the decision-making and output.
“AI is trained to identify patterns and then exploit those patterns in new data to make predictions,” says Dr Simon O’Callaghan, head of technical AI governance at the Gradient Institute. AI systems are often trained on historical data that reflect human subjectivity, he continues.
For example, judgements about which résumés look “good” or “bad”, or what account activity looks suspicious, get encoded into the model and reproduced in its outputs.
Fairness is not the same thing as accuracy when it comes to AI. Accuracy is objective (for example, the percentage of correct predictions), whereas fairness is highly contextual.
O’Callaghan explains that the context in which the system is deployed matters when deciding what a fair decision looks like. “The definition of fairness in a hiring tool may differ from that in a medical diagnostic or an audit system,” he says.
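To make the distinction concrete, here is a minimal Python sketch showing how a model can look strong on accuracy while an approval-rate comparison across two groups tells a different story. The outcomes, decisions, group labels and the choice of metric are illustrative assumptions, not drawn from any real system.

```python
# Illustrative only: made-up outcomes, decisions and group labels.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # actual outcomes (e.g. loan repaid)
y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]   # model decisions (e.g. loan approved)
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]  # group membership

# Accuracy: share of decisions that match the actual outcome.
accuracy = sum(p == t for p, t in zip(y_pred, y_true)) / len(y_true)

# A simple group-fairness check: compare approval rates across groups.
def approval_rate(g):
    preds = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(preds) / len(preds)

gap = abs(approval_rate("A") - approval_rate("B"))
print(f"accuracy: {accuracy:.0%}, approval-rate gap between groups: {gap:.0%}")
# Prints: accuracy: 80%, approval-rate gap between groups: 20%
```

The same 80 per cent accuracy figure would look acceptable in isolation; only the group comparison reveals the skew, which is why fairness has to be measured in context rather than inferred from accuracy.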
Uncertain outcomes
In accounting and finance workflows, algorithmic bias can lead to errors and unfair decisions, including inaccurate risk assessments, unfair credit evaluations, flawed financial reporting and fraud detection, or an over-reliance on automated outputs, according to industry research.

When financial decision-making is automated at scale, any distortions embedded in the underlying data can be amplified across thousands of decisions, according to Dr Dimitrios Salampasis, associate professor in emerging technologies and fintech at Swinburne University of Technology.
In risk assessments, errors and biases can lead to distorted risk classification, uneven investigations or compliance burdens, which can result in false positive or false negative outcomes.
“You may have training data which actually under-represents certain industries, business models, geographies, risk scores or forecasts,” explains Salampasis, who is a member of CPA Australia’s Digital Transformation Centre of Excellence.
"Training data can under-represent certain industries, business models, geographies, risk scores or forecasts."
There is also the risk of inaccurate financial reporting. “When you have biased algorithms, this might create issues when estimating key accounting elements,” he says. “You may also have risk in relation to materiality in financial reporting.”
In practice, an AI model could misclassify transactions, skew risk scores or distort sampling, influencing which items are considered material and potentially altering outcomes. Furthermore, a lack of critical insight into AI outputs and opaque “black box” systems can hinder transparency, undermine trust, and make it difficult to assess potential discriminatory outcomes.
“An over-reliance on these automated outputs creates issues when it comes to human professional judgement,” Salampasis adds.
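As a small illustration of the materiality point above, the Python sketch below shows how a skewed risk score can quietly change which transactions are selected for audit testing. The transactions, scores and the 0.7 threshold are invented for this example.

```python
# Illustrative only: invented transactions and a made-up 0.7 risk threshold.
transactions = [
    {"id": 1, "amount": 95_000,  "risk_score": 0.82},
    {"id": 2, "amount": 40_000,  "risk_score": 0.55},
    {"id": 3, "amount": 180_000, "risk_score": 0.48},  # large item, but scored low
]

THRESHOLD = 0.7  # items at or above this score are sampled for testing
sampled = [t["id"] for t in transactions if t["risk_score"] >= THRESHOLD]

# If the model systematically under-scores certain items, they never reach
# the sample, and the distortion is invisible in the output alone.
print("selected for testing:", sampled)  # selected for testing: [1]
```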
Bias-mitigation strategies
Recognising bias is just step one. The next challenge is developing safeguards to mitigate bias in workflows. Practical strategies include diverse data sourcing, algorithm audits, stronger governance and AI education.
Structural and historical factors can be embedded in data, so diverse data sourcing is important to help limit skewed outcomes.
However, algorithmic bias mitigation is not a one-time technical fix, but an ongoing governance responsibility, O’Callaghan says. Organisational leaders need to be involved in decisions around the definition of fairness, rather than leaving it to engineers and technical leads alone. This definition also needs to be revisited over time.
“It is not good enough for just the technical people to try to make the system fair,” he says.
"[Audits] are essential because they introduce structure, transparency and accountability into systems that are otherwise opaque."
Algorithm audits are also necessary to test a system’s underlying assumptions and to limit “model drift”, which occurs when a model’s outputs become less reliable because real-world data has shifted away from the data it was trained on.
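One way an audit can test for drift, offered here as a sketch rather than a prescribed method, is to compare a feature’s distribution at training time with its live distribution. The Population Stability Index (PSI) below is a common monitoring statistic; the bin shares are made-up example values.

```python
import math

# Illustrative only: made-up bin shares for one model input (e.g. a score band).
train_share = [0.25, 0.35, 0.25, 0.15]  # share of records per band at training time
live_share  = [0.10, 0.30, 0.35, 0.25]  # share per band observed in production

# Population Stability Index: larger values mean the live data has moved
# further from the data the model was trained on.
psi = sum((live - train) * math.log(live / train)
          for train, live in zip(train_share, live_share))

# A common rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
print(f"PSI = {psi:.2f}")  # PSI = 0.23
```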
Evaluating models is essential, O’Callaghan says. “You can measure different definitions of fairness and see how the system performs against them.” However, there are always trade-offs and competing criteria on questions of fairness, balance and bias.
“You cannot satisfy all definitions of fairness at once,” he notes. Attention needs to be paid to model design and feature selection, and there needs to be ongoing supervision and monitoring of the system. This includes curating and refreshing training data, running fairness and performance tests to assess models, and considering independent validation.
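The trade-off O’Callaghan describes can be shown with a small example. The sketch below applies two common fairness definitions, demographic parity (equal selection rates) and equal opportunity (equal true positive rates), to the same made-up decisions: the system passes the first test and fails the second. The data is illustrative only.

```python
# Illustrative only: made-up outcomes and decisions for two groups.
y_true = [1, 1, 0, 0, 1,  1, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0,  1, 0, 1, 0, 0]
group  = ["A"] * 5 + ["B"] * 5

def selection_and_tpr(g):
    rows = [(t, p) for t, p, grp in zip(y_true, y_pred, group) if grp == g]
    selection = sum(p for _, p in rows) / len(rows)   # share approved
    positives = [p for t, p in rows if t == 1]
    tpr = sum(positives) / len(positives)             # true positive rate
    return selection, tpr

for g in ("A", "B"):
    selection, tpr = selection_and_tpr(g)
    print(f"group {g}: selection rate {selection:.0%}, true positive rate {tpr:.0%}")
# group A: selection rate 40%, true positive rate 67%
# group B: selection rate 40%, true positive rate 50%
# Demographic parity (equal selection rates) holds here,
# while equal opportunity (equal true positive rates) does not.
```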
“You need explainability layers,” Salampasis says. He points to mechanisms such as model documentation, audit trails and independent validation, which allow organisations to interrogate and analyse systems and their outputs.
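What such a layer looks like in practice will vary. The following is a hypothetical Python sketch of an append-only audit trail that records enough context (model version, hashed inputs, output and the human reviewer) to reconstruct how an AI-assisted decision was made; the field names and logging approach are assumptions for illustration, not any particular standard.

```python
import datetime
import hashlib
import json

# Hypothetical sketch of an append-only audit trail for AI-assisted decisions.
# Field names and structure are illustrative, not from any particular standard.
def log_decision(model_id, model_version, inputs, output, reviewer,
                 path="ai_audit_log.jsonl"):
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash inputs so the record is tamper-evident without storing raw data.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,  # human-in-the-loop sign-off
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record one AI-assisted risk classification and who reviewed it.
log_decision("credit_risk_scorer", "2.3.1",
             inputs={"annual_revenue": 1_200_000, "sector": "hospitality"},
             output={"risk_band": "medium", "score": 0.62},
             reviewer="j.smith")
```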

AI risk assessment should also adopt a holistic approach, addressing cybersecurity, privacy, automation bias and legal compliance in a manner proportionate to the AI system’s use and risk profile, according to Tiffany Tan FCPA, audit and assurance lead at CPA Australia.
Auditors, she says, can provide safeguards for systems. Employing risk assessment, controls evaluation and professional scepticism, auditors possess foundational expertise that can be applied to algorithms.
Audits can provide an independent assessment of governance and control arrangements around how models are used, how risks are mitigated, and whether outputs are overseen for explainability, fairness and reliability, Tan continues. “In doing so, audits support transparency and accountability without substituting for management responsibility or technical model validation.”
Governance is essential
Strong governance is needed to manage the risks associated with AI. It needs to be the foundation of AI workflows — not an optional add-on.
Algorithmic bias can also intersect with anti-discrimination frameworks and human rights considerations. Decisions involving protected attributes could amount to unlawful discrimination, according to an Australian Human Rights Commission report, which recommends proactively identifying human rights risks and assessing impacts before and during deployment.
For organisations, this includes AI policies that define responsible and ethical use, with clear roles and responsibilities for oversight, Tan says. Staff training is also needed to ensure an understanding of the technology’s limitations.
Salampasis argues that questions of bias cannot be separated from how models are designed and governed. He stresses the importance of transparency, accountability and alignment with professional standards, particularly in high-risk financial contexts.
Without transparency and accountability, it becomes difficult to challenge or verify AI-driven decisions and identify who is responsible for which outcomes.
Governance must also cover data sourcing, so organisations understand where their data originates and any issues it may carry. However, as with algorithmic bias mitigation, AI governance cannot sit solely with engineers or accountants, Salampasis continues.
There is a need for interdisciplinary teams that facilitate financial, legal and technical collaboration and oversight, particularly in high-risk financial contexts.
It must also extend beyond internal systems to include vendor due diligence to understand how third-party AI tools have been trained and developed.
Regulation is catching up
Increasingly, AI governance is also being defined at the regulatory level.
Best practice is to have documented governance frameworks. Organisations can look to trusted standards for guidance, such as the US National Institute of Standards and Technology’s AI Risk Management Framework playbook, the International Organization for Standardization and International Electrotechnical Commission’s AI management standards, or the Committee of Sponsoring Organizations’ enterprise risk management framework.
Salampasis points to the European Union’s AI Act as a structured, risk-based framework for AI, with heightened governance obligations for high-risk systems. This includes financial services, particularly areas such as credit scoring.
Australia does not have an economy-wide AI regulatory regime equivalent to the EU AI Act and has signalled it does not intend to introduce one. The Australian Government argues that existing laws, such as the Privacy Act 1988, the Australian Consumer Law and the Corporations Act 2001, are sufficient to manage AI-related risks. Nonetheless, it has stated it will intervene where regulatory gaps emerge.
The National AI Plan and Guidance for AI Adoption support this approach by providing practical risk management measures and guidance on ethical principles for AI adoption.
For auditors in particular, governance must extend to how AI is used within audit engagements and align with International Standard on Quality Management requirements.
“There needs to be processes for documenting human-in-the-loop reviews so that human oversight, professional scepticism and judgement remain central,” Tan says. “Good governance does not eliminate risk, but it ensures AI risk is measured, managed, explainable and auditable.”
Develop AI literacy
AI risk management is becoming a core professional responsibility, not solely a technical safeguard. This shift has heightened the need for AI literacy and critical evaluation skills among accounting and finance professionals.
“The profession must treat AI as both an opportunity and a capability shift,” Tan says.
Professionals increasingly need a sufficient understanding of AI to ask informed questions, recognise risk signals and interpret outputs critically. In practice, this means treating AI as a powerful assistant rather than an authority, ensuring final judgement always remains with humans, and building AI training into professional development.
This includes training on prompting and interrogating AI tools, applying industry-specific risk lenses when using AI, and embedding ethics and bias awareness into AI-related work.
“It should augment, not replace, professional expertise,” Tan says. “Even the most advanced AI cannot replace the judgement of a trained accountant or auditor.”
Recommendations for auditors

As organisations accelerate their adoption of AI tools, the accounting and finance profession’s existing strengths place auditors at the forefront of responsible oversight.
“Auditors already possess foundational expertise in risk assessment, controls evaluation, independence and professional scepticism that positions them well to provide trust in this evolving area,” says Tiffany Tan FCPA, audit and assurance lead at CPA Australia. She has five key recommendations for auditors when assessing AI use:
- Understand how clients are using AI across financial reporting processes and internal controls.
- Ask tough questions: What data is being used? How reliable are outputs? What controls exist? Who reviews AI‑assisted decisions?
- Be prepared to assess how AI affects risk of material misstatement.
- Develop skills in evaluating AI-related evidence, including recognising when AI-generated outputs are not sufficient as audit evidence.
- Prepare for emerging AI assurance services, which will become increasingly relevant.

