As told to Susan Muldowney
Question: “Our firm is trialling artificial intelligence (AI) tools to generate financial advice. The outputs are fast and mostly accurate, but they lack nuance. I am concerned that junior staff are relying too heavily on them without applying professional judgement. As a senior accountant, how do I ensure ethical oversight of technology without stifling innovation?”
Answer: The use of technology like AI is increasingly becoming part of professional practice, and its role is growing in substance and impact. While the output of AI may be fast, there are clear ethical, professional and legal issues at stake. When it comes to AI in accounting, there is a risk of outsourcing human judgement along with the advice.
The starting point is simple but often overlooked: use of advanced technology does not change an accountant’s ethical obligations. APES 110 Code of Ethics for Professional Accountants (the Code) outlines the five fundamental principles of ethics for professional accountants and establishes the standard of behaviour that is expected.
These fundamental principles — integrity, objectivity, professional competence and due care, confidentiality and professional behaviour — still apply, whether the work is drafted by a junior accountant, a partner or an algorithm.
Clients seek advice that is accurate, appropriate and relevant to their circumstances. “Mostly accurate” is not a standard they expect — nor should they. AI can process information quickly, but it cannot fully understand a client’s goals, risk appetite, family dynamics or broader context.
That interpretive layer is the accountant’s responsibility, which cannot be delegated. The tools are impressive, but they lack nuance, and in a profession built on trust and client relationships, nuance matters.
The Code’s conceptual framework is particularly useful here. It requires accountants to identify threats to compliance with the five fundamental principles, evaluate their significance and apply safeguards where necessary.
AI introduces a new and significant technology threat that we need to continue to evaluate and address. This does not mean the tools should be avoided, but instead they must be actively managed.
Safeguards might include mandatory human review and audit of AI-generated advice; clear documentation of where professional judgement has been applied; regular testing of tools for accuracy; and training staff to treat AI as an assistant rather than an authority.
A useful sense check is the Code’s “reasonable and informed third-party test”: would another competent professional or a regulator agree that the judgement exercised was appropriate?
The question you outline also highlights a strategic issue about the future of the accounting profession. If accountants allow technology to replace judgement rather than support it, they risk eroding their own value. Information has always been widely available, but clients pay for insight, relevance and confidence that someone has genuinely considered their unique situation.
Ethical oversight does not stifle innovation. Done well, it enables it. Accountants must maintain judgement, avoid complacency, and implement safeguards to uphold integrity and competence, as well as the public trust that is paramount for a sustainable profession.
By setting clear expectations, guardrails and accountability, firms can harness the benefits of AI while protecting the principles that underpin professional practice. The message to junior staff should be clear: use AI tools and embrace the efficiency that they bring, but remember that accountability, judgement and trust will always rest with you.

