At a glance
- On the list of the hottest buzzwords of 2023, “ChatGPT” surely ranks near the top. Created by San Francisco‑based tech company OpenAI, this generative artificial intelligence (AI) tool promises to create almost any form of content without direct human intervention.
- Generative AI tools are in the spotlight not just for what they can do, but for how they go about doing it – and, in some cases, whether they should be used at all in professional settings.
- As the use of generative AI becomes more widespread and normalised, accountants must find the balance between applying professional scepticism and staying open‑minded about the benefits of this emerging technology.
Inside the “black box”
GPT stands for “generative pre‑trained transformer”, a type of AI model adept at understanding and creating human language.
Some accountants are already using generative AI tools to help analyse historical financial data and make predictions about future trends.
AI algorithms can detect irregularities and anomalies in financial transactions, and they can automate the generation of financial reports, such as balance sheets and cash flow statements.
AI‑powered tools are also being used to assist in audits by analysing vast amounts of financial data to identify potential audit risks. They can enhance client service and support by answering common accounting‑related queries.
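To make the anomaly detection use case concrete, here is a minimal sketch of what it can look like in practice. It is illustrative only – the column names, sample data and contamination setting are assumptions, not a description of any particular firm's system – and uses scikit-learn's IsolationForest to flag unusual entries for human review.

```python
# Illustrative sketch only: flag unusual journal entries with an unsupervised
# anomaly detector. Column names and values are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Assumed ledger extract: one row per posted transaction.
ledger = pd.DataFrame({
    "amount": [120.0, 95.5, 101.2, 98.7, 25_000.0, 110.3],
    "hour_posted": [10, 11, 14, 9, 3, 15],  # a 3 a.m. posting stands out
})

# contamination=0.2 assumes roughly 20% outliers; a toy value for toy data.
model = IsolationForest(contamination=0.2, random_state=42)
ledger["flag"] = model.fit_predict(ledger[["amount", "hour_posted"]])

# -1 marks entries the model treats as outliers; these go to a human
# reviewer, not straight into the accounts.
print(ledger[ledger["flag"] == -1])
```

The point of the sketch is the last step: the model only nominates candidates, and a professional still decides what each flagged entry means.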
“AI has delivered many advantages, but there is a need for scepticism, because it does present challenges and ethical concerns,” says Anju De Alwis FCPA, managing director of Ultimate Access Education Institute and member of CPA Australia’s Ethics and Professional Standards Centre of Excellence.
These concerns include data privacy and security, bias and discrimination, trust and the role of human judgement. If generative AI tools are “black boxes”, how can the validity of their answers be evaluated properly?
“We have AI that we know works, but we can’t explain precisely why,” says Dr Simon Longstaff AO FCPA, executive director of The Ethics Centre.
“That’s because the way the large language models and other systems operate means they can be trained to produce a result. However, it’s not like an older form of computer logic, where you could see that A leads to B leads to C.
“We don’t have ‘explainable AI’ at the moment,” Longstaff adds. “People are trying to work out how to interrogate what’s happening, or to create systems that can be explained, but it means we are currently using things that we don’t fully understand in terms of their internal operations.”
Regulation has always required appropriate levels of governance around technology. When personally identifiable information is involved, for example, there are regulations that dictate what companies must and must not do.
Professor Matt Kuperholz, data scientist with Deakin University’s Centre for AI and the Future of Business, notes that forward‑thinking companies have already created “ethical guardrails” to ensure their use of these technologies is appropriate and in line with their customers’ expectations.
“We don’t put brakes into a race car so we can go slowly,” Kuperholz says.
“We put brakes into a race car so we can go really quickly, but safely. Most organisations have been using AI for a long time, even if it’s simply through something like a recommendation engine or a third‑party piece of marketing software.
“I think now it’s just very obvious that AI is hitting the mainstream, and we will see things like generative pre‑trained transformers being integrated into things like the Microsoft Office Suite and the Google Suite, so they’ll be landing on every desktop soon,” Kuperholz says.
AI and accountability
Technology should never be unrestrained, but who is ultimately accountable for generative AI tools? Longstaff says it is a question yet to be resolved.
“We have to work out, for example, whether or not the person who initiates the system and designs it has adequately discharged their responsibilities in terms of building the ethics into the design itself and putting in certain protections and restraints that we want.”
Technologies like generative AI should always be bound by a degree of ethical consideration, Longstaff says.
“You’ve got to start at the very beginning and say, ‘What is enabled by this technology?’ How am I going to restrain it, so that there are only things which are intended and, hopefully, for the good? How do you minimise the use of it for other purposes?
“There can also be another level of accountability, in terms of the person who deploys it,” Longstaff adds. “To what extent have they put in place measures to ensure that it is only used for the proper purposes for which it was designed?”
Governance and professional scepticism
APES 110 Code of Ethics for Professional Accountants sets out the fundamental principles of ethics for the profession. It requires accountants to be independent of mind in order to act with integrity and to exercise objectivity and professional scepticism.
“If someone who’s performing an audit has ethical concerns around the information that AI is giving them, whether that’s around bias or accuracy or whatever it may be, they have a responsibility to raise their concerns,” De Alwis says.
Longstaff adds that the results produced by generative AI tools should also be “able to be tested against analogue judgement in the real world”.
“This is not different to what accountants have always done, particularly in the auditing profession,” he says.
“Companies are constantly producing accounts, which they claim to be a true and fair representation of the state of the company’s financial affairs, only for it to be discovered through the independent review of auditors that perhaps it’s not true.
“If you’ve got an expert system like AI producing findings that are supposed to describe the world – well, a good accountant would go into the world and say, ‘Does it look right? Is there any evidence beyond what the system has produced to validate that?’
“These are ‘bread and butter’ things to do for a profession that is oriented towards things being true,” Longstaff says. “That’s a core skill to bring to bear.”
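What might that look like day to day? One simple form of testing AI output against the real world is re-deriving the figure from source records. The sketch below is a minimal illustration with hypothetical field names: it checks an AI-generated balance sheet both for internal consistency (assets must equal liabilities plus equity) and against totals recomputed independently from the ledger.

```python
# Illustrative sketch with hypothetical field names: never accept an
# AI-generated statement at face value; re-derive key totals from the ledger.
ai_balance_sheet = {"assets": 500_000.0, "liabilities": 320_000.0, "equity": 180_000.0}

# Assumed source ledger: (account_type, balance) pairs.
ledger = [("asset", 200_000.0), ("asset", 300_000.0),
          ("liability", 320_000.0), ("equity", 180_000.0)]

def recompute(account_type: str) -> float:
    """Independently total one account type straight from the ledger."""
    return sum(balance for kind, balance in ledger if kind == account_type)

# Check 1: internal consistency - assets must equal liabilities plus equity.
identity_ok = abs(
    ai_balance_sheet["assets"]
    - (ai_balance_sheet["liabilities"] + ai_balance_sheet["equity"])
) < 0.01

# Check 2: each AI-produced total must tie back to the underlying records.
ties_ok = all(
    abs(ai_balance_sheet[key] - recompute(kind)) < 0.01
    for key, kind in [("assets", "asset"), ("liabilities", "liability"),
                      ("equity", "equity")]
)

print("identity holds:", identity_ok, "| ties to ledger:", ties_ok)
```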
Kuperholz says companies should ensure that AI systems have an appropriate level of explainability, transparency and replicability.
“Will these models break when they’re presented with new data that they haven’t seen before? Are they secure? Is there the ability for a nefarious actor to capture sensitive intellectual property or, even worse, customer information?” Kuperholz asks.
“There’s another set of performance monitoring around safety and control,” he adds. “These are challenging things to monitor, but not at all impossible.”
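One widely used safeguard for the “new data” question Kuperholz raises is a drift check: comparing the data a model sees in production with the data it was trained on. The sketch below is a minimal illustration using a two-sample Kolmogorov–Smirnov test from SciPy; the synthetic figures are stand-ins for real transaction amounts.

```python
# Illustrative sketch: a drift check compares live inputs with the data the
# model was trained on. All figures here are synthetic stand-ins.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_amounts = rng.normal(loc=100, scale=15, size=1_000)  # what the model saw
live_amounts = rng.normal(loc=160, scale=40, size=1_000)      # what it sees now

# Two-sample Kolmogorov-Smirnov test: a small p-value means the two
# distributions differ, so the model's assumptions may no longer hold.
result = ks_2samp(training_amounts, live_amounts)
if result.pvalue < 0.01:
    print(f"Drift detected (KS statistic {result.statistic:.2f}); "
          "escalate for human review before trusting the model's output.")
```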
De Alwis adds that accountants should consider the broader technology ecosystem while using AI tools.
“Who is feeding the data into these systems, and can we trust the data?” she asks.
“Management accounting is all about insights for the future, and that’s where AI will get really interesting, but we need to constantly think of the veracity of the data – that’s where professional scepticism comes in. We need to question the insights that we are getting from technology.”
As generative AI tools evolve, Kuperholz says we can expect a greater focus on responsible AI in general, and international standards are already being written.
“I think we’ll move towards an adoption of certain standards for explainability, for ethics, for bias, security and replicability,” he says. “All of these areas will hopefully move towards realistic and attainable standards, and tools that organisations can use.”
In the meantime, De Alwis says accountants must apply their professional scepticism to the use of AI in the workplace.
“Accountants are required to serve the public, and that underpins so much of our ethical considerations,” she says.
“When it comes to making decisions about the use of technologies, accounting professionals have a seat at the table to ask the right questions, but in order to ask the right questions, we need to know what to ask.
“We need to gain knowledge about technologies,” De Alwis says. “We can’t assume that it’s not part of our job to understand how technologies like AI work.”
Tech updates to the IESBA code
The International Ethics Standards Board for Accountants (IESBA) has recently released technology‑related revisions to the IESBA Code to keep pace with technological developments.
These revisions aim to guide the ethical mindset and behaviour of professional accountants in both business and public practice as they adapt to new technology.
The revisions include:
- Strengthening the Code in guiding the mindset and behaviour of professional accountants when they use technology.
- Providing enhanced guidance fit for the digital age in relation to the fundamental principles of confidentiality, and professional competence and due care, as well as in dealing with circumstances of complexity.
- Strengthening and clarifying the International Independence Standards (IIS) by addressing the circumstances in which firms and network firms may or may not provide a technology‑related non‑assurance service to an audit or assurance client.
The revisions are set to come into effect in December 2024.