At a glance
- Companies such as KPMG Australia and Spark New Zealand have developed customised LLMs to improve efficiency and productivity.
- Customised LLMs present huge benefits for organisations that get them right, but robust guardrails are required.
- Data quality and volume, in-house skills and a clear AI governance framework should be in place before making any technology live.
In March 2023, KPMG Australia welcomed a new team member to the fold. Dubbed “KymChat”, this new addition is a customised version of ChatGPT, the powerful large language model (LLM) chatbot released in 2022 by artificial intelligence (AI) research laboratory OpenAI.
KPMG Australia is among a growing number of organisations to create their own LLM. Other organisations exploring their LLM options include Westpac, Australian construction company John Holland and New Zealand telco Spark.
Meanwhile, customised LLMs look set to take off in Singapore, with the government last year announcing the “AI Trailblazers” joint venture with Google Cloud.
The program is designed to help organisations identify real-world challenges that can be addressed with generative AI, build prototypes and bring them to production.
With the evolution of accounting in the digital age, here is what accountants need to know about LLMs.
What is an LLM?
An LLM is an advanced form of AI that can understand and generate human-like text. It is considered generative AI because it can create new content.
An LLM works by analysing large amounts of text data, such as books, articles and websites, to learn language patterns. These patterns allow it to predict and generate relevant content based on the input it receives, so it can help with a range of tasks, from writing reports and answering questions to engaging in conversation.
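To make the idea concrete, here is a toy sketch of the underlying principle: learn which words tend to follow which from example text, then use those patterns to predict what comes next. The training text is invented, and real LLMs learn these patterns with neural networks over billions of documents rather than simple word counts.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which words follow which
# in some example text, then predict the most common follower.
# Real LLMs learn far richer patterns with neural networks at vast scale.
training_text = (
    "the auditor reviewed the ledger and the auditor approved the report "
    "the accountant reviewed the ledger and the accountant filed the report"
)

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("reviewed"))  # -> "the"
```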
While building an LLM from scratch is cost-prohibitive for most organisations, customising existing foundational models, such as those available through Microsoft’s Azure OpenAI Service or Google’s Gemini, may help drive efficiencies and enhance productivity across a workforce.
How to customise an LLM
Research from Accenture shows 85 per cent of C-suite leaders are focused on increasing generative AI investments in 2024.
Customising an LLM may be an appealing solution because it bridges the gap between generic AI capabilities and specialised, industry-specific tasks.
Dr Jake Renzella, lecturer and co-head of the computing and education research group in the School of Computer Science and Engineering at the University of New South Wales, says there are two main ways to do it.
“The first is by fine-tuning a foundational model,” he says. “For instance, you can expose it to some examples of tasks that you want it to be better at and then train it.”
In an accounting context, imagine you have some financial data that contains a common type of error. If you have many examples where that error occurs in a real dataset, you could show them to the LLM and train it to identify such errors.
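As a rough illustration, the sketch below shows how such labelled examples might be packaged into the chat-style JSONL format that many fine-tuning services accept. The ledger fields, the GST error and the file name are illustrative assumptions, not a description of any particular firm's data or vendor's schema.

```python
import json

# Hypothetical labelled ledger lines: some contain a common error
# (an incorrect GST amount). Data and field names are illustrative only.
labelled_rows = [
    {"description": "Office supplies", "net": 200.00, "gst": 15.00,
     "label": "ERROR: GST is 7.5% of net; expected 10%"},
    {"description": "Software subscription", "net": 100.00, "gst": 10.00,
     "label": "OK"},
]

def to_training_example(row):
    """Convert one labelled row into a chat-style fine-tuning example."""
    prompt = (f"Check this ledger line for GST errors: "
              f"description={row['description']}, net={row['net']:.2f}, "
              f"gst={row['gst']:.2f}")
    return {"messages": [
        {"role": "system", "content": "You flag GST errors in ledger lines."},
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": row["label"]},
    ]}

# One JSON object per line: the JSONL layout commonly expected for
# chat-format fine-tuning data.
with open("training_data.jsonl", "w") as f:
    for row in labelled_rows:
        f.write(json.dumps(to_training_example(row)) + "\n")
```

A real fine-tuning run would need far more examples than this, covering the full range of error types, a point Renzella returns to below.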
The other common way to customise an LLM is through retrieval-augmented generation (RAG). RAG involves exposing a foundational model to documents or data from your organisation that it is unlikely to have been trained on.
“You are basically saying to these LLMs, ‘Hey, here’s a bunch of information about our organisation to draw on when you are doing your tasks’,” Renzella says.
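The sketch below shows the RAG pattern at its simplest: rank internal documents against a question, then place the best matches into the prompt so the model answers from the organisation's own material. The policy snippets are invented, and the crude keyword-overlap scoring stands in for the embedding-based vector search a production system would normally use.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Documents are invented; real systems usually rank them with embeddings
# stored in a vector database rather than keyword overlap.
internal_docs = [
    "Expense policy: claims over $500 require written manager approval.",
    "Travel policy: economy class applies to all domestic flights.",
    "Leave policy: annual leave requests need two weeks' notice.",
]

def score(question, doc):
    """Count words shared between the question and a document."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def build_rag_prompt(question, docs, top_k=2):
    """Retrieve the top_k most relevant documents and place them in a prompt."""
    ranked = sorted(docs, key=lambda d: score(question, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_rag_prompt("Do expense claims over $500 need manager approval?",
                       internal_docs))
```

The assembled prompt is then sent to the foundational model, which answers from the retrieved policies rather than its general training data.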
Benefits of a customised LLM
Creating a customised LLM presents huge benefits for those who get it right.
Data integrity is one of them, says Jannat Maqbool CPA, executive director of the Artificial Intelligence Researchers Association and a member of CPA Australia’s Digital Transformation Centre of Excellence.
“You can ensure the data is from trusted, verified sources,” she says. “That means you can have a really good go at eliminating bias and improving the reliability of the model.”
Maqbool notes that customised LLMs can also present cost benefits.
“Bringing an LLM in-house means you do not have subscription fees and API charges, so you are only paying for what you are using or what you really need,” she says.
“You may also have a competitive advantage,” Maqbool adds. “You can align it more with market conditions and compliance requirements, as well as your clients and stakeholders.
“The stability and quality mean that you can be a lot more responsive as well, because it’s part of your infrastructure, and you can optimise the model.”
How businesses can prepare for an LLM
Before considering a customised LLM for your organisation, Renzella says there are three key things to assess.
“The first is do you have the right training data and the right volume, because you might need tens of thousands of examples, which might seem a lot to some organisations, but in the scale of LLMs, that’s a drop in the ocean,” he says.
“The second is whether your dataset is representative – if you are trying to teach a model to do a task, do you have lots of different examples that cover the full scope of that task?”
The third, says Renzella, is about having the right capability.
“You could have the data, but has it been thoroughly reviewed for quality control, and do you have the right people in-house, such as software engineers or machine learning engineers, who can put the data in the right format in order to successfully fine-tune or build a RAG model?”
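As a rough sketch of those three checks, the code below runs some basic readiness tests over a set of labelled examples: is the volume sufficient, is each label category reasonably represented, and does every record have the expected fields? The thresholds and field names are assumptions for illustration, not an industry standard.

```python
from collections import Counter

# Hypothetical readiness checks for a labelled training dataset.
# Thresholds and field names are illustrative assumptions only.
REQUIRED_FIELDS = {"description", "net", "gst", "label"}
MIN_EXAMPLES = 10_000        # "tens of thousands" as a rough rule of thumb
MIN_SHARE_PER_LABEL = 0.05   # each category should be reasonably represented

def check_readiness(examples):
    issues = []
    total = max(len(examples), 1)

    # 1. Volume: enough examples to fine-tune on?
    if len(examples) < MIN_EXAMPLES:
        issues.append(f"Only {len(examples)} examples; want {MIN_EXAMPLES}+.")

    # 2. Representativeness: does each label cover a reasonable share?
    counts = Counter(e.get("label") for e in examples)
    for label, n in counts.items():
        if n / total < MIN_SHARE_PER_LABEL:
            issues.append(f"Label '{label}' covers under 5% of the data.")

    # 3. Format: is every record complete?
    for i, e in enumerate(examples):
        if not REQUIRED_FIELDS.issubset(e):
            issues.append(f"Record {i} is missing required fields.")

    return issues or ["Dataset passes these basic checks."]

print(check_readiness([{"description": "Office supplies", "net": 200.0,
                        "gst": 15.0, "label": "ERROR"}]))
```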
Humans at the helm
At KPMG, KymChat is viewed as an assistant governed by humans.
“The human is at the helm,” says Jason Fogaty, partner, KPMG Connected Technology Group at KPMG Australia, adding that data is “absolutely vital to AI solutions”.
“I think many organisations are having to work through data quality, but there have been very large investments in data transformation programs over the past decade, which have certainly laid a strong foundation for a lot of clients.
“Of course, there’s still work to be done, and you still need to ensure you have the right controls, data readiness assessment and trusted AI impact assessment.”
Fogaty notes that content from KymChat is clearly flagged as AI-generated and must be reviewed by a human.
“Our employees are also trained in how to review and test what is generated from KymChat.”
Spark New Zealand, the country’s largest telecommunications and digital services company, is also rolling out customised LLMs across its business, and upskilling is an important consideration.
“There’s an acceptance that more upskilling is required,” says Anshuman Banerjee, tribe and chapter area lead at Spark New Zealand.
“We have just launched our internal skilling hub called Te Awe. It was created in response to the surge in demand for skills in new technologies such as AI, data and analytics, and cloud, to ensure that Spark is equipping our people with new skills in high-demand areas, while at the same time ensuring that as an organisation, we are building the future critical skills we need in a sustainable and inclusive way.
“We also have our own testing teams, which are creating some of the guardrails,” Banerjee adds.
“The learning that’s happening is not just for the data teams or the AI teams, but for the ecosystem of people surrounding the technology.”
AI governance and risk management
As with any form of AI, LLMs require clear guardrails to govern their behaviour and output.
Dr Tiberio Caetano is chief scientist at Gradient Institute, which works to build safety, ethics, accountability and transparency into AI systems. He says companies must consider the risks of adapting LLMs.
“What is particularly different with AI is that it’s a general-purpose technology,” he says. “It’s extremely difficult to constrain an LLM to only operate within the compliance of a specific purpose. You need to use the right guardrails to make sure that it doesn’t go outside the scope it has been designed for.”
In mid-2023, KPMG released KymChat to clients. Fogaty says that employees and clients have embraced it, but “with rightly a degree of caution”.
“We are very focused on ensuring that there’s the right AI governance system in place for wherever we are working with AI internally, but also within our client environments as well.”
For Maqbool, the benefits of LLMs must be carefully balanced with the risks.
“When ChatGPT first came out, people saw the huge productivity gains for business, but you have to be really wary when you let loose this technology,” she adds.
“What impact could an LLM have on your brand if you don’t do it properly? It’s not like other technologies, because it is quite clever on its own.”
How KymChat boosts productivity
For KPMG, productivity and efficiency gains were early wins from KymChat.
KPMG Australia’s Jason Fogaty says the organisation had been experimenting with AI technology for some time, but the arrival of ChatGPT represented a “game changer”.
“We were one of the early adopters of Microsoft Azure’s OpenAI Service, so that enabled us to get our hands very quickly on OpenAI and to be able to build KymChat,” he says.
There are now 10,000 users of KymChat at KPMG. Its initial use case was helping staff find subject-matter experts within the organisation who could assist them in their roles or engagements.
“Then we moved on to our policies,” Fogaty says. “Once we checked the data quality and the data accuracy, they got loaded into KymChat and that enables our staff to ask questions of policies, which is really helpful as an experience, and also enables them to get answers quickly.”
Fogaty says KPMG needed to ensure that KymChat was highly secure, and that data did not leave the private environment.
“We have our Trusted AI Approach, and we prioritised use cases that we saw as low risk, so we could ensure that our controls were robust and that the new AI governance landscape we were building was really humming and fit for purpose before we looked to extend the rollout across the business.”
KPMG developed a Prompt Confidence Index, which enables it to test the accuracy of what’s coming out of KymChat, Fogaty adds.
“We now know that Kym is over 90 per cent accurate all of the time.”
Spark New Zealand’s LLM journey
Spark New Zealand’s Anshuman Banerjee says the company began its LLM journey in 2023.
“There were certain areas of the business that were very keen when we started the journey and we engaged with them to create some proof of concepts,” he says.
“For example, our contact centres and retail stores were keen to use the technology, and we created a product for our contact centre agents and store agents to be able to query a knowledge base, identify plans and procedures and SOPs [standard operating procedures].
“This means when they’re on a call with a customer, they’re able to now query for information using a simple chat interface and get that information really easily and faster than before,” Banerjee explains.