At a glance
By Prue Moodie
In 2023, after an employee accidentally disclosed proprietary source code to ChatGPT, Samsung Electronics banned workers from using generative artificial intelligence (Gen AI) tools on company devices.
It may have been a moment of truth for Samsung Electronics, but really — had they not seen it coming?
Free Gen AI programs are proving irresistible to employees. A 2024 global workplace survey by Microsoft and LinkedIn found that an overwhelming majority of white-collar workers are already bringing their own AI tools into the workplace, a practice dubbed “BYOAI”.
The downside is that fewer than 40 per cent of those enthusiastic users receive AI training from their employer, according to the report.
More than an interesting discrepancy, this points to a serious failure in corporate strategy.
As well as exposing companies to risk, the laissez-faire approach also suggests they are in denial about how close AI is to delivering era-defining, whole-enterprise productivity boosts.
AI training starts with strategy
Goh Ser Yoong, a member of the ISACA Emerging Trends Working Group and head of compliance at Advance.AI, believes the need for training is obvious. But, he notes, it must be preceded by a risk strategy — at a minimum.
“Most organisations do not have a mature AI governance structure to begin with, let alone are rolling out proper training for their employees,” Goh says.
Goh’s comment goes to the heart of the challenge for AI trainers. Not only must they develop training for a technology that changes every few months, but they need to help companies establish the rules of AI usage in the workplace.
That means clear objectives and benchmarks for training.
RMIT Online’s director of customer success and growth, Anshu Arora, agrees that setting standards is key.
“Those benchmarks could be: how long is it taking workers to complete a particular task? Or how quickly can they update or take a new product to market? Or what’s the level of error?” Arora says.
Efficient AI use begins with prompting
Dr Sean Gallagher, founder of AI training specialist company Humanova, says there are three dimensions to a good prompt.
“The first is clarity. All organisations have a lot of assumed knowledge and use jargon. AI can’t read minds, so it’s really about breaking down what you want into very simple terms.
“Second, provide examples. They help the AI to better understand what you want.
“The third is simplicity. Don’t ask the AI to do too much in one particular task.”
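Gallagher’s three dimensions can be sketched as a simple prompt template. This is an illustrative example only; the task, wording and `build_prompt` helper are hypothetical, not drawn from Gallagher’s training material:

```python
# Illustrative sketch of a prompt built around three dimensions:
# clarity, examples and simplicity. All names and wording here are
# hypothetical, used only to show the structure.

def build_prompt(task: str, examples: list[str]) -> str:
    """Compose a single-task prompt in plain terms, with worked examples."""
    lines = [
        # Clarity: state the request in simple language, free of internal jargon.
        f"Task: {task}",
        # Examples: show the model the kind of output you expect.
        "Examples of the desired output:",
    ]
    lines += [f"- {example}" for example in examples]
    # Simplicity: one task per prompt; anything extra goes in a follow-up.
    lines.append("Do only this task. If anything is unclear, ask first.")
    return "\n".join(lines)

prompt = build_prompt(
    task="Summarise this customer email in one sentence for the support team.",
    examples=[
        "Customer reports a billing error on invoice #1042 and asks for a refund.",
    ],
)
print(prompt)
```

Keeping each dimension as a distinct section of the prompt makes it easy to see, and fix, which one is missing when the output disappoints.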
Then comes real-world application.
Create KPIs and encourage “use cases”
Experimentation and psychological safety are preconditions for the successful application of AI in the workplace, Gallagher says. Employers should provide an atmosphere in which employees can make errors and discuss their mistakes.
He suggests that after initial training, employers could ask each individual to report back to their team on a situation where they used AI.
“It would be a structured reporting method, describing what it was that they used it in, what they learned from it and what the outcomes were, in terms of efficiency and quality.
“At the team level, each team reports back to the whole organisation once a quarter on a process they have improved using AI. What did they learn along the way? What were some of the mistakes they made, but also, what was the outcome?”
Demonstrate productivity gains
The productivity findings from internal use case exercises like this are also important for demonstrating to leaders that the investment in licences and training is worth it.
Globally, there is limited data about AI and productivity based on actual use cases. However, the Organisation for Economic Co-operation and Development (OECD) estimates that productivity gains from current generative technology could be 20 to 40 per cent.
A small Australian survey published in February 2025, based on Australian Government Treasury operations, found that the cost of a Microsoft Copilot licence was easily recouped by moving workers onto higher-value tasks.
However, Lucy Debono, Microsoft ANZ’s modern work business group leader, notes that a critical component of AI investment is in the time and resources committed to upskill employees.
“This is a culture and change management question as much as a technology one — Gen AI cannot sit in IT alone,” she says.
Build “AI intuition” with frequent use
Gallagher stresses that any training must be guided by the fact that AI is a nonlinear technology.
“The future state of AI is likely to look nothing like the current state, and so workers need to build intuition through frequent use of AI in more and more work tasks,” Gallagher explains.
All AI presents unique upskilling challenges, says Mahesh Krishnan, chief technology officer for Fujitsu Australia and New Zealand.
“It’s not just about learning new software, it’s about understanding complex concepts, critical thinking and ethical implications. The ethical considerations surrounding AI deployment require a different kind of training, focusing on responsible AI practices and mitigating bias.”
Prepare for the next wave: Agentic AI
The newest form of AI, currently in early release, is agentic AI. Unlike today’s generative tools, agentic AI can act autonomously to achieve specific goals, initiating and overseeing complex multi-step workflows on its own.
Incorporating agentic AI into operations will go beyond training, says Gareth Flynn, founder and CEO of workforce strategy consultancy TQSolutions.
“It will require change management,” he says. “The way that workflows and processes operate will change.”
Flynn believes agentic AI will be driven by the needs of the functional owner of a division — for example, a CFO or a chief HR officer — rather than by those further down the chain of command. The C-suite’s high-level, structural understanding of corporate operations will be critical to taking advantage of agentic AI.
But, as Gallagher says, only if they have developed “AI intuition” from training in earlier versions.