At a glance
Artificial intelligence (AI) has been on a journey from academic curiosity in the 1950s to a transformative force in daily life. Along the way, it has spawned a growing vocabulary to describe its capabilities. From “vibe coding” to “shadow AI”, the list of terms keeps expanding as new applications are developed.
“Most Australian enterprise is sitting squarely at the early stages of this conversation,” says Dr Michael Kollo, director of AI for Adapt and CEO of Evolve.AI.
“We are all familiar with the term AI, but now we are working out the many different ways to use it.”
Here are 11 AI buzzwords and their meanings, explained by experts.
1. Vibe coding
This term refers to using AI chatbots or language models to help write code, rather than manually coding it, explains Ben Scown, digital delivery director at Integral.
“The idea is that you can just ‘vibe’ with the chatbot and describe what you want to build, then have the AI write the code for you,” he says.
“It’s very quick and easy for anybody to come up with an application and build it. But there are some concerns about whether it is secure, so there’s still a lot of caution around using it.”
2. Multimodal
Until recently, many large language models could only work with one type of input and output, for example, chatting via text or generating images but not video, Scown says.
“Multimodal describes models that can take input from different types of data, such as voice, text and camera at the same time and output multiple modes — as in generate text as well as video and audio,” he says.
“An example of this is Gemini Live being able to hear your questions about what it can see on the camera. This is important as you can start to do more complex tasks and build smarter [AI] agents.”
3. Zero-copy architecture
Previously, there was a focus on centralising information in large structures or data lakes, Kollo says.
“AI has changed that approach. There is now a shift to gather and analyse data where it already lives, across multiple sources, without making duplicates,” he explains.
“It is called zero-copy architecture, and it means businesses may no longer need to consolidate everything in one place — which could save enormous resources and unlock greater flexibility.”
4. Diffusion models
This is a class of generative models that learn by gradually adding noise to training data, “diffusing” it until the original is unrecognisable, then learning to reverse the process so they can create new, unique data from noise.
“We’ve seen this in the image and video generation space for quite some time, and researchers are now exploring diffusion models for text generation,” Scown says.
“This is important, as it’s not like a traditional language model that is often seen as a next word predictor. Some recent examples are proving to be much faster at creating answers and content.”
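For readers who want a feel for the idea, the toy sketch below runs only the forward, noising half of a diffusion process on some made-up one-dimensional data; a real diffusion model would also train a neural network to reverse each step, and that training is omitted here.

```python
import numpy as np

# Toy illustration of the "diffusing" direction: clean data is gradually
# blended with random noise. A real diffusion model trains a network to
# reverse these steps, turning pure noise back into new data.
rng = np.random.default_rng(0)

clean = np.concatenate([rng.normal(-2.0, 0.1, 500),   # made-up 1-D "dataset"
                        rng.normal(2.0, 0.1, 500)])

num_steps = 10
for step in range(1, num_steps + 1):
    keep = 1.0 - step / num_steps                      # how much signal survives
    noisy = keep * clean + (1.0 - keep) * rng.normal(size=clean.shape)
    if step in (1, 5, 10):
        print(f"step {step:2d}: std of the blended data = {noisy.std():.2f}")
# By the final step the structure of the original data has dissolved into
# noise; generation works by learning to run this process in reverse.
```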
5. Reinforcement learning
This term describes an AI system’s ability to learn by trial and error, guided by rewards and penalties rather than step-by-step instructions from a human user.
“The newer large language models are using reinforcement learning to learn and provide more helpful, accurate and safe results, rapidly improving the overall quality of the output,” Scown says.
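As an illustration only, here is a minimal sketch of that trial-and-error loop on a made-up three-action problem; the actions, rewards and numbers are invented for the example and are not how any production model is trained.

```python
import random

# A tiny "bandit" problem: the agent repeatedly picks one of three actions,
# observes a reward, and gradually learns which action pays off best.
random.seed(0)

true_rewards = [0.2, 0.5, 0.8]       # hidden payoff probability of each action
estimates = [0.0, 0.0, 0.0]          # the agent's learned value of each action
counts = [0, 0, 0]
epsilon = 0.1                        # how often to explore a random action

for step in range(5000):
    if random.random() < epsilon:
        action = random.randrange(3)               # explore: try something new
    else:
        action = estimates.index(max(estimates))   # exploit: use what has worked
    reward = 1.0 if random.random() < true_rewards[action] else 0.0
    counts[action] += 1
    # Nudge the estimate toward the observed reward (a running average).
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned values:", [round(v, 2) for v in estimates])
# The agent should end up valuing action 2 most highly, the genuinely best option.
```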
6. Mixture of experts
A term that has become more visible since DeepSeek’s use of it, a mixture of experts (MoE), also known as a committee machine, is a scalable architecture in which multiple small, specialised sub-models are trained and each task is routed to the most relevant one, rather than a single model trying to do everything itself, says Scown.
“It’s called a mixture of experts because just as you would direct certain tasks to the experts in that area in any other setting, the model sends the task to the relevant small model expert,” he adds.
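A simplified sketch of the routing idea is below; the “experts” are placeholder functions and the keyword-based router is invented for illustration, whereas a real MoE model learns its routing inside the network.

```python
# Placeholder "experts": in a real MoE model these would be learned sub-networks.
def maths_expert(prompt: str) -> str:
    return "maths expert handles: " + prompt

def coding_expert(prompt: str) -> str:
    return "coding expert handles: " + prompt

def general_expert(prompt: str) -> str:
    return "general expert handles: " + prompt

def router(prompt: str):
    """Pick an expert for the request (real MoE models learn this routing)."""
    text = prompt.lower()
    if any(word in text for word in ("sum", "percentage", "equation")):
        return maths_expert
    if any(word in text for word in ("python", "bug", "function")):
        return coding_expert
    return general_expert

for prompt in ("Fix this Python function", "What is 15% of 80?", "Plan my week"):
    expert = router(prompt)
    print(expert(prompt))
```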
7. Neuro-symbolic AI
Neuro-symbolic AI combines deep learning, which is excellent at recognising patterns in data, with symbolic reasoning, logic-based AI that applies rule-based knowledge such as tax laws, legal regulations or formal logic, says Tracy Sheen, AI strategist and author of AI & U.
“A neuro-symbolic system might use deep learning to extract and interpret financial data from documents and then apply rule-based logic to check tax compliance.”
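The sketch below mirrors Sheen’s example in miniature; the extractor is a stand-in for a trained deep-learning model and the compliance rule is invented for illustration, not a real tax regulation.

```python
# Neuro-symbolic pattern: a learned component extracts structured facts from
# messy input, then explicit, auditable rules are applied to those facts.

def extract_figures(document: str) -> dict:
    """Stand-in for a neural model that reads a document and returns fields."""
    # In practice this would be a trained model; here we fake its output.
    return {"income": 120_000, "deductions": 95_000}

def check_compliance(figures: dict) -> list[str]:
    """Symbolic part: hand-written rules applied to the extracted facts."""
    issues = []
    if figures["deductions"] > 0.5 * figures["income"]:   # made-up rule
        issues.append("Deductions exceed 50% of income: flag for review.")
    return issues

figures = extract_figures("...scanned tax return text...")
issues = check_compliance(figures)
print(issues if issues else ["No issues found."])
```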
8. Ethical AI
Ethical AI refers to the practice of designing and using AI in a way that is fair, transparent and accountable, says Scown.
“One of the issues with AI is that it can introduce bias,” he explains.
“If the bulk of the data on the internet used to train the model is in English, for example, there’ll be a bias towards the responses being more English-specific.”
9. Responsible AI
Responsible AI is about how you use it, says Sheen. That includes quality control measures, such as questioning the outputs, and risk mitigation techniques, such as not providing personal information when entering data.
“It’s risky, at the moment, to assume that AI is giving the correct answer. That’s why there’s a need for people to use their expertise and analyse the output.”
10. Explainable AI
Explainable AI (XAI) refers to the set of methods designed to enable humans to understand, interpret and trust the processes and outputs generated by machine learning algorithms, deep learning and neural networks.
Sometimes described as a “glass box” approach, XAI contrasts with the “black box” nature of many AI systems by making decision-making processes visible and comprehensible.
This involves implementing specific techniques to ensure each decision made during the machine learning process is transparent, traceable and explainable.
11. Shadow AI
Shadow AI refers to the unsanctioned, unmonitored use of artificial intelligence tools within an organisation, says Sheen.
“It occurs particularly in smaller organisations that may not have a specific policy in place regarding AI use,” she explains.
“It raises compliance and governance issues, and it is hard to monitor. But if you have a framework or a policy outlining which tools are approved, it will stop people from using unauthorised tools on their phone to complete work-related tasks.”