By Gary Anders
The Nebraska-based multibillionaire Warren Buffett is colloquially known as the “Oracle of Omaha” because of his uncanny knack for picking investment winners and spotting market trends.
So it was no surprise that Buffett grabbed headlines recently when he told shareholders of his conglomerate Berkshire Hathaway that he is worried about the potential threats posed by artificial intelligence (AI), likening it to the development of nuclear weapons.
“We let a genie out of the bottle when we developed nuclear weapons,” Buffett says.
“AI is somewhat similar. It’s part way out of the bottle and … we may wish we’d never seen that genie,” adds the multibillionaire, citing the sharp rise in investment scams that use generative AI to create “deep fakes” (digitally manipulated, synthetic media).
He referred to a recent example in which criminals had used his likeness in a video to try to lure investors into a scam.
AI risk versus reward
It is not just AI investment scams that are of growing concern across the globe. The technology may also pose a risk to political campaigns and elections.
Australian Electoral Commission (AEC) commissioner Tom Rogers told a Senate committee hearing in May 2024 that he expects to see AI-generated misinformation at the next federal election. This could include voice clones of Prime Minister Anthony Albanese and Opposition Leader Peter Dutton designed to influence voters, Rogers says.
AI technologies are becoming increasingly sophisticated. OpenAI’s GPT-4o model, which powers the popular ChatGPT chatbot, can hold interactive voice conversations with users in real time and exhibit human-like personalities and behaviours. It can also respond to a user’s body language and tone of voice.
Beyond the potential risks posed by AI, there are also clear economic benefits.
In a report published in mid-2023, management consulting firm McKinsey & Company estimated that AI could add trillions of dollars in value to the global economy annually.
After examining just 63 use cases in which AI could address specific business challenges around customer interactions, content creation, marketing and sales, and software coding, McKinsey calculated that AI could add between US$2.6 trillion (A$3.9 trillion) and US$4.4 trillion (A$6.6 trillion) annually to global GDP.
“Generative AI has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities,” McKinsey notes.
“Current generative AI and other technologies have the potential to automate work activities that absorb 60 to 70 per cent of employees’ time today.”
Separately, the International Monetary Fund has predicted AI will affect around 40 per cent of jobs worldwide, warning it is imperative for policymakers to upgrade regulatory frameworks and support labour reallocation while safeguarding those adversely affected.
Tackling AI challenges
Around the world, governments are grappling with the rapid proliferation of AI systems across society, recognising both the enormous economic opportunities AI presents and its known risks and unforeseen future consequences.
Governments appear to be moving reasonably swiftly to set standards for AI safety and security.
In October 2023, the Biden Administration issued an executive order that requires developers of AI systems to share their safety test results and other critical information with the US Government, and directs the development of standards, tools and tests to help ensure AI systems are safe, secure and trustworthy.
The executive order also seeks to protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
The world’s first AI Safety Summit followed in November 2023 at England’s Bletchley Park mansion, fittingly the principal centre for Allied code-breaking during World War II. There, the European Union (EU) and 28 countries, including Australia and the US, signed the Bletchley Declaration.
The parties to the declaration agreed to establish a shared understanding of the opportunities and risks posed by frontier AI, to build a scientific and evidence-based understanding of those risks, and to develop respective risk-based policies to ensure safety in light of them.
“This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research,” the declaration states.
In February 2024, the Australian Government announced the establishment of a new Artificial Intelligence Expert Group made up of 12 appointees with expertise in AI technology, law and ethics.
The government’s focus is targeted towards the use of AI in high-risk settings, where harms could be difficult to reverse, while ensuring that the vast majority of low-risk AI use continues to flourish largely unimpeded.
“Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled,” says Ed Husic, Minister for Industry and Science, in announcing the formation of the group.
“We want safe and responsible thinking baked in early as AI is designed, developed and deployed.”
In April this year, the EU’s 27 member states unanimously endorsed the EU AI Act, assigning applications of AI to three risk categories.
Under the Act, applications and systems that create an unacceptable risk are banned. High-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements, and applications not explicitly banned or listed as high-risk are largely left unregulated.
In May 2024, Japan’s government announced that more than 40 countries and regions had signed up to the Hiroshima AI Process, a G7-led initiative to develop a voluntary framework for safe, secure and trustworthy AI usage.
This was closely followed by the Seoul Declaration, signed by 27 countries and the EU, confirming a shared understanding of the opportunities and risks posed by AI.
Regulatory approaches across Asia
Professor Mimi Zou, head of UNSW Sydney’s School of Private & Commercial Law, who was an adviser to the UK Government on the Bletchley Declaration, says governments in Asia are taking varied approaches to AI regulation.
Mainland China, India, Indonesia, Japan, the Philippines, Singapore and South Korea are all signatories to the Bletchley Declaration, while India, Japan, Singapore, South Korea and Thailand are signatories to the Hiroshima AI Process.
Zou says Mainland China has already issued a number of interim regulations for technology companies around AI, covering areas such as generative AI and deep fakes (known as “deep synthesis” in Mainland China). These regulations require certain types of platforms to register their algorithms with the Cyberspace Administration, the country’s main internet and cyber regulator.
“From the Chinese Government’s point of view, there is a need to control AI risks, including cybersecurity and information content security risks, not least because these risks can undermine the regime,” Zou says.
Zou adds that Mainland China is taking a strong approach to AI regulation and may introduce an EU-style AI Act in the near future. By contrast, she says countries such as Japan, Singapore and South Korea have so far taken a “lighter touch approach” by promoting standards and guidelines or looking to existing laws, such as copyright and personal data laws or sectoral laws and regulations.
“Singapore is focusing on an innovation-driven governance approach,” she says.
“There is an emphasis on consensus building between the stakeholders, and the government is currently not pushing for general AI legislation.”
Malaysia is following a similar line as it formulates its own AI governance framework and code of ethics, broadly based on the OECD’s “Principles for trustworthy AI”, which are designed to provide policymakers with recommendations for effective AI policies.
Government versus self-regulation
Zou says current applications of AI carry immediate risks, and this is where government regulation needs to come into play.
“You can’t just leave it to the tech companies to regulate themselves. But there’s a delicate balance. For example, the EU AI Act may be seen by some AI startups operating or looking to operate in the EU as entailing heavy compliance costs.”
Mark Jansen, who leads PwC Singapore’s AI practice as well as the firm’s AI transformation, says the regulatory actions being taken by governments are well founded.
However, he adds that within organisations and companies, there also needs to be a high degree of self-regulation through internal governance including policies, procedures and training.
This is more important than ever as AI is set to transform the world of work. According to PwC’s 2024 AI Jobs Barometer, AI could unlock a nearly fivefold surge in workforce productivity in the sectors most exposed to it.
Organisations must first get a good handle on AI, and then build relevant skills among their employees, to set themselves up for success in the AI era.
“The regulations are there to prompt us, but actually, good practice still requires having good controls within an organisation and a standard of care,” Jansen says. “Nothing has really changed.
“For enterprises, it has to be thought through not just through the lens of adopting the technology, but by recognising that doing so impacts everyone in an enterprise, extending beyond the AI and tech teams. The trickle effect of how you communicate, how you govern, is quite large. I think this is a challenge for enterprises.”
Another challenge Jansen notes is the cross-border impacts of government regulations, with some regulators expressing concerns over how AI data generated in foreign-based data centres will be governed.
“Suddenly, this is a really big debate across many locations,” he says. “If my data is to be localised because my regulator says so, how do I effectively leverage critical technologies like GenAI for my company? Also, for cross-border operations, what controls and risks do I need to consider, especially when the data centre may not be located in the same country?
“I think, from a regulatory standpoint, you’ll see some of the regulators being very prescriptive in their rules, particularly around data, and later revising them to reflect real-life situations and developments,” Jansen adds.
“Ultimately, data needs to flow across borders, but what’s important is that it is done in a safe and secure manner.”