At a glance
- Fourth Industrial Revolution (4IR) technology encompasses artificial intelligence, robotics, the Internet of Things, quantum computing and genetic engineering.
- 4IR technology has far-reaching potential in combating crime and tapping new sources of economic value, but its rapid pace of evolution sparks concerns around ethics and fit-for-purpose regulation.
- These concerns don’t apply only to tech companies, but also to organisations across multiple sectors, and explainability, transparency and security are crucial to ensuring the technology aligns with a code of ethics.
Algorithms, such as those that determine credit decisions, are often described as “black boxes” because of the mystery surrounding what really goes on inside them.
Even the most ethics-focused and tech-driven organisations face something of a minefield when navigating the growing power and influence of Fourth Industrial Revolution (4IR) technologies as they continue to evolve and make further inroads into the mainstream.
The World Economic Forum describes 4IR as “a new chapter in human development, enabled by extraordinary technology advances”. It encompasses artificial intelligence (AI), robotics, the Internet of Things (IoT), genetic engineering and quantum computing, and blends the physical, digital and biological worlds to help us do things that were previously impossible – or that took much more time, effort and resources.
Law enforcers are using 4IR technology to help combat crime. Scientists have harnessed it to map the human genome. When managed well, the technology can unlock new sources of economic value by verifying assumptions, predicting customer needs and spotting business trends.
“The value that AI can bring to real-world problems is huge,” says Professor Matt Kuperholz, chief data scientist and partner at PwC. “The fact that it’s not simply AI, but a whole bunch of exponential technologies that are reinforcing each other, is an even more important concept to grasp.
“Part of the solution for monitoring the performance of AI is explainability and transparency, then robustness and security, in the same way you’d address cyber security,” adds Kuperholz. “But there’s a lot of catching up to do.”
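Explainability techniques of the kind Kuperholz describes give boards and auditors a way to probe what is driving a model’s outputs. As a minimal sketch only – the model, feature names and data below are all hypothetical stand-ins for a credit-decision system – permutation feature importance measures how much each input drives predictions:

```python
# A minimal sketch of one explainability technique: permutation feature
# importance. The model, feature names and data are hypothetical stand-ins
# for a credit-decision system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for applicant data: income, debt ratio, years employed.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffling one feature at a time shows how much each drives decisions --
# a first step toward opening the "black box".
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt_ratio", "years_employed"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```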
An ethical minefield
4IR technologies pose ethical challenges because they are advancing faster than standards of governance and regulation. However, Kuperholz also notes a disparity between the aspirations of some organisations that use the tools and the wider goals of the society in which they operate.
Consider the example of an AI-led company, such as YouTube. In broad terms, its business model is built on maximising a user’s time on site, which determines the number of ads they see and, in turn, advertising revenue. YouTube has huge amounts of data at its disposal – it knows who its users are, which videos they watch and, because it is linked to Google, their search history, too.
“Social media platforms can train an AI algorithm to search for the best videos to put in front of me to maximise my time on site,” says Kuperholz. “These algorithms were put in place to maximise profits for shareholders, but do they accomplish the right thing by serving me the most interesting, educational content that’s going to serve my purpose as a human? The answer is probably ‘no’.
“The same applies to the algorithms that serve up content to my daughter in continuous feeds,” adds Kuperholz. “How biased are they? Why is she not seeing people with a disability in these feeds? Why is she not seeing people with ‘imperfect’ body types in these feeds? Why is she seeing fewer dark-skinned people, given that she’s a light-skinned girl?
“The algorithms are aiming to maximise her time on site, while compromising a fair and unbiased representation of what the world actually looks like.”
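To make that objective concrete, consider the toy sketch below. The video titles and watch-time predictions are hypothetical; the point is that the ranking criterion contains a single engagement term and nothing that rewards accuracy, diversity or fair representation:

```python
# A toy sketch of the engagement objective described above. The videos and
# predicted watch times are hypothetical; a real platform would use a
# trained engagement model in place of these hard-coded scores.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # hypothetical engagement-model output

candidates = [
    Video("Outrage compilation", 9.2),
    Video("Documentary excerpt", 4.1),
    Video("Disability awareness short", 2.8),
]

# The feed is ordered by a single criterion -- expected time on site.
# Nothing in this objective rewards accuracy, diversity or fairness.
feed = sorted(candidates, key=lambda v: v.predicted_watch_minutes, reverse=True)
for video in feed:
    print(f"{video.title}: {video.predicted_watch_minutes} min")
```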
It is not just tech companies that face ethical questions in their use of 4IR technologies.
Research from PwC shows that, of businesses currently using or trialling AI solutions, 25 per cent have not considered AI as part of their corporate strategy, while just 38 per cent believe it is aligned with their organisational values. Only a quarter make ethical implications a top consideration for AI solutions before investing in them. However, 84 per cent of CEOs agree that AI-based decisions need to be explainable in order to be trusted.
Lack of understanding
Professor Mahendhiran Sanggaran Nair FCPA, pro vice-chancellor of research engagement and impact at Sunway University Malaysia, fellow of the Academy of Sciences Malaysia and member of CPA Australia’s Digital Transformation Committee, says technology provides huge opportunities for the accounting profession, while also posing significant ethical questions.
“Tools like behavioural analytics and sentiment analysis are becoming core to many accounting decisions as they help firms to understand different value systems and markets, so clients can make more informed decisions,” he says. “There are many positives in terms of economic productivity and improving market reach.”
Nair adds that deep-learning tools can also gather and analyse information quickly, including unstructured data from sources like social networks or emails.
“People are knowingly or unknowingly making this data available,” says Nair. “When it’s used unknowingly, ethical issues can arise. The challenge, however, is that so many of our institutions, our governance systems and our regulatory frameworks haven’t kept up with the pace of technological developments. Many of the institutions are lagging behind.”
Jeannie Paterson, professor of law at the University of Melbourne and co-director of the university’s Centre for AI and Digital Ethics, says there is a general lack of understanding about the nature of 4IR technologies among the organisations that use them.
“That creates a flow-on effect in understanding what the risks are,” she says. “If you’re using natural language processing, for instance, the risk of bias and discrimination may arise in the way the language is interpreted and used. There’s quite a different risk if you’re using machine learning in advertising, where the bias and discrimination might be found in who gets to see ads and who doesn’t.”
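One simple way to surface the interpretation risk Paterson describes is a counterfactual test: change only a demographic cue in the input and check whether the output moves. In the sketch below, score_sentiment is a hypothetical stand-in for whatever natural language model is under test:

```python
# A sketch of a counterfactual bias test for a language system: vary only a
# name and check whether the score changes. score_sentiment is a hypothetical
# stand-in; in practice it would call the NLP model under test.
def score_sentiment(text: str) -> float:
    """Toy lexicon scorer used purely for illustration."""
    positive = {"reliable", "excellent"}
    words = text.lower().replace(",", "").split()
    return sum(word in positive for word in words) / len(words)

template = "{name} is a reliable, excellent candidate"
scores = {name: score_sentiment(template.format(name=name))
          for name in ["James", "Aisha", "Wei"]}

# Identical sentences apart from the name should score identically; any
# spread indicates the model is treating demographic cues as signal.
print(scores, "spread:", max(scores.values()) - min(scores.values()))
```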
This lack of understanding of 4IR technology is also revealed in the findings of the 2021 Fourth Industrial Revolution Benchmark survey by KPMG Digital Delta, in collaboration with Faethm. The survey found that most Australian business leaders lack deep knowledge of, or experience with, 4IR technologies, and less than half feel their organisation is strongly prepared for technological change.
Piers Hogarth-Scott, partner at KPMG Digital Delta and national leader of IoT, says adoption of the technology has only accelerated since the COVID-19 pandemic.
“Results of our 2021 study are being analysed, and I expect we’ll see a greater uptake in technology like AI,” he says.
“However, organisations need to develop strong oversight and governance around its use.”
Taking ownership
Anju De Alwis CPA, managing director of education institute Ultimate Access in Dubai, and a member of CPA Australia’s Ethics and Professional Standards Centre of Excellence, says business leaders must develop an understanding of the technology they employ, including the risks it presents.
“This responsibility doesn’t sit solely in the IT domain,” she says. “The ethical principles for accountants are based on five key areas – integrity, objectivity, professional competence and due care, confidentiality, and professional behaviour. These principles should be the foundation for how we approach technology like AI, because they encompass transparency, accountability and fairness.”
Nair says leadership must come from the top. “We’re starting to see the emergence of roles like chief ethics officer and chief innovation officer,” he says. “They must work together with the CEO, CTO and CFO to put in place a foolproof governance system to mitigate risks. I think if you don’t do that, it will undermine the firm’s credibility, accountability and responsibility.
“This confluence of technology requires a confluence of leadership styles to address a fast-changing global economy powered by science, technology and innovation,” adds Nair.
“We’re going to see a lot more structure within organisations to ensure a cross-pollination of ideas to address ethical risk, and some of the most sophisticated firms are moving in this direction before implementing new technologies.”
Paterson stresses that AI ethics policies should not “be a gloss on top of what the technologies do”.
“Ethical questions are fundamental to how companies use technology and whether it does for them what they actually want it to do. Another risk is error. If you don’t know what it is you are using and what data your technology is relying on, you could receive erroneous results.”
A power for good
Kuperholz says standards of governance and regulation will eventually catch up to 4IR technologies.
“Remember that around the turn of the 20th century there was no regulation governing the publishing of financial statements by listed companies,” he says. “Society drove the need for this level of regulation, which is now the backbone for audit companies.
“We find ourselves in a situation that drags regulation into being,” adds Kuperholz.
“AI-first companies are not motivated to put the handbrakes on themselves, but, at some point, it will seem like the Wild West if companies are just using AI to maximise return to shareholder value, rather than using it in the best interests of society as a whole.”
In the meantime, however, organisations must take responsibility for understanding – and explaining – their use of 4IR technologies.
“We have taken ourselves to this incredible precipice of the most exciting technology that’s ever existed,” says Kuperholz. “In theory, it could solve problems like cancer, pollution, world hunger. On the other hand, however, it could lead to a completely dystopian future that benefits the few and disadvantages the many.
“With this great power comes great responsibility, and you can’t get something for nothing. You have to acknowledge that the use of these technologies opens up a different risk profile, and you are accountable at the board level for mitigating these risks sufficiently.
“The buck has got to stop with you for your use of AI and advanced technologies,” adds Kuperholz. “Is it fair? Is it unbiased? Is it appropriately governed? Companies that put in place and publish strong, ethical frameworks are building the guide rails by which they will get more advantage from this technology, rather than less.”
Making tech fair
Data-driven technologies and artificial intelligence (AI) are powering our world and removing the friction from everyday life.
How can you ensure that the algorithms that help power your business are developed in a manner that is fair? To what extent can you explain them to your clients?
The World Economic Forum’s Global Future Council on Artificial Intelligence for Humanity proposes 10 practical interventions for companies to employ to ensure AI fairness.
- Assign responsibility for AI education
- Define fairness for your organisation
- Ensure AI fairness along the supply chain
- Create an HR AI fairness people plan
- Educate staff and stakeholders through training and a "learn by doing" approach
- Test AI fairness before any tech launches (see the sketch after this list)
- Communicate your approach to AI fairness
- Add AI fairness as a board meeting standard item
- Document everything
- Make sure the education sticks
Source: The World Economic Forum’s Global Future Council on Artificial Intelligence for Humanity
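The sixth intervention – testing AI fairness before launch – can start with something as simple as comparing outcome rates across groups. The sketch below uses demographic parity, just one of many possible fairness definitions; the groups, approval rates and tolerance are all hypothetical:

```python
# A minimal sketch of a pre-launch fairness check using demographic parity.
# The groups, approval rates and tolerance here are hypothetical; in practice
# the decisions would come from the model under review.
import numpy as np

rng = np.random.default_rng(1)

# Simulated model decisions (1 = approved) for applicants in two groups.
group = rng.choice(["A", "B"], size=2000)
approved = np.where(group == "A",
                    rng.random(2000) < 0.62,   # group A approval rate
                    rng.random(2000) < 0.48)   # group B approval rate

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
gap = abs(rate_a - rate_b)

print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, gap: {gap:.2f}")
if gap > 0.05:  # hypothetical tolerance set by the organisation
    print("Fairness gap exceeds tolerance: review before launch")
```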