By Lachlan Colquhoun
To “google” something has become an everyday verb for millions of people, but few stop to think about what the algorithm is doing in the seconds between pressing the search button and the results appearing.
If, for example, the search is for “university professor”, most of the results are for white males.
Instead of being a neutral “black box”, the algorithms that drive the most popular search engine are functioning with biases introduced long ago, which have now become entrenched.
Google is not the only example. Another technology giant, Amazon, had to scrap its AI recruitment tool when it became clear that it was penalising the resumes of women, while Microsoft had to intervene in 2016 when its AI-powered chatbot, Tay, began tweeting racist slurs.
As the science advances, harnessing AI in applications such as chatbots is becoming the easy part. Human governance to ensure that AI is used for good, and not evil, is still a work in progress.
AI bias
With AI increasingly ubiquitous and part of modern existence, concern about its biases has inspired a push towards a governance framework that is ethical, unbiased and transparent.
Because AI is created by humans, it takes in many of our less honourable biases, and when it is left unsupervised and learns through repetition, those biases can snowball, says Kriti Sharma, vice-president of bots and AI at British multinational accounting software company Sage.
“The good news is that you can fix these biases,” says Sharma, who addressed the World Congress of Accountants (WCOA) in Sydney last year on AI for the finance sector.
Along with data privacy, protection and use, regulating for ethical or responsible AI poses a key moral dilemma that needs to be navigated before the full force of the fourth industrial revolution – the rise of cyber-physical solutions through technological breakthroughs – can be embraced and unleashed with any real confidence.
“You make sure you are giving the algorithm the right goal to achieve and the values and the right purpose to follow, and that it is about not just achieving the task but doing it in the right way,” Sharma says.
“For a profession like accountancy, that is very important because it relies so much on trust.”
UK-based Sharma has emerged as one of the leading activists in the push towards what she calls “responsible” AI, with her work recognised with ambassadorships to the United Nations and the Obama Foundation.
“When I started campaigning in this area it was news to many people,” she says. “They were saying, ‘are you fighting for robot rights?’, but now there is more awareness.”
Government's role in AI
In creating an ethical framework, Sharma says it is important that it not be left for individual developers to opt in to responsible governance as a project-by-project choice. Rather, developers need to understand ethical standards as being integral to their work, and this requires accountability “right at the top of organisations” and from government.
Some governments are beginning to move. In the UK, which is considered a leader in AI, the government has established a Centre for Data Ethics and Innovation. Sharma is on the board, while a recent appointment was Luciano Floridi, a professor of philosophy and ethics of information at Oxford University. The body’s core task is to create an ethical AI framework for the UK.
It can build on the work of a House of Lords committee, which articulated five principles of responsible AI – borrowing from the laws of robotics set out in 1942 by renowned science fiction writer Isaac Asimov.
The fifth principle articulated by the committee, that AI should not have the power to “destroy” human beings, was reflected in a petition organised by University of New South Wales Scientia Professor Toby Walsh for a ban on lethal autonomous weapons powered by AI.
Similarly, the Monetary Authority of Singapore has released a set of principles to promote fairness, ethics, accountability and transparency (FEAT) in the use of AI and data analytics in the finance industry.
Australia is also moving in this direction, with the federal government giving CSIRO’s Data61 a mandate to develop a framework, working alongside the Australian National University’s Autonomy, Agency and Assurance Innovation Institute (3A Institute). The 3A Institute, headed by former Intel executive Professor Genevieve Bell, will work with Data61 to create a curriculum for certified AI practitioners by 2022.
Another organisation working to develop AI in Australia is the Australian Centre for Robotic Vision, a collaborative centre incorporating several major universities. Its COO, Dr Sue Keay, believes two critical areas can redress bias in AI and deliver better ethical outcomes: the quality and volume of the data, and the diversity of the workforce that creates AI algorithms.
Data, diversity and AI
Keay says AI can only operate on the datasets it is given, because algorithms are only as good as the data they are trained on. The humans creating an algorithm must ensure the data has integrity and is large enough to be truly representative.
In one example, researchers at the Massachusetts Institute of Technology examined an income prediction system and found it was twice as likely to classify women as low income. If the dataset were increased tenfold, misclassifications would fall by 40 per cent.
“Another common example is that if you had a robot that determined when people in jail are likely to be paroled, [it] is likely to be quite discriminatory towards people of colour,” Keay says.
“That’s just an unfortunate part of our historical data [in] that perhaps in the past there has been a bias in that direction.
“It is also about understanding that we do have biases and then working out how you can get the program to do the job you want without it being influenced by those biases.”
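Keay’s point about data can be made concrete with a short check of per-group error rates. The sketch below is purely illustrative and is not the methodology of the MIT study: it assumes a hypothetical income dataset (income_data.csv) with a gender column and a high_income label, trains a simple scikit-learn classifier, and compares how often each group is misclassified.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical dataset: numeric features plus a 'gender' column and a 'high_income' label.
df = pd.read_csv("income_data.csv")
X = df.drop(columns=["high_income", "gender"])   # keep the sensitive attribute out of the features
y = df["high_income"]                            # 1 = high income, 0 = low income
groups = df["gender"]

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, groups, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
predicted = model.predict(X_test)

# Compare misclassification rates per group; a wide gap is the kind of skew Keay describes.
for group in g_test.unique():
    mask = (g_test == group).to_numpy()
    error_rate = (predicted[mask] != y_test.to_numpy()[mask]).mean()
    print(f"{group}: misclassification rate = {error_rate:.1%}")
```

A wide gap between the groups is the signal to act on: the model has absorbed a skew in its training data, and the remedies start with more representative data and a closer look at the features the model relies on.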
Keay cites an example from the analog world to illustrate her other point: that diverse teams working on AI can help eliminate bias and promote better design.
“For a long time, the safety features of cars have been tested with crash test dummies, and after several decades, researchers noticed that women and children were disproportionately injured in car accidents,” she says.
“Then it was understood that the crash test dummies were based on the physique of the average American male, and this was the group who were conducting the tests. I think that in terms of AI, one of the things which goes beyond the prescriptive frameworks would be to have a more diverse workforce developing these technologies.”
Transparency vs commerciality
In a commercial environment, Keay does see potential for tension between the need for transparency and the desire of technology companies to protect their intellectual property.
“As AI booms, there is an inclination to protect the IP of those companies, who will be disinclined to tell everyone what is in their algorithms,” she says.
“However, algorithms do need to be testable and transparent to the people who are affected by their decisions, and this is where governments become important because they can put these frameworks in place.”
Sharma says it is important to stand against the idea of a black box that functions without question.
“In the technology industry you have some highly intelligent people who say their algorithm is so clever they can’t explain it themselves,” she says.
“I challenge that. I understand that some algorithms are difficult to explain, but you still absolutely need to have that auditability and traceability as part of the fundamentals.”
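Sharma’s call for auditability and traceability can be illustrated with a minimal decision log. The sketch below is an assumption, not any vendor’s actual system: every automated decision is appended to a log file with its inputs, output, model version and timestamp, so that it can be traced and questioned later.

```python
import json
import uuid
from datetime import datetime, timezone

def log_decision(model_version, inputs, output, log_path="decision_audit.jsonl"):
    """Append one auditable record per automated decision."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Hypothetical usage: record a credit decision so an auditor can later see
# exactly what the model was given and what it returned.
log_decision("credit-model-2.1", {"income": 54000, "years_employed": 3}, "approved")
```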
As an AI advocate, Sharma envisages a world where AI applications are so seamlessly integrated into daily life that people will not notice or question them. Achieving that, however, first requires a foundation of ethics and trust.
“People should be able to investigate and interrogate AI and that framework should be made available to everyone, but we shouldn’t have to fear it,” she maintains.
Sharma is working on what she calls a “bias detection” test, to be applied to AI applications before they are launched as products.
“We already do cyber testing, so why not have a bias detection test before launching?”
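Sharma has not published what such a test would look like, but one simple form it could take is a pre-launch check, run alongside security tests, that fails if favourable outcomes differ too much between groups. Everything in the sketch below, the model outputs, the groups and the 20 per cent threshold, is hypothetical.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in favourable-outcome rate between any two groups."""
    rates = {}
    for group in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

def test_model_passes_bias_check():
    # In a real pipeline these would come from the candidate model and a held-out dataset.
    predictions = [1, 0, 1, 1, 1, 0, 1, 1]          # 1 = favourable outcome, e.g. loan approved
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap = demographic_parity_gap(predictions, groups)
    assert gap <= 0.2, f"bias check failed: outcome gap of {gap:.0%} between groups"
```

Run as part of a test suite (for example with pytest), a failing check would block the release in the same way a failed security test would.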
Like ethical practice in other areas of business and government, Sharma believes ethical AI will become a differentiator that will lead to better outcomes, including for corporate bottom lines.
“Ethical technology will not just be a nice thing to have – it will be a great business driver,” she says. “People always prefer to buy from businesses they can trust, and that will extend to AI too, as it becomes a part of our lives.”
5 principles for artificial intelligence
- AI should be developed for the common good and benefit of humanity.
- AI should operate on principles of intelligibility and fairness.
- AI should not be used to diminish the data rights or privacy of individuals, families or communities.
- All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside AI.
- The autonomous power to hurt, destroy or deceive human beings should never be vested in AI.
Source: UK House of Lords Select Committee on Artificial Intelligence