Ethical AI Advisory and Gradient Institute have launched the inaugural Australian Responsible AI Index. IAG, which sponsors the index, said the findings reveal that fewer than one in 10 Australia-based organisations have a “mature approach” to deploying responsible and ethical artificial intelligence (AI), signalling an urgent need for Australian organisations to increase investment in responsible AI strategies.
According to IAG, responsible AI is designed and developed with a focus on the ethical, safe, transparent and accountable use of AI technology, in line with fair human, societal and environmental values. It is critical to ensuring the ethical and appropriate application of AI, the world’s fastest-growing technology sector, currently valued at US$327.5 billion ($438 billion).
The Responsible AI Index 2021 studied 416 organisations operating in Australia and found that only eight per cent are in the ‘Maturing stage of Responsible AI’, while 38 per cent are ‘Developing’, 34 per cent are ‘Initiating’, and 20 per cent are ‘Planning’. The mean score was 62 out of 100, placing the overall result in the Initiating category.
To help organisations accelerate responsible AI adoption, a Responsible AI Self-Assessment Tool has been created to measure an organisation’s maturity when developing and deploying the technology.
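To make the maturity bands concrete, the sketch below shows how a self-assessment score out of 100 might be mapped to the four stages named in the index. Only the band names and the mean score of 62 come from the article; the numeric thresholds are hypothetical, chosen purely for illustration.

```python
# Hypothetical illustration: mapping a Responsible AI self-assessment
# score (0-100) to the four maturity bands named in the index.
# The band names come from the Responsible AI Index 2021; the numeric
# thresholds below are invented and do not reflect the real tool.

HYPOTHETICAL_BANDS = [
    (85, "Maturing"),
    (70, "Developing"),
    (50, "Initiating"),
    (0,  "Planning"),
]

def maturity_band(score: int) -> str:
    """Return the first band whose lower bound the score meets."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    for lower, name in HYPOTHETICAL_BANDS:
        if score >= lower:
            return name

# Under these invented thresholds, the index's reported mean score
# of 62 lands in the Initiating band, matching the overall result.
print(maturity_band(62))
```

This is only a sketch of the banding idea; the real self-assessment tool presumably weighs multiple dimensions of practice rather than a single score.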
Dr Catriona Wallace, CEO of Ethical AI Advisory, said the implications of organisations not developing AI responsibly are that unintended harms are likely to occur to people, society and the environment, potentially at scale.
“As only three in 10 organisations stated they had a high level of capability to deploy AI responsibly, there is significant work for Australian business leaders to do,” Dr Wallace added.
Bill Simpson-Young, CEO of Gradient Institute, said the index found just over half the organisations have an AI strategy in place, highlighting the opportunity for business leaders to act on critical AI initiatives. These include reviewing algorithms and underlying databases, monitoring outcomes for customers, sourcing legal advice around potential areas of liability, and reviewing global best practice.
“Putting training in place to upskill data scientists and engineers, as well as board and executive teams, can also help close the gap by enabling a far greater level of understanding and education in Responsible AI,” Simpson-Young said.
Julie Batch, Group Executive Direct Insurance Australia at IAG, said AI plays a central role in enhancing customer experience and improving business processes. To ensure the right customer outcome, the company embeds “considered thinking” about fairness and equality before implementing an AI solution.
“At IAG we think of fair and ethical AI as a societal challenge and we see the Responsible AI tool as a great way for organisations to understand where they sit on the index and what they need to do to help ensure they’re applying AI in an ethical, responsible way,” Batch said.
IAG said it uses its established AI ethics framework and the Australian Government’s voluntary AI ethics principles to identify potential issues or risks prior to launch.
The company employs artificial intelligence to predict whether a motor vehicle is a total loss after a car accident, reducing customer claims processing times from more than three weeks to just a few days. IAG is now exploring how responsible and ethical AI, combined with the advanced analytical techniques that already help claims consultants settle genuine customer claims sooner, can be used to detect motor claim fraud.