Jo Riches, journalist

The capacity of artificial intelligence (AI) to automate tasks and accelerate productivity has transformed how businesses interact with customers and clients.

Two years ago a report from McKinsey predicted that the productivity boost from AI could add trillions of dollars in value to the global economy. But as workplace adoption of AI continues to rise, trust and safety are proving to be major concerns. Eurobarometer, the European Commission’s regular survey of public opinion, found that 84% of respondents said AI requires careful management to protect privacy and ensure transparency.

Bringing value

Demand is growing for assurance models that will nurture consumer and employee trust in AI systems, identifying and mitigating risks while supporting management, board and investor decision-making. There are now significant value-creation opportunities for audit and assurance professionals, alongside other consultants, to guide organisations through ethical, technical and regulatory frameworks.

‘The way AI fails is different from the way humans fail’

A new policy paper from ACCA and EY addresses these issues. AI assessments: enhancing confidence in the use of AI explores how effective evaluation could support strong governance, compliance and performance. Key contributor Ansgar Koene, global AI ethics and regulatory leader at EY, stresses the importance of robust risk management frameworks to identify potential AI failures – and ensure operational continuity when they occur.

‘The challenge with AI is that it is an alien brain,’ he says. ‘We understand how humans behave. We have intuition about the areas where humans are more likely to make an error, and those are the areas we build safeguards around.

‘The way AI fails is different from the way humans fail. You change something in the input where you might say, from a human point of view, this doesn’t make a difference. But for the AI system it could potentially make a huge difference in the output that you get.’

‘There isn’t anyone in the ecosystem who doesn’t want this; everybody wants more trust’

Koene emphasises trust and safety as primary areas of focus for assessors. ‘They are key issues that need to be addressed if we’re going to use AI in areas where failure of the system – an outright failure or just not behaving in the way we would expect it to behave – could have real ramifications, both for the business and for people. It’s why governance, risk management frameworks and quality-control systems built around AI are important components to focus on in assessments.’

Taking a ‘built-in anticipation of failure’ approach is, Koene says, vital. ‘Good governance or risk management systems anticipate there will be cases where AI fails,’ he says. ‘Instead of a consequence for clients, or for operational continuity, the failure can be captured and corrected, or managed by the risk management system.’

Assessment focus

There are, however, significant differences between current AI assessment frameworks, whether mandatory, as in the European Union’s new AI Act, or voluntary. The UK’s British Standards Institution this month issued a standard aimed at bringing trust to the rapidly expanding AI audit sector via ‘principles-based, standards-led regulation’.

Narayanan Vaidyanathan, head of policy development at ACCA, and a key contributor to the policy paper, believes rapid scaling of systems plus evidence of ‘AI harms’ provide powerful incentives for organisations to take proactive measures.

‘There isn’t anyone in the ecosystem who doesn’t want this,’ he says. ‘Everybody wants more trust. The value-creation opportunity is solving a real problem.’

The paper identifies three main categories of AI assessment, which could be carried out separately or in combination.

  • Governance assessments: determine whether appropriate internal corporate governance policies, processes and personnel are in place to manage an AI system, including in connection with that system’s risks and with relevant standards or other policy requirements.
  • Conformity assessments: determine whether an organisation’s AI system complies with relevant laws, regulations, standards or other policy requirements.
  • Performance assessments: measure the quality of performance of an AI system’s core functions, such as accuracy, non-discrimination and reliability.

A single issue, Vaidyanathan clarifies, could be approached in different ways. ‘We refer to three categories, but they overlap,’ he explains. ‘If there are concerns about cyber threats for a business operating in Brussels, for example, the conformity assessment could be “Does it conform with what the AI Act requires in relation to cybersecurity?” The governance assessment might be “Is the company taking necessary risk-management precautions to prevent an attack? And if one happens, are there clear protocols?” This is where specificity becomes important, so clients know the criteria you’ve checked against and expectations can be managed.’

Transparency is vital. ‘Any AI system is connected to other systems,’ says Vaidyanathan. ‘It also draws on inputs from other areas of the business and on external data, including through APIs [application programming interfaces].

‘Ultimately, there are multiple fault lines and areas where things can break down. The devil is in the detail.’
