Artificial intelligence: panacea or paradox?
Artificial Intelligence (AI) is all-pervasive, although progress was intermittent until very recently
Auditors’ panacea – the practical benefits
In describing the audit of the future, we talk about how AI will help us to analyse a company’s data sets (referred to as Audit Data Analytics (ADA)): helping us to ‘spot the needle in the haystack’. AI helps us to analyse more information electronically than ever before, including ‘unstructured data’ (e.g., documents and records such as company contracts).
Therefore, if AI helps the profession to optimise ADA, the merits of ADA are enhanced accordingly. The Financial Reporting Council (FRC) reached its own conclusions on this subject, stating that: ‘The ADA we have seen in practice offers the potential to improve audit quality in a number of ways, including: deepening the auditor’s understanding of the entity; facilitating the focus of audit testing on the areas of highest risk through stratification of large populations; [and] aiding the exercise of professional scepticism….’ In a survey of 800 executives conducted by the World Economic Forum, 75% believed that by 2025, 30% of corporate audits would be performed by AI.
At EY, we are combining a company’s internal datasets with external data about the same company (e.g., customer sentiment and information about local operational issues, via surveys and buying habits respectively). This changes the audit approach: instead of relying on management estimates, judgements and data, we can independently rebuild a picture of a company’s expected performance and value from the ground up, to support our discussions and audit procedures. This is especially important where the auditors are seeking to understand the rationale for complex estimates and judgements such as asset impairment, environmental provisions or exposure to warranties and contracts.
The benefits speak for themselves. AI adds a further degree of audit independence by relying less on management data and reducing the potential risk of human bias. It also increases the scope or coverage of the audit and accordingly reduces some of the audit risks. Getting all of this right means enhancing audit quality and further augmenting the trust and reliance of those for whom the audit is conducted.
Auditors’ paradox – ethical issues
However – and this is the paradox – it is difficult to provide assurance on AI. It might also be challenging for a regulator to oversee and inspect the application of AI in the audit methodology (e.g., the underlying algorithms used to sift through and identify specific data anomalies). For example, where companies are using AI internally, an audit trail is not created in the same way that it is for simpler calculations and systems with an input, a rule-based process and an output. Likewise, where an auditor uses AI to seek out and analyse financial data, the method of approach (and underlying decisions) may not be documented and available for inspection as they once were.
AI is unstoppable. It will be a feature of our customer experience and in almost all of our technology and communications. It will help us to redefine how finance systems operate, and how we audit the books. But there are risks ahead and we need to work out what we are prepared to trust and what we are not.
For those uses of AI that we do not trust (yet), we need more transparency and auditability so that we gain a greater level of confidence that it works. Critically, we need ethical codes, or at least codes and standards that can be amended to accommodate the application of AI (specifically its ‘learning’ and ‘training’ process). Boards need to understand where AI is being adopted in their organisations and challenge their executive team accordingly. In the case of auditing, the audit committee will need to be satisfied that the application of AI in the audit process enhances rather than undermines the quality of the outcome, and likewise the regulator needs to be confident with the technology.
Above all else, we need to remind ourselves that the ‘intelligence’ we refer to here is ‘artificial’. We also need to ask ourselves whether the technology will work as originally intended, and be satisfied that any unintended ethical consequences can be avoided, as far as ‘humanly’ possible.
To close, we encourage the CBI’s members to consider the following three questions, as a means to gauge your state of readiness for this relatively new application of AI: 1) What steps has the board taken to consider the use of internal and external company data in the external audit? 2) What assurances will the board seek from its auditor, if AI becomes an integral aspect of the ADA methodology? And 3) How will the board engage with investors and other stakeholders if/when ADA and AI are used to inform the auditor’s opinion?