Finance departments regularly handle significant amounts of sensitive data, from customer invoices to employee bank details, and it is imperative that this information is handled with care. If this data gets into the wrong hands, organisations risk not just their reputation but regulatory fines too, with GDPR enforcers continually cracking down on how personal data is used.
The proliferation of generative AI and ChatGPT has thrown into question how AI uses sensitive data and what organisations need to be aware of before employees start using AI tools. Some organisations, such as Samsung, have restricted the use of ChatGPT because they don’t want to risk company data being used to train the model and so becoming accessible to others.
AI is becoming essential to corporate decision-making, and the finance department is no different. But as the technology continues to evolve, it is vital that finance leaders guide teams on the proper use of AI, to safely take advantage of its benefits.
AI already in finance
AI is a powerful tool that can crunch large amounts of data and reduce manual toil. It is an attractive technology for finance departments to accelerate analysis, automate internal processes and help to forecast growth.
AI and automation are already widely adopted by finance teams. For example, machine learning (ML) algorithms are being used to better understand customer payment behaviours. These algorithms generate accurate, real-time insights that better equip finance teams when speaking to customers about outstanding invoices, and help predict cash flow fluctuations.
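To make this concrete, here is a minimal sketch of the kind of model such a workflow might rest on: a regressor that predicts how many days a customer will take to pay an invoice. The features, figures and data below are hypothetical illustrations, not a real ledger; a production system would train on the organisation’s own invoice history.

```python
# A minimal, illustrative sketch of predicting invoice payment behaviour.
# All features and data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500

# Hypothetical invoice features: amount, the customer's historical
# average days-to-pay, and how many prior payments were late.
X = np.column_stack([
    rng.uniform(100, 50_000, n),   # invoice amount
    rng.uniform(0, 60, n),         # historical average days-to-pay
    rng.integers(0, 10, n),        # count of prior late payments
])
# Synthetic target: days until payment, loosely driven by past behaviour.
y = 0.8 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

# Score a new invoice: the predicted days-to-pay can feed a cash flow
# forecast or a prioritised list of customers to contact.
new_invoice = np.array([[12_000, 35.0, 4]])
print(f"Predicted days to payment: {model.predict(new_invoice)[0]:.1f}")
```

The same prediction can be refreshed as new payments land, which is what makes the insight “real-time” rather than a quarterly snapshot.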
ChatGPT and other Large Language Models (LLMs) have the potential to further benefit finance departments. These forms of AI will be able to instantly make connections in data that humans may miss, and analyse them to make informed and meaningful recommendations. Humans will still be needed to check what is produced, but much of the manual work will be removed. As a result, finance departments will have more time to spend on complex, profit-driving tasks.
The need for AI regulation
ChatGPT has sparked a global conversation about the need for AI regulations. There are very legitimate worries about how AI can be used to cause havoc through hacking and AI weaponry. The less malign risks are a serious concern too: that company data gets pooled into a public pot, and that AI can confidently produce misinformation without knowing it.
The EU has been at the forefront of AI regulation, with the EU AI Act currently being debated in the European Parliament. The aim of the Act is to ensure better conditions for the development and use of AI by imposing a framework that assesses the risk that applications of AI pose to users. For finance departments, the Act would mean tighter controls on documentation, transparency and data security for suppliers, users, distributors and other third parties.
“But where is the UK in these regulations?” you may ask. Despite being set to host the first-ever global summit on AI later this year, and despite the Foreign Secretary speaking at the UN Security Council on AI, the UK has been slow to formalise any regulations. A white paper on the UK’s approach to AI regulation was published in March, setting out guidelines for “responsible use” and outlining five principles it wants companies to follow. However, unlike the EU AI Act, it does not give organisations practical advice on how they can and should be using AI. Many UK finance departments may be unclear about how their existing use of AI will be scrutinised in the future.
Regulations for a regulated industry
Finance is one of the most highly regulated industries, so new laws protecting data will not necessarily come as a shock or worry. AI regulations will likely provide a framework for practices that already exist, as many financial process automation technologies are already based on AI, and have been for many years.
The use of generative AI and models such as ChatGPT, however, does present new risks to finance departments that must be monitored. CFOs need to ensure that any AI-enabled system guarantees compliance with the commitments made to customers, not only from a legal and security point of view, but also in relation to the company’s values. If organisations allow generative AI models to be used, then a clear policy must be put in place to ensure algorithms only pull from compartmentalised, secure and controlled data sets. That way, answers are reliable rather than hallucinated, and there is always an audit trail.
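What might such a policy look like in code? Below is a minimal sketch of a policy wrapper along the lines the paragraph describes: queries are answered only against an approved, compartmentalised data set, access is role-checked, and every question and answer is written to an append-only audit log. All names here (the data sets, the ask_llm stub) are hypothetical illustrations, not a real API.

```python
# An illustrative policy wrapper: role-gated access to compartmentalised
# data sets, with an append-only audit trail of every model query.
import json
from datetime import datetime, timezone

# Hypothetical compartmentalised data sets, keyed by permitted roles.
APPROVED_DATASETS = {
    "invoices_q3": {"roles": {"finance"},
                    "records": ["INV-001: £4,200 outstanding, due 2023-09-30"]},
    "payroll":     {"roles": {"hr"}, "records": ["..."]},
}

def ask_llm(question: str, context: list[str]) -> str:
    """Stub for the model call; a real system would pass `context` to a
    privately hosted model so nothing leaves the controlled data set."""
    return f"(answer grounded only in {len(context)} approved records)"

def audited_query(user: str, role: str, dataset: str, question: str) -> str:
    ds = APPROVED_DATASETS.get(dataset)
    if ds is None or role not in ds["roles"]:
        raise PermissionError(f"{user} ({role}) may not query {dataset}")
    answer = ask_llm(question, ds["records"])
    # Append-only audit trail: who asked what, against which data, and when.
    with open("audit.log", "a") as log:
        log.write(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user, "dataset": dataset,
            "question": question, "answer": answer,
        }) + "\n")
    return answer

print(audited_query("alice", "finance", "invoices_q3",
                    "Which invoices are overdue?"))
```

Because the model only ever sees records the policy allows, its answers stay grounded in controlled data, and the log gives compliance teams something concrete to review.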
Futuristic finance
While the UK waits for AI regulations to come into force, humans remain at the heart of critical decision-making. AI and automation are powerful tools to increase efficiency and sharpen analysis, but they must always adhere to a company’s predefined rules and frameworks. CFOs and finance leaders need to guide teams on how AI can be used safely. Those who wait for government regulation risk employees taking AI into their own hands, and won’t be prepared to take full advantage of the power of the technology.