It’s very clear why financial services firms are engaging with Artificial Intelligence (AI) – the potential annual value of AI and analytics for banks alone could reach $1 trillion, across 25 use cases, according to a recent study by McKinsey.
The UK's Alan Turing Institute, in a report commissioned by the UK Financial Conduct Authority (FCA), defines AI as "the science of making computers do things that require intelligence when done by humans." Machine learning, a form of AI, "refers to the development of AI systems that are able to perform tasks as a result of a 'learning' process that relies on data," which differs from more traditional approaches in which explicit rules and logic are written directly into code. The Turing Institute report goes into useful detail about the different kinds of AI and ML that can be applied within firms.
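To make that distinction concrete, here is a minimal sketch in Python (the credit rule, the scikit-learn model and the toy figures are all hypothetical illustrations, not examples drawn from the Turing Institute report) contrasting a decision rule written explicitly into code with a similar decision learned from data.

```python
# Illustrative only: contrasting an explicitly coded rule with a model that
# "learns" a similar decision from data. All figures are made up.
from sklearn.linear_model import LogisticRegression

# Traditional approach: the decision logic is written directly into code.
def rule_based_credit_decision(income: float, existing_debt: float) -> bool:
    """Approve only if debt is below 40% of income -- a hand-written rule."""
    return existing_debt < 0.4 * income

# Machine learning approach: the decision boundary is inferred from labelled data.
X_train = [[40_000, 5_000], [30_000, 20_000], [80_000, 10_000], [25_000, 18_000]]
y_train = [1, 0, 1, 0]  # 1 = repaid in the past, 0 = defaulted

model = LogisticRegression().fit(X_train, y_train)

applicant_income, applicant_debt = 50_000, 12_000
print(rule_based_credit_decision(applicant_income, applicant_debt))  # rule written by a human
print(model.predict([[applicant_income, applicant_debt]])[0])        # rule learned from data
```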
Within financial services firms, there are a wide variety of use cases for AI and ML, including:
- Retail customer uses – ML algorithms can assess credit quality, price insurance and assess appropriate products and services, while AI chatbots can answer frequently asked questions.
- Middle office processes – AI and ML models can be used within a variety of tasks, including trade processing, model risk management, stress testing and market impact analysis.
- Trading and investment – AI and ML are already being used for algorithmic trading, as well as in back-testing, best execution and portfolio management.
- Compliance requirements – AI and ML are at the heart of many regulatory technology (RegTech) solutions, including fraud detection, trade surveillance, AML compliance and regulatory change monitoring.
This is only a short sample of use cases, and others are emerging all the time as AI technology develops. There is little doubt that AI and ML will transform the financial services industry over the next decade, and so most Boards – as well as senior managers – are already engaging strategically with AI and ML within the business.
KEY QUESTIONS FOR BOARD DIRECTORS
However, boards of directors at financial services firms need to recognise the risks of AI and ML as well as the rewards. Below are five key questions that directors should be asking within their firms, with some insights into why these questions are important. AI and ML are a deep and complex topic, and a brief blog such as this cannot cover it in full. Instead, this blog is intended to provoke reflection – at the end are some links for further reading, which may prove useful for a deeper dive.
1. How will increased regulatory interest in AI and ML impact our projects? How should we be thinking about regulatory change?
Most regulators and legislators around the globe are preparing to implement guidance and rules over the next few years, with some acting much more quickly than others. Taken as a whole, the picture is one of considerable regulatory change in this space well into the future.
For example, in April 2021, the Basel Committee on Banking Supervision said it would be focusing on "the use of artificial intelligence / machine learning in banking and supervision" as well as on "data and technology governance by banks" in its annual work plan. In August 2021, the Financial Stability Institute – a sister organisation of the Basel Committee – published an FSI Insights paper, "Humans keeping AI in check – emerging regulatory expectations in the financial sector." AI principles that specifically target the financial sector have been issued in the European Union, Germany, Hong Kong, the Netherlands, Singapore and the United States. The EU has proposed a new set of rules that would apply across all industries, while the UK has issued AI guidance applicable to all industries and has three government entities focusing on AI issues. The UK Financial Conduct Authority has not yet issued its own AI document, but continues to work with industry in a public-private partnership. Earlier in 2021, the US financial regulators consulted on the AI rules they are developing.
One of the most advanced countries is Singapore, which developed its own AI Governance Framework and assessment methodology in 2019 and has updated it since. The framework can be applied to most industries – it is not specific to financial services – and the World Economic Forum continues to seek feedback on it. According to the FSI analysis of existing frameworks, they have five principles in common: reliability/soundness, accountability, transparency, fairness, and ethics. These are almost certain to be core topics as AI and ML rules develop across jurisdictions.
In late October 2021, the Bank of England introduced a new topic in Staff Working Paper No. 947, "Software validation and artificial intelligence in finance – a primer." The paper introduces the idea of "software risk", which it says is composed of model risk, technology risk and data risk. It goes on to review existing approaches to managing software risk and the evolutionary steps needed to manage the risks created by AI appropriately. The paper's approach will be of interest to regulators and firms alike and could contribute significantly to the conversation.
Financial services firms can expect their own regulators to produce a new set of AI rules, or revise existing ones, over the next few years as thinking around AI governance for financial services firms continues to evolve. Firms should, of course, be aware of what is happening in their own jurisdictions, but it also makes sense to monitor developments in other geographies and at the international level. In shaping the firm's policies and processes around AI, it can make sense to implement best practices in anticipation of regulatory requirements. It also makes sense to examine the firm's overall technology and data strategies, including data governance, to ensure that these are, and will continue to be, supportive of AI business and compliance requirements.
2. What is our organisation's ethical approach to AI? Have we captured it in a document?
From an environmental, social and governance (ESG) point of view, some companies are choosing to create AI governance policies that can be shared with the public. Some business schools are also working on AI financial services frameworks and a few of these have been published. A sampling of some of these documents includes:
- Microsoft’s Responsible AI document and its Guidelines for Human and AI Interaction
- A framework for financial services published in the Berkeley Technology Law Journal
- A financial services framework from Wharton Business School
More documents of this kind will be published over the next few years and, as thinking around ethical approaches to AI and ML develops, existing ones will evolve too.
Boards of directors have a responsibility both for setting the "tone from the top" for AI ethics within their firms and for communicating that tone to stakeholders, including shareholders, employees and regulators. Having an ethics document as part of an AI framework can be very useful – it can help the entire organisation sense-check its AI and ML activities, potentially reducing a wide range of risks, including the three key ones: reputational damage, financial risk and compliance risk. For example, an ethics document can help those working in AI to consider the impact of a project from multiple points of view. If joined up with the firm's risk appetite, it can also help that team to better contextualise the impact of risks associated with an AI or ML project and to implement controls that mitigate them or enhance resilience.
3. What data governance policies and processes do we have around the data used in our AI and ML projects?
Today, boards of directors need to pay particular attention to data governance. It is an area of increased regulatory interest, both in general and in its specific application to AI and ML – as the new Bank of England paper demonstrates. However, the importance of data governance stretches well beyond compliance – it will become ever more challenging for firms to thrive in an increasingly technology-based world without firm control of their data assets. In addition, a robust data governance programme is one of the best risk management investments a firm can make. Poor quality data in an AI or ML application can result in financial losses, reputational damage and regulatory sanctions.
Of course, it is important that the data used in AI and ML is timely, accurate and complete. It is also important for the firm to be sure that the data represents what the firm thinks it represents – and that the data set accurately reflects what happens in the real world. Firms also need to ensure that the data is not biased. Some of the more well-known examples involve ML programmes that "learned" bias against particular genders or ethnicities from their training data, but bias can creep into a data set in other ways too, and this is something firms need to guard against.
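As a purely illustrative sketch (the column names, the toy data and the 20-percentage-point threshold below are all invented for this example), even a simple comparison of outcome rates across groups in a training data set can surface one warning sign of potential bias:

```python
# Illustrative only: a crude check for outcome imbalance across groups in a
# training data set. Column names, data and threshold are hypothetical.
import pandas as pd

def outcome_rates_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes per group -- large gaps warrant investigation."""
    return df.groupby(group_col)[outcome_col].mean()

loans = pd.DataFrame({
    "region":   ["north", "north", "north", "south", "south", "south"],
    "approved": [1, 1, 1, 0, 0, 1],
})

rates = outcome_rates_by_group(loans, "region", "approved")
print(rates)

# Flag for human review if the gap between groups exceeds, say, 20 percentage points.
if rates.max() - rates.min() > 0.20:
    print("Warning: large outcome gap between groups -- review the data for bias.")
```

A gap of this kind is not proof of bias, but it is the sort of signal that should prompt a closer look at how the data was collected and labelled.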
Financial firms also need to ensure they are legally permitted to use the data in an AI or ML application – that such an application is not prohibited by the terms of use in data contracts or by privacy rules such as the EU or UK General Data Protection Regulation (GDPR). Ideally, firms should also keep track of where and how data is used elsewhere across the organisation, who is responsible for the data within the organisation, and where the data is stored.
Financial services boards should consider ensuring that there is data and data governance expertise within the Board's membership. The Board should, of course, review and approve key data governance policies and process frameworks. It should also receive regular reports about changes to data and data governance programmes – in particular, those that relate to AI – and consider which risk metrics around data and data governance would be helpful to have in its regular reporting packages.
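As one hypothetical illustration of what such risk metrics could look like (the data set, column names and figures below are invented), a data governance team could track simple completeness and freshness measures for the data feeding each AI application and include them in the Board's reporting pack:

```python
# Illustrative only: basic completeness and freshness metrics that could feed
# regular data governance reporting. Data set and figures are invented.
from datetime import datetime, timezone

import pandas as pd

def data_quality_summary(df: pd.DataFrame, timestamp_col: str) -> dict:
    """Return simple completeness and freshness figures for a data set."""
    completeness = 1.0 - df.isna().mean().mean()      # share of non-missing cells
    latest = pd.to_datetime(df[timestamp_col]).max()
    age_days = (datetime.now(timezone.utc) - latest).days
    return {"completeness": round(completeness, 3), "days_since_last_update": age_days}

trades = pd.DataFrame({
    "trade_id":  [1, 2, 3, 4],
    "notional":  [1_000_000, None, 250_000, 500_000],
    "timestamp": ["2021-10-01T09:00:00+00:00"] * 4,
})

print(data_quality_summary(trades, "timestamp"))
```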
4. Who are the third parties we have engaged to build or operate our AI and how are we managing the third-party risks associated with those relationships?
With the explosion of AI and ML-led financial technology companies (FinTechs) and regulatory technology organisations (RegTechs), many financial services firms are engaging with these types of partners to gain a competitive edge. However, these relationships can also be a source of significant third-party risk. Boards – and senior managers – need to be certain that the firm is managing these risks appropriately and that people with the right set of skills are involved in doing so.
Key questions Boards should be asking about AI and ML projects that involve third parties include:
- Is the firm comfortable with the third party’s ethical approach to AI and ML?
- Does the third party have a robust approach to data governance, particularly in relation to any of the firm’s own data or contracted data that the third party might engage with?
- Does the third party meet the compliance requirements that apply to the AI and ML project, including data privacy?
- How robust is the contract with the third party? For example, what happens to the AI or ML programme at the end of the contract? What happens to the data?
- What kind of processes are in place if a risk event occurs? How will the organisation communicate with stakeholders? What operational resilience processes are in place?
The Board might want to consider what regular reporting it should receive about third-party relationships that involve AI. For example, how should high-risk third-party relationships that involve AI be assessed and reviewed? What third-party risk management metrics should be captured, beyond those tracked for other relationships, to help keep the Board informed?
5. Do we understand how our AI programmes, and the models within them, work?
As the technological sophistication of AI and ML grows, so too does the challenge for Boards – and senior managers – to understand how these models operate and to provide governance over them. According to the new Bank of England paper, model risk includes the risk that a model, as designed, does not produce the intended outcome. Many regulators expect firms to provide the same kind of governance over AI and ML models as they do over the other types of models the firm uses. However, this can be challenging – according to the paper, "one of the key differences between non-AI and AI software is that in the latter the three categories of risk discussed above (technology risk, model risk and data risk) are co-mingled and thus impossible to separate and treat independently. As such, a holistic approach to AI software validation will be fundamental to effectively manage the risks from its usage." So, although Boards and their firms may have approaches to AI model validation in place today – as many regulators already expect some form of model validation in certain use cases – it is likely that approaches to model validation will evolve over the next few years. As firms change their approach, Boards and senior managers need to ensure this is communicated to regulators, customers and investors.
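As a hedged illustration of one building block of model validation (the data, model and threshold below are entirely hypothetical and not drawn from the Bank of England paper or any regulatory standard), evaluating a model on data it has never seen, against a pre-agreed performance threshold, gives governance functions something concrete to review and report on:

```python
# Illustrative only: a minimal out-of-sample validation check of the kind a model
# governance process might record. Data, model and threshold are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Validate on data the model has never seen, against a pre-agreed threshold.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
MINIMUM_ACCEPTABLE_AUC = 0.75  # hypothetical risk-appetite figure
print(f"Out-of-sample AUC: {auc:.3f}")
print("PASS" if auc >= MINIMUM_ACCEPTABLE_AUC else "FAIL: escalate to model risk committee")
```

In practice such results would be recorded in a model inventory and repeated on a schedule; the point is simply that validation evidence can be made concrete and reviewable by non-technical stakeholders.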
Today, firms that do AI and ML model governance successfully often have governance baked into the development process from the very beginning. For example, legal, compliance and risk management executives are engaged at the ideation stage, to help ensure that potential risks are understood and mitigated early. Another advantage of having development teams engage with non-AI experts is that it forces those teams to articulate project concepts clearly enough for stakeholders to understand. Ideally, the entire journey from inception to adoption into production for AI and ML models should follow a process similar to the one the firm uses for new product design, development and implementation.
So, Boards should ensure that AI and ML teams frequently check in with key non-technology stakeholders across the development and testing process. Boards should also consider reviewing their model risk policies and processes and ensure that they take into consideration AI and ML compliance requirements, as well as the firm’s own ethical considerations. Over the next few years, it’s likely that model governance will be an area of significant regulatory change and so firms should begin to think strategically about that too.
LOOKING TO THE FUTURE
The topic of AI and ML governance is a large and complex one and the key questions listed above provide only a short exploration of some of the issues. There are a number of other questions that financial services boards should be asking around their organisation’s AI and ML initiatives, too. For example:
- How robust is our cybersecurity programme around our AI initiatives?
- How resilient are the processes within the firm that rely on AI? What is in place to ensure operational resilience today and in the future?
- What is our communications strategy with external stakeholders around AI and ML?
- How do we communicate our AI and ML ethical values across the organisation? What kind of training should we have in place?
- What kind of reporting should the Board be seeing about AI and ML initiatives? Should this reporting be consolidated on a regular basis into an overall AI and ML report, joining up insights from across the organisation (i.e. the business, risk, compliance, legal, audit, etc.)?
- What will AI and ML development look like tomorrow? How will it affect our firm's strategy and risk profile in five or ten years' time?
- How do we translate our AI and ML strategy into reality without disrupting employee relations, given the potential for roles currently performed by people to be taken over by AI and ML applications?
For Board directors – and senior managers – the questions they ask about the firm’s approach to AI and ML are just as important as the answers they receive.
In summary, AI and ML applications have the potential to transform financial services firms, improving the products and services delivered to clients, enhancing the capital markets and delivering a safer and sounder industry. However, this is a fast-moving area and boards of directors need to be actively ensuring they are putting in place the right governance approach, ethical framework and culture to manage risks, ensure compliance, foster creativity and enable the firm to thrive in the years ahead.
FURTHER READING
INTERNATIONAL:
- OECD Principles on AI
- FSI Insights on policy implementation No 35 Humans keeping AI in check – emerging regulatory expectations in the financial sector
- Basel Committee – High-level summary: BCBS SIG industry workshop on the governance and oversight of artificial intelligence and machine learning in financial services
- Singapore’s AI Governance Framework and Assessment methodology
EUROPEAN UNION:
UNITED KINGDOM:
- Bank of England – Staff Working Paper No. 947 Software validation and artificial intelligence in finance – a primer
- UK Financial Conduct Authority webpage on AI
- UK Centre for Data Ethics and Innovation
- UK AI Council
- UK Office for AI
UNITED STATES:
- U.S. House Mulls Ethical AI Frameworks for Financial Sector
- US NIST AI Risk Management Framework
- Harvard Business Review – New AI Regulations are Coming, Is your Organization Ready?
- US Financial Regulators Consultation on AI
INDUSTRY:
- AI-bank of the future: Can banks meet the AI challenge?
- AI in Financial Services, by The Alan Turing Institute (commissioned by the FCA)
- Linklaters report: Artificial Intelligence in Financial Services
- Nine Types of Data Bias in Machine Learning
- Microsoft policy on responsible AI
- Microsoft guidelines for Human and AI interaction
- Berkeley Technology Law Journal – Innovating with Confidence: Embedding AI Governance and Fairness in a Financial Services Risk Management Framework
- Wharton – How can Financial Institutions Prepare for AI Risks?