Risk services

AI readiness for financial institutions

By:
Emma Gachon
By 2 August 2027, every financial institution in Europe will have to comply with one of the most significant and wide-reaching AI regulations to date, becoming accountable not only for what its AI does but also for how it was built. Moreover, most of the AI Act's requirements will already apply from 2 August 2026.

We covered the timeline and general information on the AI Act in one of our previous articles.

In this context, AI readiness will not only allow you to comply with regulation, but above all to differentiate yourself in a competitive market through faster, more efficient decision-making and a personalised customer experience. It also offers opportunities to reduce operational costs and improve risk management and compliance, key factors for profitability and customer loyalty.

AI readiness includes not only operational readiness (the effectiveness of the AI model itself), but also governance and regulatory readiness: compliance with the AI Act, the drafting of policies and procedures, and the definition of the company's risk appetite for AI. These points are discussed in more detail in the next sections.

What should you know about the AI Act?

The AI Act entered into force in August 2024, with rules relating to high-risk systems coming into play in August 2026. It classifies AI systems into four categories: unacceptable (posing a threat to people's safety, livelihoods and rights), high (significant implications requiring strong oversight), limited (lesser implications, but subject to transparency requirements), and minimal (negligible risk). Most of the AI Act's rules apply to high-risk AI systems, which include, for example, the use of AI in recruitment, or in credit-scoring algorithms that profile individuals and determine their access to financial products and services.

For multinationals headquartered in an EU country and operating outside Europe, the AI Act applies to subsidiaries both inside and outside the EU, as long as the AI system is placed on the EU market.

In the financial context, while the applications of AI may seem limited, they can easily turn into high-risk ones. For example, AI-driven algorithms used to assess creditworthiness or set insurance premiums can impact certain groups and are considered high-risk systems.

Main dimensions of AI readiness

Technological readiness

This covers the technological maturity of the organisation (data management, data mining, data analysis, and so forth), as well as support from the IT team to avoid disruptions. It also includes the AI knowledge of managers, staff and other stakeholders, and their trust in AI, as many employees are still reluctant to use it. Staff training is therefore an important factor.

Data readiness

Having access to abundant, relevant, high-quality data is essential to feed and support AI. Cleaning, labelling and structuring your data in advance is a practical way to prepare.
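To make this concrete, the checks below are a minimal, hypothetical sketch of the kind of data-quality review that can precede any AI initiative: counting missing values per field and detecting duplicate records. The field names ("customer_id", "amount", "label") are illustrative assumptions, not a prescribed schema.

```python
# Minimal data-quality sketch: completeness and duplicate checks on a dataset
# represented as a list of dicts. Field names are illustrative assumptions.

def data_quality_report(records, required_fields):
    """Return simple completeness and duplication metrics for a dataset."""
    missing = {f: sum(1 for r in records if r.get(f) in (None, ""))
               for f in required_fields}
    # Count exact duplicate rows (same fields, same values).
    seen, duplicates = set(), 0
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {
        "rows": len(records),
        "missing_per_field": missing,
        "duplicate_rows": duplicates,
    }

records = [
    {"customer_id": "C1", "amount": 120.0, "label": "ok"},
    {"customer_id": "C2", "amount": None,  "label": "ok"},   # missing amount
    {"customer_id": "C1", "amount": 120.0, "label": "ok"},   # duplicate row
]
report = data_quality_report(records, ["customer_id", "amount", "label"])
```

A report like this gives an early, measurable view of how much cleaning and labelling work is needed before the data can feed an AI model.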

Customer readiness

In most cases, your customers will not be aware that you are working with AI. You must nevertheless be prepared to deal with customers' privacy concerns and their acceptance of AI, from concerns about personalisation to the need for an opt-out, as AI can "unmask" information (even anonymised data) and "remember" it. New customer plans can be developed accordingly. Adherence to stringent data privacy regulations such as the GDPR is a cornerstone of these efforts, ensuring responsible stewardship of customer information. For more information, you can refer to our article on the intersection between the AI Act and the GDPR [insert link to the Article “Article Interlink AI Act and GDPR Act Draft BMC”].

Compliance readiness

Conformity assessments are required by the AI Act before a high-risk AI system is placed on the market. They include demonstrating, for example, data quality, transparency, traceability and human oversight. Hence, conducting an impact assessment of the Act and mapping its requirements to existing policies, procedures and programmes (e.g. Model Risk Management, Data, Third Party Risk Management), where there may be dependencies or overlaps, is an excellent first step for your organisation.

Security readiness

The increase in digitalisation (and AI) further reinforces the interdependence between firms, digital service providers and software vendors, which can create a domino effect in the event of a cyberattack. While AI can be used for cyber defence (real-time network monitoring, predictive capabilities, automated incident response), it is also more vulnerable and enables new threats, such as more efficient, large-scale phishing attacks. As of year-end 2023, over a fifth of all phishing activity targeted the financial sector, making it the second most affected industry after social media (APWG, MIT Tech Review).

In many cases, and especially for larger institutional firms with large datasets, the required technological knowledge and AI system are obtained from a smaller company. In these instances, both providers and deployers of AI systems must follow the requirements of the AI Act. This means that even if you are only acquiring a tool for your organisation from a third party, you are still expected to conduct an assessment.

How AI can impact your organisation (in a good way)

AI has numerous applications, especially in regulatory technology, as it can provide a high level of accuracy and consistency, minimising the risk of human error, provided the risks mentioned earlier, such as bias, are tackled properly. The scalability of these systems also allows them to process huge amounts of data.

  • Fraud detection in real-time: by analysing patterns across millions of transactions, AI can spot suspicious activity in seconds. This is especially relevant for payment institutions, for example, where fraud is an increasing concern, but also for banks using electronic payments. It can be especially helpful for reducing the number of false positives, very prominent in transaction monitoring. 
  • Smarter credit scoring & loan approvals: AI can analyse cash flow trends, transaction history and alternative data to make faster decisions. It can extract data from various sources simultaneously (media, industry reports, conversations, market data, etc.), increasing the available information and leading to better decision-making. This is especially important for customer onboarding. 
  • Regulatory compliance: AI can analyse legal documents and transactions to detect compliance risks, helping your organisation to stay ahead of regulatory changes. 
  • Customer segmentation: AI-driven analytics allow you to identify customer types, tailor products for their needs, and target niche markets. For example, in wealth management, AI is unlocking specific advice and risk-assessment opportunities, enhancing revenues. 
  • Risk management: by analysing historical data and recognising patterns, AI can suggest measures against future risks.
  • Reporting: AI can automate the reporting process to regulatory bodies and ensure scenario analysis in the case of ORSA reporting for insurers, for example. 
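The fraud-detection idea in the first bullet can be sketched very simply: flag a transaction whose amount deviates strongly from the customer's historical spending pattern. The z-score rule, threshold and field values below are illustrative assumptions, not a production fraud model.

```python
# Hypothetical sketch of anomaly flagging on transaction amounts using a
# per-customer z-score. Threshold and sample values are illustrative.
from statistics import mean, stdev

def is_suspicious(history, amount, threshold=3.0):
    """Flag an amount that deviates strongly from the customer's history."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

history = [20.0, 25.0, 22.0, 30.0, 18.0]  # a customer's past amounts
```

A real system would combine many more signals (merchant, location, timing, device) and a trained model, but the principle of scoring deviations from an established pattern is the same, and it is exactly where false positives are tuned down.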

How to start

A good starting point is to implement AI in areas where it can be deployed quickly and with low risk, such as marketing or operational functions, in order to gain experience. This means beginning with basic technology and small changes, for example automating some marketing functions before moving on to operations, finance and HR.

If your organisation is unsure where to start, an AI/Data Readiness Assessment is the first step. It helps you pinpoint exactly where your data stands and what needs to be done before AI can work for you. It also allows you to identify potential gaps and to define your AI goals (fraud prevention, compliance automation, risk analysis, or others).
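One simple way to structure such an assessment is to score each readiness dimension discussed in this article and surface the ones below a target maturity level. The dimension names follow the article; the 0-5 scale, target level and sample scores are illustrative assumptions.

```python
# Hypothetical sketch: surfacing gaps across the readiness dimensions
# discussed in this article. Scale and scores are illustrative assumptions.

READINESS_DIMENSIONS = ("technological", "data", "customer",
                        "compliance", "security")

def readiness_gaps(scores, target=3):
    """Given self-assessed maturity scores (0-5 per dimension),
    return the dimensions falling below the target level."""
    return sorted(d for d in READINESS_DIMENSIONS
                  if scores.get(d, 0) < target)

scores = {"technological": 4, "data": 2, "customer": 3,
          "compliance": 1, "security": 3}
gaps = readiness_gaps(scores)
```

The output of such an exercise is a prioritised list of gaps, which maps directly onto where to invest first before deploying AI.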

Do not hesitate to bring an external perspective to your risk measures, for example by conducting an AI readiness audit. Outsourcing the audit team means you don't need a full-time AI audit unit; instead, you bring in experts when needed. If you're ready to figure out where you are in your AI journey and what's needed to move forward, contact our audit team.

Contact us