
The GDPR and the AI Act: the upcoming challenge for financial institutions

By: Emma Gachon
The GDPR and the AI Act
February 2025 marks the beginning of the regulatory shift towards the implementation of the EU Artificial Intelligence Act, the so-called AI Act. By August 2026, the Act will be fully applicable. The stakes are high for financial institutions: seven of the eight categories of high-risk AI systems identified by the AI Act involve the processing of personal data. In other words, in 87.5% of the cases involving a high-risk AI system, compliance with the GDPR will be necessary. Financial institutions, which routinely handle personal and sensitive client data, should take the first steps now.

Anticipating the intersections between the GDPR and the AI Act will allow companies to turn regulation into resilience and to be better prepared for the evolving rules surrounding AI.

Overview

While the GDPR applies only to personal data, the AI Act covers the development, provision, and use of AI systems, and therefore applies even if non-personal data is processed using AI. For more information, you can refer to our latest article on AI Readiness.

Unlike the GDPR, which focuses on personal data processing and applies to controllers and processors established in or targeting the EU, the AI Act is broader in scope: it regulates any AI system used in the EU or impacting individuals in the EU, even if no personal data is processed. Therefore, AI systems that do not process personal data, or that process only the personal data of non-EU persons, will still fall under the scope of the AI Act, but not the GDPR. For financial institutions, however, data-driven systems often handle personal data, and both regulations usually apply together.
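To make this scoping logic concrete, here is a minimal first-pass check in Python. The helper applicable_regimes and its boolean inputs are hypothetical simplifications; actual scoping (controller establishment, extraterritorial reach, exemptions) requires legal review.

    def applicable_regimes(is_ai_system: bool,
                           processes_personal_data: bool,
                           eu_data_subjects: bool,
                           used_or_impacting_eu: bool) -> set:
        """First-pass scoping check mirroring the broad distinctions above.
        Hypothetical helper: it only reflects the high-level rules, not the
        full legal tests of either regulation."""
        regimes = set()
        if is_ai_system and used_or_impacting_eu:
            regimes.add("AI Act")
        if processes_personal_data and eu_data_subjects:
            regimes.add("GDPR")
        return regimes

    # An AI credit-scoring model processing EU customers' data falls under both:
    print(sorted(applicable_regimes(True, True, True, True)))    # ['AI Act', 'GDPR']
    # An AI system used in the EU on non-EU persons' data: AI Act only.
    print(sorted(applicable_regimes(True, True, False, True)))   # ['AI Act']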

The AI Act outlines eight categories of high-risk AI systems, seven of which involve a high degree of (sensitive) personal data processing. This means that in almost 90% of cases involving a high-risk AI system, compliance with the GDPR is also likely necessary. Therefore, a coordinated approach to managing high-risk systems is crucial to ensure that the obligations of both the AI Act and the GDPR are met.

Organisations will need to map the two acts against each other, since they overlap in several areas, notably data retention and the right to be forgotten (1), biases and discrimination (2), and risk assessment (3).

Overlaps

1. Data retention and right to be forgotten

Many AI solutions store data for extended periods, eventually reusing it as part of their machine learning. Long-stored data increases the risk of unauthorised access (including through cyberattacks) or misuse. It also challenges customers' "right to be forgotten", or right to erasure, under the GDPR. Organisations should be particularly aware of the following:

  • Clear communication with users when their data is used for AI training and/or prediction. In that regard, individuals retain the right to restriction of processing (Article 18 GDPR) and the right to object (Article 21 GDPR).
  • Clear deletion/erasure of data (Article 17 GDPR) should always be guaranteed in those cases; a minimal sketch of such an erasure workflow follows below. Furthermore, the controller should have an explicit obligation to inform the data subject of the applicable periods for objection, restriction, deletion of data, etc.
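The sketch below illustrates an Article 17 erasure workflow in Python. The DataStore and erase_data_subject names are hypothetical, and the in-memory stores stand in for production databases; a real system would also need to propagate erasure to backups, downstream processors, and already-trained models.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DataStore:
        """Toy in-memory stores standing in for production databases."""
        customer_records: dict = field(default_factory=dict)   # subject_id -> profile
        training_sets: dict = field(default_factory=dict)      # set name -> subject_ids
        erasure_log: list = field(default_factory=list)        # audit trail of erasures

    def erase_data_subject(store: DataStore, subject_id: str) -> list:
        """Delete a subject's records, purge them from training sets,
        and record an auditable erasure entry (Article 17 GDPR)."""
        store.customer_records.pop(subject_id, None)

        # Data already absorbed into a model is not erased by deleting the
        # source rows, so flag every affected training set for retraining.
        affected = [name for name, ids in store.training_sets.items()
                    if subject_id in ids]
        for name in affected:
            store.training_sets[name] = [i for i in store.training_sets[name]
                                         if i != subject_id]

        store.erasure_log.append({
            "subject_id": subject_id,
            "erased_at": datetime.now(timezone.utc).isoformat(),
            "affected_training_sets": affected,
        })
        return affected

The design point worth noting is the audit trail: the controller can show both that the records were deleted and which training sets must be retrained or re-anonymised.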

2. Biases and discrimination

AI technologies are becoming increasingly advanced and can combine data points to uncover highly sensitive user information such as political views, sexual orientation, or health status. These hidden connections create risks that often go unnoticed, even for anonymised or pseudonymised data that AI can still re-identify. If the data controller is not aware of this, it can go against both the right to rectification (Article 16 GDPR), which allows users to rectify inaccurate or incomplete personal data, and the GDPR's provisions on special categories of personal data (revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership), whose processing is in principle prohibited (Article 9 GDPR).
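One concrete, AI-specific check that can feed into such an assessment is a disparate impact test on model decisions. A minimal sketch in Python, assuming hypothetical (group, approved) decision records; the 0.8 threshold mirrors the common "four-fifths rule" and is an illustrative convention, not an AI Act requirement.

    from collections import defaultdict

    def disparate_impact_ratio(decisions):
        """Ratio of the lowest to the highest approval rate across groups.
        Values well below 1.0 (commonly below 0.8) warrant investigation
        in the DPIA. 'decisions' is a list of (group, approved) tuples."""
        approvals = defaultdict(int)
        totals = defaultdict(int)
        for group, approved in decisions:
            totals[group] += 1
            approvals[group] += int(approved)
        rates = [approvals[g] / totals[g] for g in totals]
        return min(rates) / max(rates)

    # Hypothetical lending decisions: (applicant group, loan approved?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    print(f"Disparate impact ratio: {disparate_impact_ratio(sample):.2f}")  # 0.50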

A thorough risk assessment, including DPIAs that cover AI-specific risks, can help prevent these issues, as discussed in the next section.

3. Risk assessment: DPIAs and conformity assessments

Data protection impact assessments (DPIAs) are required under the GDPR (Article 35) for data processing likely to pose a high risk to the rights and freedoms of individuals, especially where sensitive personal data is concerned. The AI Act contains a similar concept: the conformity assessment (Article 43 AI Act). The latter focuses on high-risk AI systems (for example, AI-driven recruitment tools or the use of AI to profile customers and automate access to financial products and services), evaluating risks to fundamental rights, including privacy and non-discrimination.

It is also important to consider that information gathered for data-quality and cybersecurity compliance may eventually be used to inform the customer, which means it needs to be included in the DPIAs. Additionally, AI platforms can involve collaboration between multiple parties or rely on third-party tools and services, which increases the risk of unauthorised access to and/or misuse of data. Organisations must pay particular attention to sensitive and personal data that is transferred outside the EU or to jurisdictions with different privacy regulations.

The first steps for organisations can be integrating DPIAs into conformity assessments to address overlapping requirements, establishing periodic reviews, and fostering dialogue between the data protection officer, compliance teams, and AI development teams.
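As an illustration of what such integration could look like in practice, here is a minimal sketch of a combined assessment record in Python. All field names are hypothetical and would need to be aligned with the organisation's own DPIA and conformity assessment templates.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class CombinedAssessment:
        """Hypothetical record merging GDPR DPIA (Article 35) fields with
        AI Act conformity assessment (Article 43) fields for one system."""
        system_name: str
        purpose: str
        high_risk_category: str              # AI Act high-risk category, if any
        personal_data_categories: list
        lawful_basis: str                    # GDPR Article 6 basis
        special_category_data: bool          # triggers Article 9 scrutiny
        identified_risks: list
        mitigations: list
        next_review: date                    # periodic review under both regimes

    assessment = CombinedAssessment(
        system_name="credit-scoring-v2",
        purpose="Automated creditworthiness evaluation",
        high_risk_category="Access to essential private services (credit scoring)",
        personal_data_categories=["identity", "income", "repayment history"],
        lawful_basis="Article 6(1)(b) - performance of a contract",
        special_category_data=False,
        identified_risks=["indirect discrimination via proxy variables"],
        mitigations=["disparate impact testing", "human review of rejections"],
        next_review=date(2026, 8, 2),
    )

Keeping both regimes' fields in one record makes the overlaps visible and gives the periodic review a single artefact to update.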

AI audit

Even if AI audits are not a requirement under the AI Act, they can add value by verifying whether compliance goals are met. An external perspective on your risk measures can surface potential improvements and provide assurance on your DPIAs. Additionally, outsourcing the audit means you do not need a full-time AI audit unit; instead, you can bring in experts when needed. For more information, contact our internal audit team.
