Arendt & Medernach Discusses AI Trends in Banking – Delano Magazine

In this feature article, Marc Mouton and Astrid Wagner, Partners at member firm Arendt & Medernach, discuss how generative AI can revolutionise banking operations, enhance risk management and improve financial performance, while also highlighting the evolving regulatory landscape and the need for ethical considerations and data protection.


As Seen In Delano Magazine

The growing role of artificial intelligence in processing information for legal documentation and counsel won't eliminate the need for human lawyers or ethical judgments, argue Marc Mouton and Astrid Wagner, partners at Arendt & Medernach.

Marc Mouton and Astrid Wagner, partners at Luxembourg law firm Arendt & Medernach, discussed with Delano in a written Q&A how generative AI is set to revolutionise banking operations, enhance risk management and improve financial performance, while also highlighting the evolving legal and regulatory landscape and the need for ethical considerations and data protection. Part 4 of Delano’s AI in finance 2024 series.

Kangkan Halder: How do you foresee AI, and particularly generative AI, influencing banking operations and financial performance in Luxembourg?

Generative AI is poised to transform banking operations significantly by streamlining processes, enhancing risk management and assessment, improving fraud detection and improving the customer experience and services. AI-powered systems can enhance compliance processes, optimise investment strategies and bolster cybersecurity measures, ultimately improving financial performance.

In your view, what are the potential implications of AI and GenAI on the regulatory and legal framework governing Luxembourg's banking and financial services?

The regulatory framework may need updates to address concerns around data privacy, accountability for algorithmic decisions and potential biases, to ensure ethical AI use, to define liability in the event of AI errors, and to maintain compliance with existing financial laws and regulations, such as outsourcing and IT governance and security rules, while accommodating emerging AI technologies in the sector.

In that respect, it is worth mentioning that the EU legislative bodies are working on an Artificial Intelligence Act which aims to regulate the development, deployment and use of AI within the EU. It seeks to establish a harmonised regulatory framework for AI systems, ensuring their responsible and ethical use while promoting innovation and competitiveness. The act categorises AI systems based on risk levels and sets requirements for transparency, data quality, human oversight and compliance with fundamental rights. Its goal is to foster trust in AI technologies while safeguarding individuals’ rights and safety. The EU AI Act also clarifies that AI regulatory sandboxes, which are supposed to establish a controlled environment for the development, testing and validation of innovative AI systems, should also allow for the testing of innovative AI systems in real-world conditions. A political agreement was reached a few days ago between the European Parliament and the Council on the AI Act.

In terms of liability, the EU proposal for a directive adapting non-contractual civil liability rules to AI seeks to address liability issues stemming from AI-related incidents causing harm.

Finally, the Luxembourg Financial Sector Supervisory Commission (CSSF) already provided helpful guidance a few years ago on the points of attention to consider when implementing AI systems. The CSSF has, amongst others, recommended setting up strong data governance, examining the quality of the data used, establishing controls involving humans, making sure that there is a sufficient level of AI skills to properly understand and supervise the AI solutions used, ensuring that the AI systems used comply with fairness and non-discrimination principles, applying appropriate data protection measures, and ensuring the accountability, as well as the explainability, auditability and safety, of the systems used.

Considering the rapid advancements in AI, how do you expect the legal strategies of Luxembourg banks in dealing with AI deployment to evolve over the next five years?

We expect that Luxembourg banks will need a strong legal strategy to leverage AI in compliance with legal and regulatory requirements. This means that they will, amongst others, need (i) to have a strong data protection framework to ensure that the data input into AI systems is processed in a way that is compliant with the applicable requirements on the protection of personal data and professional secrecy, as well as with upcoming EU regulations such as the AI Act, (ii) to have adequate processes when introducing AI tools to make sure that outsourcing regulations are complied with (where applicable) and that CSSF notifications are submitted where required, (iii) to have adequate IT governance arrangements with adapted policies and IT specialists with the necessary experience and expertise to oversee the use of the relevant tools, (iv) to have an appropriate IT security framework aligned with regulatory requirements, and (v) more generally, to comply with the key recommendations of the CSSF Whitepaper on Artificial Intelligence.

From a legal standpoint, how is AI reshaping the practices related to compliance and regulatory adherence in Luxembourg’s banking sector?

We expect that AI will indeed reshape the practices related to compliance and regulatory adherence in Luxembourg’s banking sector, as it will enable, for example, enhanced risk management and assessment, fraud detection and, more generally, the detection of deviations from legal and regulatory requirements.

This being said, adoption is still at an early stage. The Luxembourg central bank (BCL) and the CSSF published a joint thematic report on AI in May 2023. To gather information about the usage of AI (and machine learning in particular) in the Luxembourg financial sector and the particular use cases being implemented by the credit institutions, payment institutions and e-money institutions supervised by the CSSF, the BCL and the CSSF had launched a joint survey in October 2021. The thematic report reproduces a summary of the main findings from that survey. The conclusion of the report is that ‘the survey demonstrated that the usage of AI in the Luxembourg financial sector is currently fairly limited and still at an early stage, but investments in this technology and especially ML are estimated to increase, paving the way for a wider adoption of these innovative technologies in the near future.’

From a legal standpoint, the banking sector will need to ensure that the compliance and regulatory adherence tools work properly, as they may otherwise be counterproductive. The CSSF recommendations in its AI whitepaper are particularly relevant here: the banking sector will, for example, need to be able to explain how the AI systems work, to ensure that accurate data is fed into them, that personnel with adequate expertise supervise the systems and that appropriate IT governance and security arrangements are in place.

From your experience at Arendt, how do you foresee AI and GenAI, particularly in legal research and client advisory services, transforming the legal support offered to Luxembourg’s banking and financial services sector?

We foresee that AI will become an increasingly powerful support tool to draw on and leverage the information relevant to preparing legal documentation and advice (such as laws and regulations, regulatory guidance, case law, publications by legal scholars, previous work and templates of various documents), to generate first drafts of legal documents (such as contracts and fund and corporate documentation) and of legal analysis and advice, and to test the outcomes of legal analysis. We do not foresee, however, that it will replace human analysis, which will still be required to review, correct and enhance the analysis and advice to be provided.

We already started using machine learning a few years ago to enable faster due diligence and document review, and we are using AI to simulate dawn raids and forensic exercises. Now, with the fast development of generative AI systems, several other opportunities have opened up.

GenAI, with its capacity to ingest and analyse vast amounts of data, including case law and regulatory updates, makes it possible to extract information efficiently, saving valuable time for legal professionals.

By using natural language processing and machine learning, GenAI can provide comprehensive legal summaries. AI and GenAI will also help to significantly mitigate the risk of oversight or non-compliance in respect of anti-money laundering obligations and much more.

However, this requires a huge volume of very well-structured data and, most importantly, ensuring that the data is properly protected. No one wants to face the risk of accidental exposure of internal information.

There are other challenges, such as the cost of GenAI, which must not be underestimated, and of course one of the key challenges for the future: ensuring proper training for the next generation. How will you ensure that the younger generation is trained to challenge what AI has produced? How will you ensure that they become experts in their domain if AI can generate texts or clauses for them from a prompt?

In addition, we must be very cautious with legal research: we are all aware that legal references could have been created by the AI itself. In June this year, a US judge imposed sanctions on two New York lawyers who submitted a legal brief that included six fictitious case citations generated by an AI system.

In conclusion, generative AI is a great opportunity, but ethical considerations must be taken into account, as well as data protection aspects.

The full article can be accessed here.

 
