Digital Legal Lab Newsletter – October 2024
Digital Legal Talks 2024: Registration Now Open! | New Publication: Taner Kuru on Training LLMs with Public Data | Explaining AI: Ljubiša Metikoš and Jef Ausloos on GDPR and the Right to an Explanation | Judicial Profiling Systems: Contestability in Design – Ljubiša Metikoš’s New Article | Privacy and GenAI: Duffourc, Kollnig, and Gerke Explore Data Lifecycle Risks
Hi! Thanks for reading the monthly Digital Legal Lab newsletter, where we round up the latest news & views from our cross-university research collaboration on digital legal studies. For more information, updates and events, please visit our website or follow us on social media.
What's new at the Digital Legal Lab
🚀 Digital Legal Talks 2024 – Registration Now Open!
We’re thrilled to announce that Digital Legal Talks 2024 will take place November 28-29 in Utrecht as a fully in-person conference. Bringing together experts to explore topics like AI regulation, data sharing, and the digitalization of justice systems, this event is not to be missed!
🕒 Register (free of charge until November 17): here
🗓️ Event Program and More Information
📄 New Publication: Taner Kuru on Training LLMs with Public Data
We’re excited to share that our lab member Taner Kuru has published a new article exploring the legal implications of training large language models (LLMs), such as ChatGPT, on publicly accessible online data. Taner’s research tackles critical questions within the EU data protection framework, specifically whether training these models on public data complies with the GDPR. Drawing on recent rulings by the CJEU, his paper argues that the processing activities involved should fall under the Article 9 GDPR regime, and it discusses the exceptions in Article 9(2) that may justify this type of processing.
The study identifies key challenges, including the difficulty of obtaining explicit consent from the individuals concerned and the limited scope of data that can lawfully be used. Taner’s paper ultimately points to the need for clear legal bases to support AI training, suggesting that this will be a pivotal issue in the future of EU data protection.
🔍 Explaining AI: Ljubiša Metikoš and Jef Ausloos on GDPR and the Right to an Explanation
DLS members Ljubiša Metikoš and Jef Ausloos have just released a thought-provoking paper on the right to an explanation under the GDPR and the AI Act. The paper examines GDPR case law on this right across the EU and uses those insights to interpret the explainability requirements of the AI Act. The authors analyze the scope, content, and balancing of explainability rights within both frameworks, addressing critical questions such as:
- Does the right to an explanation apply throughout the AI development process?
- Can companies use intellectual property rights to protect their AI systems?
- Must AI training datasets be disclosed?
This paper provides a comprehensive interpretation of GDPR case law, aiming to resolve some of the central questions affecting explainability rights under EU regulation.
⚖️ Judicial Profiling Systems: Contestability in Design – Ljubiša Metikoš’s New Article
In a new article, "Explaining and Contesting Judicial Profiling Systems: Beyond a Procedural Right to an Explanation," Ljubiša Metikoš explores the critical need for litigants to be able to contest how courts use profiling systems. The article argues that a procedural right to an explanation alone does not adequately ensure fairness in judicial profiling; instead, contestability should be integrated directly into the design phase of these systems.
Metikoš’s work draws on a range of regulatory frameworks, including the AI Act, the right to a fair trial (Article 6 ECHR), the GDPR, and the Law Enforcement Directive. He highlights the gaps and limitations of Europe’s current regulatory approach and calls for a more robust system to address these shortcomings in judicial profiling practices.
🔒 Privacy and GenAI: Duffourc, Kollnig, and Gerke Explore Data Lifecycle Risks
In their new article, "Privacy of Personal Data in the Generative AI Data Lifecycle," Mindy Duffourc, Konrad Kollnig, and Professor Sara Gerke explore the privacy risks posed by the use of personal data in Generative AI (GenAI) models. Published in the Journal of Intellectual Property & Entertainment Law (JIPEL), the research highlights how personal data, including sensitive information, is processed within GenAI systems, raising concerns about privacy violations and broader societal impacts.
The article outlines how personal data is integrated into GenAI models, often exposing sensitive information that could be used to profile people for advertising, surveillance, or discrimination. The authors also discuss how U.S. and EU data privacy frameworks address these challenges, urging stronger protections for personal data throughout the GenAI lifecycle.