Access to text and speech data is essential for research, yet personal and sensitive information often prevents open sharing. Techniques such as pseudonymization and anonymization offer potential solutions, but their effectiveness, limitations, and impact on data utility require deeper investigation. Balancing privacy protection with meaningful scientific use remains a key challenge.
At the same time, legal and ethical requirements increasingly shape how language resources can be created, processed, and distributed. Regulatory frameworks, such as the GDPR, the Data Act, and the Artificial Intelligence Act, affect access, reuse, and documentation duties for both text and speech data, creating a complex environment that demands interdisciplinary insight.
The workshop brings these two perspectives together by addressing both the technical and practical aspects of de-identification as well as the legal and ethical obligations governing data handling. Topics include anonymization and pseudonymization methods, compliance in practical workflows, provenance and rights tracking, and emerging approaches to legal metadata. The goal is to foster responsible, legally sound, and technically robust innovation in human language technologies.
For inquiries, please contact ingo.siegert@ovgu.de for questions about LEGAL2026 or mormor.karl@svenska.gu.se for questions about CALD-pseudo 2026.
Dr Maja Bogataj Jančič is the founder and head of the Open Data and Intellectual Property Institute ODIPI. She has also been the head of the Institute for Intellectual Property (IPI) since its establishment in 2004. Maja is a copyright expert; her recent work focuses on open science, open data, data governance and artificial intelligence, as well as the legal framework of copyright and data for research and science. Maja is the National Coordinator for Slovenia and the Regional Coordinator for the six Western Balkan countries of the Knowledge Rights 21 project. From 2020 to 2024, she co-chaired the Data Governance Working Group of the Global Partnership on Artificial Intelligence (GPAI).
Maja is a member of the Expert Council of the Slovenian Open Science Community (SSOZ) and the head of the newly established Expert Body for Legal Issues Related to Copyright and Data Governance. She is a member of the Advisory Committee on Copyright and other Legal Matters (CLM) of the International Federation of Library Associations and Institutions (IFLA). Maja is the president of the supervisory board of the National and University Library (NUK). She is a Senior Research Fellow at the Centre on Knowledge Governance and a Vice President of COMMUNIA. She has led Creative Commons Slovenia since 2003.
Maja graduated from the Faculty of Law in Ljubljana (1996), obtained her LL.M. from the Faculty of Law in Ljubljana (1999, Economics), Harvard Law School (2000, Law) and Facoltà di Giurisprudenza di Torino (2005, Intellectual Property), and her Ph.D. from the Faculty of Law in Ljubljana (2006, Copyright).
t.b.d.
t.b.d.
t.b.d.
t.b.d.
Authors are invited to submit original and unpublished research papers in the following categories:
Long papers (up to 8 pages) for substantial contributions
Short papers (up to 4 pages)
The full papers will be published as workshop proceedings alongside the LREC main conference. They should follow the LREC stylesheet, which is available on the Authors' kit page of the conference website. Unlike the main conference, we allow appendices of up to 10 pages already at the review stage. However, reviewers will not be required to consult the appendices and must be able to review the paper based solely on its main body (as if there were no appendices).
Submission deadline: 20th of February 2026
Submission link: https://softconf.com/lrec2026/LEGAL2026/
When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that have been used for the work described in the paper or are a new result of their research. Moreover, ELRA encourages all LREC authors to share the described LRs (data, tools, services, etc.) to enable their reuse and the replicability of experiments (including evaluation ones).
Regulatory frameworks and global governance: Impact of the GDPR, EU Data Act, Data Governance Act, Digital Services Act, AI Act, and international regulations (e.g., China’s 2023 Draft Rules on Generative AI, U.S. AI Bill of Rights) on access, circulation, and reuse of language and speech data; statutory exceptions for text and data mining.
Intellectual property, data protection, and LLM governance: Legal issues surrounding training data, derivative datasets, and model outputs; copyright, data governance, and data protection obligations in the development and deployment of Large Language Models.
Ethics, fairness, trust, and transparency: Ethical considerations in personal data collection and reuse; ensuring fairness, transparency, and accountability in language and speech technologies.
Compliance in practice: Legal metadata, provenance, consent documentation, usage rights, and machine-readable licensing; practical workflows for lawful data collection, annotation, and sharing.
Operationalizing compliance: Tools and methods that support automated compliance checking, risk detection, consent tracking, and policy-aware data filtering; language technologies assisting in legal compliance.
Emerging and grey areas: Legal uncertainties around synthetic or augmented data, LLM-generated content, and cross-modal leakage; evolving interpretations of anonymization thresholds.
Interdisciplinary and cross-border coordination: Global harmonization of legal and technical approaches; collaboration models between researchers, legal experts, and infrastructure providers; navigating jurisdictional inconsistencies.
Detection and classification of personal information (PI): Automatic identification of PI in text, speech, and multimodal data; context-dependent and indirect indicators of identity.
Replacement and transformation of PI: Context-sensitive pseudonymization and anonymization methods; substitution, masking, obfuscation; maintaining coherence across discourse and modalities.
Utility and bias after de-identification: Effects of de-identification on downstream task performance, linguistic research validity, readability, and bias amplification or reduction.
Approaches to evaluation and adversarial testing: Metrics and frameworks for assessing de-identification quality; adversarial re-identification attempts; robustness and failure-mode analysis.
Dataset creation for de-identification research: Methodological, ethical, and annotation-related considerations in building corpora for training or evaluating de-identification systems.
Low-resource scenarios: Techniques for de-identification in settings with limited data, scarce annotations, or underrepresented languages; transfer and multilingual approaches.
Speech-specific challenges: Removing speaker identity cues in audio; voice anonymization; cross-modal leakage between text, transcripts, and acoustic features.
Cross-disciplinary applications and challenges: Integrating de-identification techniques into real-world workflows in areas such as linguistics, social sciences, digital humanities, healthcare, and other private- or public-sector data environments.
Deadline for submission
Notification of acceptance
Submission of final version of accepted papers (strict)
Workshop day
Ingo Siegert, Otto-von-Guericke-Universität Magdeburg, Germany
Kossay Talmoudi, ELRA/ELDA, France
Khalid Choukri, ELRA/ELDA, France
Paweł Kamocki, IDS Mannheim, Germany
Maria Irena Szawerna, University of Gothenburg, Sweden
Simon Dobnik, University of Gothenburg, Sweden
Therese Lindström Tiedemann, University of Helsinki, Finland
Pierre Lison, Norwegian Computing Center & University of Oslo, Norway
Ildikó Pilán, Norwegian Computing Center, Norway
Ricardo Muñoz Sánchez, University of Gothenburg, Sweden
Lisa Södergård, University of Helsinki, Finland
Elena Volodina, University of Gothenburg, Sweden
Xuan-Son Vu, Lund University, Sweden
Khalid Choukri
Claudia Cevenini
Erik Ketzan
Prodromos Tsiavos
Andreas Witt
Paweł Kamocki
Kim Nayyer
Krister Lindén
Ingo Siegert
Catherine Jasserand
Isabel Trancoso
Hendrik Buschmeier
Annett Jorschick
Lars Ahrenberg
Terhi Ainiala
Emilia Aldrin
Lucas Georges Gabriel Charpentier
Simon Dobnik
Emilie Francis
Linnea Gustafsson
Ivan Habernal
Udo Hahn
Aron Henriksson
Nikolai Ilinykh
Dimitrios Kokkinakis
Herb Lange
Tomas Lehecka
Therese Lindström Tiedemann
Pierre Lison
Peter Ljunglöf
Ricardo Muñoz Sánchez
Ildikó Pilán
Tatjana Scheffler
Maria Irena Szawerna
Lisa Södergård
Vicenç Torra
Thomas Vakili
Shubham Vatsal
Elena Volodina
Xuan-Son Vu
Jan-Ola Östman