European security research projects operate at the intersection of technological innovation, fundamental rights, and complex regulatory ecosystems. DETECTOR exemplifies this evolution.

The project’s ambition is to support law enforcement authorities and forensic experts in identifying, analysing, and responding to AI-generated media manipulation. Achieving it requires not only scientific excellence but also a rigorous and demonstrable commitment to legal and ethical compliance. In a domain shaped by sensitive data, cross-border cooperation, and rapidly evolving techniques of digital deception, adherence to the law is not merely a supporting activity; in DETECTOR it is a structural pillar of the project itself.

From Innovation to Regulation: A Changing Landscape

The legal context in which DETECTOR operates has become profoundly more intricate in recent years. Regulation is no longer confined to classical data protection or to criminal and criminal-procedure norms. Instead, it encompasses a dense web of interdependent frameworks: the General Data Protection Regulation (GDPR) and the Law Enforcement Directive (LED) for personal data; the ePrivacy regime; the EU Charter of Fundamental Rights; national constitutional safeguards; and specialised legislation governing law enforcement activities and the admissibility of forensic evidence.

Regulatory Implications of Deepfakes and Generative AI

The emergence of generative AI and deepfake technologies poses regulatory challenges of its own. Their misuse creates new threats to public trust, judicial processes, and national security. DETECTOR must therefore demonstrate not only that its tools are technically capable of detecting manipulation but also that the development of these tools aligns with the ethical and legal principles that the AI Act and the GDPR seek to protect: transparency, accountability, proportionality, and respect for human dignity. This dual imperative—innovation paired with compliance—is a hallmark of responsible security research.

On top of these established domains, the EU introduced the world’s first comprehensive regulatory instrument for artificial intelligence: the EU AI Act. This regulation represents a landmark shift in the expectations placed on research involving or producing AI systems, particularly where outputs may ultimately be deployed in high-risk contexts. The AI Act imposes requirements that reach deep into the design, training, evaluation, and documentation of AI models. It mandates transparency, robustness, risk classification, human oversight, and quality of training data.

For a project like DETECTOR, which aims to develop tools used by law enforcement authorities, forensic institutes, and judicial actors, the Act’s relevance is immediate and significant. Even when DETECTOR does not directly place a high-risk AI system on the market, its research and prototyping activities must anticipate the regulatory environment into which future operational systems will enter. In this sense, DETECTOR is not only producing technology; it is shaping the compliance architecture needed to ensure that technology is ultimately admissible, trustworthy, and aligned with the Union’s fundamental values.

Embedding Legal and Ethical Governance in Research Practice

This complexity illustrates why early-stage legal analysis is indispensable. DETECTOR’s work in Task 2.1, conducted at the very beginning of the project, embodies this principle. Instead of treating legal compliance as a downstream validation exercise, the consortium placed it “front and centre” by dedicating resources to a systematic mapping of the legal frameworks that condition the project’s activities. This approach acknowledges that compliance is not a procedural hurdle but a design constraint: it informs how datasets may be created, what content may be processed, how tools may be tested, and how results may eventually be disseminated or transferred to operational partners.

DETECTOR’s approach underscores the consortium’s responsibility for proper implementation, risk mitigation, and respect for ethics and data protection. These obligations are not symbolic. They impose enforceable duties: to assess risks, ensure confidentiality and security, prevent conflicts of interest, maintain accurate records, and comply with all applicable EU and national laws. Any processing of personal data must be strictly justified, minimised, and subject to strong safeguards—requirements that are especially pronounced in a project dealing with media evidence, training datasets, and potentially sensitive information.

One of the central challenges the consortium faces is the heterogeneity of applicable legal regimes. DETECTOR brings together a wide range of actors—research institutions, forensic experts, private companies, and law enforcement agencies—from multiple Member States. Each actor operates within its own national laws on criminal procedure, digital forensics, evidence admissibility, and police powers. Yet the project also engages with European-level norms that demand harmonisation and the protection of fundamental rights across borders.

Contributing to Europe’s Forensic and Policy Landscape

As DETECTOR concludes its foundational legal work in February 2026, the Consortium is preparing some of the project’s earliest deliverables: a structured analysis of the most relevant applicable legal frameworks and their implications for research methodologies, data governance, and tool development. These deliverables will serve two purposes. First, they will provide practical guidance for the Consortium, enabling all partners to operate within a clear and harmonised compliance structure throughout the project lifecycle. Second, they will contribute meaningfully to the wider scientific and policy debate by articulating how cutting-edge forensic AI research can be conducted in a legally robust manner within the EU’s evolving regulatory landscape. For this reason, the first deliverable (D2.1) is not classified and will be made publicly available.

In this way, DETECTOR not only navigates this complexity but also demonstrates leadership in addressing it. Through a deliberate, well-resourced, and anticipatory compliance strategy, the project shows that excellence in forensic AI research is inseparable from excellence in legal and ethical governance.

Interested in the DETECTOR project? Join our community and get involved — click this link.

Author: Marco Gercke, Cybercrime Research Institute (CRI)