Imagine getting a video call from your manager asking for an urgent payment. The face and voice seem real—but it’s not them. Advances in artificial intelligence (AI) now make it possible to generate media that looks and sounds authentic. These creations, known as synthetic media and deepfakes, are transforming not just entertainment but also the landscape of cybercrime and identity theft.
What Is Synthetic Media?
Synthetic media is any content—images, videos, audio, or text—created or modified by artificial intelligence rather than by humans. Algorithms can generate realistic human faces, clone voices, or alter existing footage to make people appear to say or do something they never did.
In many cases, synthetic media has legitimate uses: creating lifelike virtual assistants, improving accessibility with AI voiceovers, or enhancing films and video games. But in the wrong hands, it becomes a powerful tool for deception. Cybercriminals can fabricate identities, forge official-looking videos, or impersonate trusted individuals to commit fraud. A synthetic voice mimicking a CEO, for example, can convince an employee to authorise a fraudulent money transfer—no hacking skills required.
The danger lies in believability. When a message, call, or video sounds and looks genuine, people are more likely to act without question—handing over credentials, approving transfers, or sharing sensitive data.
What Are Deepfakes?
Deepfakes are the most advanced—and dangerous—form of synthetic media. Created with “deep learning,” a branch of AI, they realistically map one person’s face or voice onto another’s. The result: videos or audio recordings that look and sound authentic, even though they’re entirely fake.
Deepfakes have been used in various malicious ways:
- Financial scams: In 2024, fraudsters used a video call with a fake company executive to trick a multinational firm into wiring over $25 million.
- Identity theft: AI voice cloning has been used to bypass phone-based identity verification systems.
- Disinformation: Deepfake videos of politicians and celebrities are spreading online, distorting public opinion and eroding trust.
- Harassment and defamation: Fake explicit videos and impersonations have been used to damage reputations.
Because these fakes are so realistic, victims often don’t realise they’re being deceived until it’s too late. The implications for cybersecurity are serious—deepfakes can undermine authentication systems, manipulate public trust, and make detecting fraud much harder.
Why It Matters for Cybersecurity
Synthetic media and deepfakes blur the line between truth and deception. They erode traditional trust signals—like recognising someone’s voice or face—and make verification essential. For individuals, this means being sceptical of unexpected calls or videos, even from familiar people. For organisations, it means updating security protocols:
- Use multi-factor verification for financial approvals.
- Train staff to recognise signs of manipulation.
- Confirm unusual requests through separate channels.
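The multi-channel principle behind these protocols can be illustrated with a minimal sketch. The class and channel names below are hypothetical, invented purely for illustration: the idea is simply that no single channel—however convincing the face or voice on it—is ever enough to release a payment on its own.

```python
# Toy sketch (hypothetical names): a payment request is released only after
# confirmations arrive via independent channels, e.g. the original video call
# PLUS a call-back on a phone number already on file.

from dataclasses import dataclass, field


@dataclass
class PaymentRequest:
    amount: float
    requester: str
    # Channels that have independently confirmed this request.
    confirmed_channels: set = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        self.confirmed_channels.add(channel)

    def approved(self, required: int = 2) -> bool:
        # Require confirmations from at least `required` distinct channels.
        return len(self.confirmed_channels) >= required


req = PaymentRequest(amount=25_000_000, requester="CFO")
req.confirm("video_call")             # the (possibly deepfaked) original request
print(req.approved())                 # False: one channel is never enough
req.confirm("callback_known_number")  # out-of-band confirmation
print(req.approved())                 # True: two independent channels agree
```

The design choice worth noting is that the check counts *distinct* channels, so a deepfaked caller cannot satisfy the policy by repeating the request on the same medium.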
In summary, synthetic media and deepfakes are reshaping the landscape of cybercrime. By understanding how they work and recognising their risks, individuals and organisations can better protect themselves from the new generation of digital deception.
DETECTOR Project
To confront these emerging threats, the European Commission has funded the DETECTOR Project — a Horizon Europe research initiative dedicated to addressing the growing challenge of deepfakes and synthetic media. Its mission is to equip forensic experts, law enforcement agencies, and judicial authorities with state-of-the-art tools to verify digital evidence, enhance investigations, and uphold trust in democratic institutions.
Bringing together expertise from artificial intelligence, digital forensics, law, and ethics, DETECTOR advances detection technologies, develops specialised datasets, and builds Europe’s capacity to counter malicious media manipulation.
Follow DETECTOR’s progress, insights, and resources on our social media channels and subscribe to the newsletter to stay updated on how Europe is responding to the challenges of synthetic media and deepfakes.


