Deepfakes and the Digital Wild West

Denmark’s government has announced plans to expand copyright law in “a pioneering measure that would allow people to demand that social media platforms take down digital forgeries” as part of an amendment to the existing Danish Copyright Act. The proposed change is not standalone legislation, so it carries no specific name like the UK’s “Online Safety Act” or the US’s “Take It Down Act”; instead, it amends pre-existing law.
According to the Danish government, the revised legislation will grant individuals legal control over their digital likenesses, voices, and facial features, thereby mitigating the risks associated with unauthorized AI-generated content.
If you’ve ever seen a video comparing AI-generated videos from just two years ago to those today, you’ll appreciate the jump in progress, and the feeling of whiplash it causes.
Videos generated in 2023 were, frankly, abominations: clunky, crudely rendered as if drawn by a kindergartener, and full of physics-breaking movement. In 2025, however, they are terrifyingly real. Your own face could be looking back at you, speaking words you never spoke, in your voice and cadence, so convincing that you might believe you are watching a video of yourself.
AI offers a world of opportunities. People born even in my generation never imagined carrying a powerful computer in their pockets, one capable of connecting people across the world by video call, answering questions in mere seconds, or broadcasting thoughts instantly. In the same way, the idea that AI could become powerful enough to produce an entirely fake video of people who have never existed was the stuff of science fiction. And yet, here we are.
Just as superfast Internet speeds changed the way we interact with and even perceive the world around us, AI seems to be the next quantum leap in changing mankind’s relationship with technology. Already, many are using AI in their daily lives, from innocuous uses of making themselves look like Studio Ghibli characters, to asking ChatGPT or Grok to write them an email they’re too busy (or too lazy, depending on who you ask) to compose. But just as people once complained about the proliferation of the mobile phone, there will always be skeptics about technology and its effects on society.
This is not without justification. Already this year, in my country, someone was arrested for using AI to “create deepfake porn,” having manipulated images shared by a woman he knew to imitate pornographic content. The sentencing judge commented, “These people had a right to post their images on social media platforms without fear of those images being warped for sexual purposes,” and of course, he’s right.
But—and I am not defending this evil man at all, or minimizing his crimes—this isn’t really a “deepfake.” That term grants it a level of sophistication it didn’t involve. What this man did was make some crude Photoshop edits, and that has been around as long as the Internet has existed. People have long edited images of each other, faked messages, falsely attributed quotes, and all sorts—do you remember “Let Me Tweet That”? I recall a student at my high school having his face put on the naked body of a female porn star. It’s crude, and it’s always been there—but it clearly was not a believable image.
But the increasing realism in images means that plausible fakes of real people are here. The sophistication that AI has reached now goes far beyond putting someone’s face on another’s body. It is building, from the ground up, a convincingly realistic and almost undetectably fake image or video of a person without his or her consent or knowledge. That is the key distinction between modern AI deepfakes and the primitive photoshops of the past.
Britain has tried to make it illegal to create sexually explicit “deepfakes” as part of the Crime and Policing Bill (2025), but the proposed legislation again overstates the sophistication these things actually involve, casting a wide net over “creating a sexually explicit deepfake of someone without their consent.”
There is a risk, though: Denmark’s law and Britain’s are playing catch-up with a technology changing so rapidly that a law’s usefulness can vanish before the ink is dry. Denmark seems aware of this—Danish Culture Minister Jakob Engel-Schmidt emphasized that “technology has outpaced our current legislation.”
That’s why Denmark has sought to define a “deepfake” in its law as a “very realistic digital representation of a person, including their appearance and voice.” By baking the “digital” aspect of the imitation into the law, Denmark is explicitly focused on AI-generated content, and—quite importantly—has included an exemption for parodies and satires.
With such laws, there are concerns over the limitations they impose on free speech and free expression. They are vague and imprecise, and for that reason they must be viewed with caution. While Denmark has attempted to define “deepfakes,” the law’s broad description of them as “very realistic digital representations” could be interpreted to cover content that is neither harmful nor misleading, meaning that comedy, or simply artistic expression, could be made illegal. Even if this is considered an acceptable trade-off, it opens the door to the definition of satire being set by the government, not by the people.
Similarly, “parody” remains ill-defined. Is a mocking video of a major politician, made to say things he or she obviously does not believe or agree with, parody, satire, or abuse? Pornographic deepfakes are clearly unacceptable, but laws need to be precise enough both to identify harm and to tread the line between free expression and protecting those at risk.
The post Deepfakes and the Digital Wild West was first published by the Foundation for Economic Education, and is republished here with permission. Please support their efforts.