Last month’s Munich Labor Court ruling rocked the media world, granting Michael Schumacher’s family £170,239 ($216,641) in compensation from German celebrity tabloid Die Aktuelle after it published an AI-generated “interview” with the Formula 1 legend.
In April 2023, Die Aktuelle featured a cover story boasting an exclusive interview with Schumacher, who has been absent from public life since a serious brain injury from a skiing accident in December 2013.
The article concluded by revealing that the interview was entirely created by AI. Schumacher’s family responded with legal action. Funke Media Group, the publisher of Die Aktuelle, apologized to the family, provided compensation and fired the editor-in-chief responsible for the story.
This verdict carries significant weight for how AI is used in journalism and media, and it raises important questions about ethics and integrity in news reporting. The ruling sets a vital precedent, signaling a strong stand against the unethical and deceptive use of AI in news content.
But does it go far enough to deter future offenses?
Legal implications of the German court decision
Let’s get this out of the way: The court’s decision is, at least to some extent, a win for journalistic integrity, privacy rights and intellectual property law. By siding with the Schumacher family, the court clearly stated that fabricating quotes or content with AI and presenting it as real news is a serious ethical breach that can lead to legal trouble.
The verdict underscores the importance of respecting individual privacy, especially in sensitive cases involving personal health. Since Michael Schumacher’s tragic accident, his family has been deeply protective of his privacy. The magazine’s disregard for their wishes was not just unethical but also legally punishable. The decision makes it clear that exploiting someone’s privacy for sensationalism or profit is unacceptable and will have consequences.
The ethical concerns of AI and journalism
This case shines a spotlight on the risks and ethical issues tied to using AI in journalism — while this was a case heard in a German court, the legal concepts that underpin the case would also apply in the United States and elsewhere.
While AI can enhance research, analysis and storytelling, it should never be used to create or misrepresent facts. The magazine’s attempt to pass off AI-generated content as real was a blatant breach of journalistic ethics and a betrayal of public trust.
Attorney Ken Fulginiti aptly noted, “The Schumacher verdict needs to be a legal wake-up call for media organizations and journalists to be cautious and establish clear guidelines for AI use in their work. Transparency is crucial — AI-generated content must be clearly distinguished from factual reporting based on credible sources and real interviews. Ignoring this can not only tarnish a publication’s reputation but also lead to serious legal repercussions, as we have just begun to see in this case.”
For all of us, this case needs to sound alarm bells about the misuse of AI to spread misinformation or create deepfakes — that is, synthetic media designed to manipulate or fabricate content. As AI technology advances, it’s essential to develop robust methods for verifying information authenticity and protecting against malicious uses. The law needs to move much faster to keep up.
Where this goes from here
One thing is crystal clear: A $216,641 penalty is borderline laughable. The number seems almost absurdly low; from where I sit, Die Aktuelle should have been hit with at least a $2 million fine.
In practical terms, because the number is so low, this isn’t a deterrent. One or two viral pieces can be worth far more, both practically and reputationally, than the cost of the fine, so a penalty this size simply incentivizes publications to build a small potential fine into their cost of doing business.
We need court decisions that address the ethical and legal implications of using AI in journalism and media, and that have the weight behind them to sufficiently penalize bad acts and actors. If we become desensitized to the fabrication of content and its presentation as factual, we are going to face severe consequences as a society.
Here’s an interesting irony: Despite the prevalence of perceived “fake news,” we still recognize it as news. Regardless of our political views or the type of media outlet, we still want and need to believe that the source of our news is real. We may agree or disagree with the source and its message, but the expectation that a human was involved remains fundamental.
This is, of course, all just the beginning of the beginning. AI is still in its infancy. Transparency, fact-checking and adherence to journalistic principles must remain paramount, even as AI tools become more sophisticated and integrated into the news gathering and storytelling process. And when those standards fail, the courts need to come in with a hammer, rather than the flyswatter used in the Schumacher case.
Aron Solomon, JD, is the chief strategy officer for Amplify. He has taught entrepreneurship at McGill University and the University of Pennsylvania, and was elected to Fastcase 50, recognizing the top 50 legal innovators in the world.