RoboNewsWire – Latest Insights on AI, Robotics, Crypto and Tech Innovations

AI deepfakes protection or internet freedom threat?

By GT · June 26, 2025 · AI


Critics fear the revised NO FAKES Act has morphed from targeted AI deepfakes protection into sweeping censorship powers.

What began as a seemingly reasonable attempt to tackle AI-generated deepfakes has snowballed into something far more troubling, according to digital rights advocates. The much-discussed Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act – originally aimed at preventing unauthorised digital replicas of people – now threatens to fundamentally alter how the internet functions.

The bill’s expansion has set alarm bells ringing throughout the tech community. It’s gone well beyond simply protecting celebrities from fake videos to potentially creating a sweeping censorship framework.

From sensible safeguards to a sledgehammer approach

The initial idea wasn’t entirely misguided: to create protections against AI systems generating fake videos of real people without permission. We’ve all seen those unsettling deepfakes circulating online.

But rather than crafting narrow, targeted measures, lawmakers have opted for what the Electronic Frontier Foundation calls a “federalised image-licensing system” that goes far beyond reasonable protections.

“The updated bill doubles down on that initial mistaken approach,” the EFF notes, “by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them.”

What’s particularly worrying is the NO FAKES Act’s requirement for nearly every internet platform to implement systems that would not only remove content after receiving takedown notices but also prevent similar content from ever being uploaded again. Essentially, it’s forcing platforms to deploy content filters that have proven notoriously unreliable in other contexts.
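Mechanically, a "notice and staydown" duty amounts to keeping a blocklist of taken-down content and screening every future upload against it. A minimal sketch, using exact hashing and hypothetical names (not any platform's actual implementation), shows both the mechanism and its obvious weakness:

```python
import hashlib


class StaydownFilter:
    """Toy notice-and-staydown store: once content is taken down,
    any byte-identical re-upload is rejected. Hypothetical sketch,
    not a real platform's system."""

    def __init__(self):
        self.blocked = set()

    def takedown(self, content: bytes) -> None:
        # Record a fingerprint of the removed content.
        self.blocked.add(hashlib.sha256(content).hexdigest())

    def allow_upload(self, content: bytes) -> bool:
        # Reject anything whose fingerprint matches a prior takedown.
        return hashlib.sha256(content).hexdigest() not in self.blocked


f = StaydownFilter()
clip = b"offending video bytes"
f.takedown(clip)

print(f.allow_upload(clip))            # False: identical re-upload rejected
print(f.allow_upload(clip + b"\x00"))  # True: one changed byte evades the filter
```

Exact matching is trivially evaded by re-encoding, which is why deployed filters fall back on fuzzy similarity matching instead; that is where the over-blocking problems begin.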

A chilling effect on innovation

Perhaps most concerning for the AI sector is how the NO FAKES Act targets the tools themselves. The revised bill wouldn’t just go after harmful content; it would potentially shut down entire development platforms and software tools that could be used to create unauthorised images.

This approach feels reminiscent of trying to ban word processors because someone might use one to write defamatory content. The bill includes some limitations (e.g. tools must be “primarily designed” for making unauthorised replicas or have limited other commercial uses) but these distinctions are notoriously subject to interpretation.

Small UK startups venturing into AI image generation could find themselves caught in expensive legal battles based on flimsy allegations long before they have a chance to establish themselves. Meanwhile, tech giants with armies of lawyers can better weather such storms, potentially entrenching their dominance.

Anyone who’s dealt with YouTube’s Content ID system or similar copyright filtering tools knows how frustratingly imprecise they can be. These systems routinely flag legitimate content, like musicians performing their own songs or creators using material under fair dealing provisions.

The NO FAKES Act would effectively mandate similar filtering systems across the internet. While it includes carve-outs for parody, satire, and commentary, enforcing these distinctions algorithmically has proven virtually impossible.

“These systems often flag things that are similar but not the same,” the EFF explains, “like two different people playing the same piece of public domain music.”
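The failure mode the EFF describes is easy to reproduce: any filter that matches on similarity rather than identity will treat two independent works derived from the same public-domain source as copies of each other. A toy sketch in Python, using hypothetical note-sequence fingerprints (not any real fingerprinting algorithm):

```python
from difflib import SequenceMatcher

# Toy fingerprints: note sequences as a filter might extract them from
# two *independent* recordings of the same public-domain melody
# ("Ode to Joy"), played by different performers. Hypothetical data.
performer_a = "E E F G G F E D C C D E E D D E E F G G F E D C C D E D C C".split()
performer_b = "E E F G G F E D C C D E E C D E E F G G F E D C C D E D D C".split()

THRESHOLD = 0.8  # block uploads scoring above this similarity


def similarity(a, b):
    """Fraction of matching elements between two fingerprints (0..1)."""
    return SequenceMatcher(None, a, b).ratio()


score = similarity(performer_a, performer_b)
# Two legitimate, separately made recordings score as near-duplicates,
# so a filter tuned to catch re-uploads would block the second one.
print(f"similarity = {score:.2f} -> blocked: {score >= THRESHOLD}")
```

Tightening the threshold to avoid such false positives just lets evasive re-uploads through; there is no setting that cleanly separates the two.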

For smaller platforms without Google-scale resources, implementing such filters could prove prohibitively expensive. The likely outcome? Many would simply over-censor to avoid legal risk.

One might expect major tech companies to oppose such sweeping regulation, yet many have remained conspicuously quiet. Some industry observers suggest this isn't coincidental: established giants can more easily absorb compliance costs that would crush smaller competitors.

“It is probably not a coincidence that some of these very giants are okay with this new version of NO FAKES,” the EFF notes.

This pattern repeats throughout the history of tech regulation: what looks like regulation reining in Big Tech often ends up cementing incumbents' market positions by creating barriers too costly for newcomers to overcome.

NO FAKES Act threatens anonymous speech

Tucked away in the legislation is another troubling provision that could expose anonymous internet users based on mere allegations. The bill would allow anyone to obtain a subpoena from a court clerk – without judicial review or evidence – forcing services to reveal identifying information about users accused of creating unauthorised replicas.

History shows such mechanisms are ripe for abuse. Critics with valid points can be unmasked and potentially harassed when their commentary includes screenshots or quotes from the very people trying to silence them.

This vulnerability could have a profound effect on legitimate criticism and whistleblowing. Imagine exposing corporate misconduct only to have your identity revealed through a rubber-stamp subpoena process.

This push for additional regulation seems odd given that Congress recently passed the Take It Down Act, which already targets images involving intimate or sexual content. That legislation itself raised privacy concerns, particularly around monitoring encrypted communications.

Rather than assess the impacts of existing legislation, lawmakers seem determined to push forward with broader restrictions that could reshape internet governance for decades to come.

The coming weeks will prove critical as the NO FAKES Act moves through the legislative process. For anyone who values internet freedom, innovation, and balanced approaches to emerging technology challenges, this bears close watching indeed.

(Photo by Markus Spiske)

See also: The OpenAI Files: Ex-staff claim profit greed betraying AI safety

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge.


