Close Menu
RoboNewsWire – Latest Insights on AI, Robotics, Crypto and Tech Innovations
  • Home
  • AI
  • Crypto
  • Cybersecurity
  • IT
  • Energy
  • Robotics
  • TechCrunch
  • Technology
What's Hot

Investors trust Google more than Meta when comes to spending on AI

April 30, 2026

Paragon is not collaborating with Italian authorities probing spyware attacks, report says

April 28, 2026

Microsoft cuts OpenAI revenue share as their AI alliance loosens

April 28, 2026
Facebook X (Twitter) Instagram
Trending
  • Investors trust Google more than Meta when comes to spending on AI
  • Paragon is not collaborating with Italian authorities probing spyware attacks, report says
  • Microsoft cuts OpenAI revenue share as their AI alliance loosens
  • Robotically assembled building blocks could make construction more efficient and sustainable | MIT News
  • AI showdown: Musk and Altman go to trial in fight over OpenAI’s beginnings
  • U.S., Iran seize ships as war evolves into standoff over Strait of Hormuz
  • Google launches training and inference TPUs in latest shot at Nvidia
  • Zoom teams up with World to verify humans in meetings
  • Home
  • About Us
  • Advertise
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
Facebook X (Twitter) Instagram
RoboNewsWire – Latest Insights on AI, Robotics, Crypto and Tech InnovationsRoboNewsWire – Latest Insights on AI, Robotics, Crypto and Tech Innovations
Thursday, May 7
  • Home
  • AI
  • Crypto
  • Cybersecurity
  • IT
  • Energy
  • Robotics
  • TechCrunch
  • Technology
RoboNewsWire – Latest Insights on AI, Robotics, Crypto and Tech Innovations
Home » AI browsers are a significant security threat

AI browsers are a significant security threat

GTBy GTNovember 3, 2025 AI No Comments5 Mins Read
Share
Facebook Twitter LinkedIn Pinterest Email


Among the explosion of AI systems, AI web browsers such as Fellou and Comet from Perplexity have begun to make appearances on the corporate desktop. Such applications are described as the next evolution of the humble browser, and come with AI features built in; they can read and summarise web pages – and, at their most advanced – act on web content autonomously.

In theory, at least, the promise of an AI browser is that it will speed up digital workflows, undertake online research, and retrieve information from internal sources and the wider internet.

However, security research teams are concluding that AI browsers introduce serious risks into the enterprise that simply can’t be ignored.

The problem lies in the fact that AI browsers are highly vulnerable to indirect prompt injection attacks. These are where the model in the browser (or accessed via the browser) receives instructions hidden in specially-crafted websites. By embedding text into web pages or images in ways humans find difficult to discren, AI models can be fed instructions in the form of AI prompts, or amendments to prompts that are input by the user.

The bottom line for IT departments and decision-makers is that AI browsers are not yet suitable for use in the enterprise, and represent a significant security threat.

Automation meets exposure

In tests, researchers discovered that embedded text in online content is processed by the AI browser and is interpreted as instructions to the smart model. These instructions can be executed using the user’s privileges, so the greater the degree of access to information that the user has, the greater the risk to the organisation. The autonomy that AI gives users is the same mechanism that magnifies the attack surface, and the more autonomy, the greater the potential scope for data loss.

For example, it’s possible to embed text commands into an image that, when displayed in the browser, could trigger an AI assistant to interact with sensitive assets, like corporate email, or online banking dashboards. Another test showed how an AI assistant’s prompt can be hijacked and made to perform unauthorised actions on the behalf of the user.

These types of vulnerabilities clearly go against all principles of data governance, and are the most obvious example of how ‘shadow AI’ in the form of an unauthorised browser, poses a real threat to an organisation’s data. The AI model acts as a bridge between domains, and circumvents same-origin policies – the rule that prevents the access of data from one domain by another.

Implementation and governance challenges

The root of the problem is the merging of user queries in the browser with live data accessed on the web. If the LLM can’t distinguish between safe and malicious input, then it can blithely access data not requested by its human operator and act on it. When given agentic abilities, the consequences can be far-reaching, and could easily cause a cascade of malicious activity across the enterprise.

For any organisation that relies on data segmentation and access control, a compromised AI layer in a user’s browser can circumvent firewalls, enact token exchanges, and use secure cookies in exactly the same way that a user might. Effectively, the AI browser becomes an insider threat, with access to all the data and facility of its human operator. The browser user will not necessarily be aware of activity ‘under the hood,’ so an infected browser may act for significant periods of time without detection.

Threat mitigation

The first generation of AI browsers should be regarded by IT teams in the same way they treat unauthorised installation of third-party software. While it is relatively easy to prevent specific software being installed by users, it’s worth noting that mainstream browsers such as Chrome and Edge are shipping with increased numbers of AI features in the form of Gemini (in Chrome) and Copilot (in Edge). The browser-producing companies are actively exploring AI-augmented browsing capabilities, and agentic features (that grant significant autonomy to the browser) will be quick to appear, driven by the need for competitive advantage between browser companies.

Without proper oversight and controls, organisations are opening themselves to significant risk. Future generations of browsers should be checked for the following features:

Prompt isolation, separating user intent from third-party web content before LLM prompt generation.Gated permissions. AI agents should not be able to execute autonomous actions, including navigation, data retrieval, or file access without explicit user confirmation.Sandboxing of sensitive browsing (like HR, finance, internal dashboards, etc.) so there is no AI activity in these sensitive areas.Governance integration. Browser-based AI has to align with data security policies, and the software should provide records to make agentic actions traceable.

To date, no browser vendor has presented a smart browser with the ability to distinguish between user-driven intent, and model-interpreted commands. Without this, browsers may be coerced to act against the organisation by the use of relatively trivial prompt injection.

Decision-maker takeaway

Agentic AI browsers are presented as the next logical evolution in web browsing and automation in the workplace. They are designed deliberately to blur the distinction between user/human activity and become part of interactions with the enterprise’s digital assets. Given the ease with which the LLMs in AI browsers are circumvented and corrupted, the current generation of AI browsers can be regarded as dormant malware.

The major browser vendors look set to embed AI (with or without agentic abilities) into future generations of their platforms, so careful monitoring of each release should be undertaken to ensure security oversight.

(Image source: “Unexploded bomb!” by hugh llewelyn is licensed under CC BY-SA 2.0.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.



Source link

GT
  • Website

Keep Reading

Enterprise users swap AI pilots for deep integrations

Google, Sony Innovation Fund, and Okta back Resemble AI deepfake detection plan

Platform corrects AI algorithmic bias for eKYC

What ByteDance’s Launch Means for Enterprise

UK and Germany plan to commercialise quantum supercomputing

Frontier AI agents replace chatbots

Add A Comment
Leave A Reply Cancel Reply

Editors Picks

Investors trust Google more than Meta when comes to spending on AI

April 30, 2026

Google launches training and inference TPUs in latest shot at Nvidia

April 27, 2026

Meta tracks employee usage on Google, LinkedIn AI training project

April 25, 2026

Meta will cut 10% of workforce as company pushes deeper into AI

April 24, 2026
Latest Posts

Malicious Chrome Extension Steal ChatGPT and DeepSeek Conversations from 900K Users

April 1, 2026

Top 10 Best Server Monitoring Tools

April 1, 2026

10 Best Cybersecurity Risk Management Tools

March 31, 2026

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Welcome to RoboNewsWire, your trusted source for cutting-edge news and insights in the world of technology. We are dedicated to providing timely and accurate information on the most important trends shaping the future across multiple sectors. Our mission is to keep you informed and ahead of the curve with deep dives, expert analysis, and the latest updates in key industries that are transforming the world.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Facebook X (Twitter) Instagram
  • Home
  • About Us
  • Advertise
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2026 Robonewswire. Designed by robonewswire.

Type above and press Enter to search. Press Esc to cancel.