
FBI says Palm Springs bombing suspects used AI chat program

By GT · June 4, 2025 · IT · 3 Min Read


Debris is scattered across the street after what the mayor described as a bomb exploded near a reproductive health facility in Palm Springs, California, on May 17, 2025, in a still image from video.

ABC affiliate KABC | via Reuters

Two men suspected in last month’s bombing of a Palm Springs fertility clinic used a generative artificial intelligence chat program to help plan the attack, federal authorities said Wednesday.

Records from an AI chat application show Guy Edward Bartkus, the primary suspect in the bombing, “researched how to make powerful explosions using ammonium nitrate and fuel,” authorities said.

Officials didn’t name the AI program used by Bartkus.

Law enforcement authorities in New York City on Tuesday arrested Daniel Park, a Washington man who is suspected of helping to provide large amounts of chemicals used by Bartkus in a car bomb that damaged the fertility clinic.

Bartkus died in the blast, and four others were injured.

The FBI said in a criminal complaint against Park that Bartkus allegedly used his phone to look up information about “explosives, diesel, gasoline mixtures and detonation velocity,” NBC News reported.

It marks the second case this year of law enforcement pointing to the use of AI in assisting with a bombing or attempted bombing. In January, officials said a soldier who detonated a Tesla Cybertruck outside the Trump Hotel in Las Vegas used generative AI, including ChatGPT, to help plan the attack.

The soldier, Matthew Livelsberger, used ChatGPT to look up information about how he could put together an explosive and the speed at which certain rounds of ammunition would travel, among other things, according to law enforcement officials.

In response to the Las Vegas incident, OpenAI said it was saddened by the revelation its technology was used to plot the attack and that it was “committed to seeing AI tools used responsibly.”

The use of generative AI has soared in recent years with the rise of chatbots such as OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini. That's spurred a flurry of development around consumer-facing AI services.

But in the race to stay competitive, tech companies are taking a growing number of shortcuts around the safety testing of their AI models before they’re released to the public, CNBC reported last month.

OpenAI last month unveiled a new “safety evaluations hub” to display AI models’ safety results and how they perform on tests for hallucinations, jailbreaks and harmful content, such as “hateful content or illicit advice.”

Anthropic last month added security measures to its Claude Opus 4 model to prevent it from being misused for weapons development.

AI chatbots have faced a host of issues caused by tampering and hallucinations since they gained mass appeal.

Last month, Elon Musk’s xAI chatbot Grok provided users with false claims about “white genocide” in South Africa, an error that the company later attributed to human manipulation.

In 2024, Google paused its Gemini AI image generation feature after users complained the tool generated historically inaccurate images of people of color.

WATCH: Anthropic’s Mike Krieger: Claude 4 ‘can now work for you much longer’

