Your weekly cybersecurity roundup covering the latest threats, exploits, vulnerabilities, and security news you need to know.
Millions installed 'productivity' Chrome extensions that became malware after acquisition. Here's how browser extensions became enterprise security's weakest link.
Don’t act surprised when your AI agent starts printing millions of pages of cabbages, deletes an entire system partition, or sends your life savings to fraudsters – they’re just being helpful.
A compromised Chrome extension with 7,000 users was updated to deploy malware, strip security headers, and steal cryptocurrency wallet seed phrases.
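The header-stripping tactic mentioned above can be screened for defensively by checking responses for commonly expected security headers. A minimal sketch, assuming response headers arrive as a plain dict; the header list is illustrative, not taken from the incident report:

```python
# Hypothetical defensive check: flag HTTP responses missing common
# security headers (e.g. after a malicious extension has stripped them).
# The expected-header list below is an illustrative baseline.

EXPECTED_SECURITY_HEADERS = [
    "content-security-policy",
    "strict-transport-security",
    "x-frame-options",
    "x-content-type-options",
]

def missing_security_headers(headers: dict[str, str]) -> list[str]:
    """Return the expected security headers absent from a response."""
    present = {name.lower() for name in headers}
    return [h for h in EXPECTED_SECURITY_HEADERS if h not in present]

# A response carrying only a CSP header would be flagged for the rest:
print(missing_security_headers({"Content-Security-Policy": "default-src 'self'"}))
# → ['strict-transport-security', 'x-frame-options', 'x-content-type-options']
```

Comparing such a baseline against live responses is one cheap way to notice that an in-browser intermediary is silently removing protections.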
Until last month, if you wanted to steal local files from someone using Perplexity's Comet browser, you could just schedule ...
Biometric injection attacks are emerging as the key vulnerability in biometric remote identity verification and user authentication systems, making assurance that protections against them are ...
Abstract: This paper addresses the attack detection problem for cyber-physical systems subject to false data injection attacks. A novel detection framework is developed for cyber-physical systems ...
Threat actors are abusing Pastebin comments to distribute a new ClickFix-style attack that tricks cryptocurrency users into executing malicious JavaScript in their browser, allowing attackers to ...
Google Translate can be tricked into generating dangerous content instead of translations through simple prompt injection attacks discovered this week that exploit its Gemini AI foundation. A Tumblr ...
Run a prompt injection attack against Claude Opus 4.6 in a constrained coding environment and it fails every time: a 0% success rate across 200 attempts, with no safeguards needed. Move that same attack to ...
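Worth keeping in mind when reading that 0% figure: zero observed successes in a finite sample does not prove a zero true rate. The standard "rule of three" (general statistics, not from the article) gives an approximate 95% upper confidence bound of 3/n on an event's probability when it is observed zero times in n trials:

```python
def rule_of_three_upper_bound(n: int) -> float:
    """Approximate 95% upper confidence bound on an event's probability
    when it was observed 0 times in n independent trials."""
    if n <= 0:
        raise ValueError("n must be positive")
    return 3.0 / n

# For the 200 attempts cited above, 0 successes still admits a true
# attack success rate of up to ~1.5% at 95% confidence.
print(rule_of_three_upper_bound(200))  # → 0.015
```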
Prompt injection attacks can manipulate AI behavior in ways that traditional cybersecurity ...
Bruce Schneier and Barath Raghavan explore why LLMs struggle with context and judgment and, consequently, are vulnerable to prompt injection attacks. These 'attacks' are cases where LLMs are tricked ...