Anthropic admits its AI is being used to conduct cybercrime
Anthropic’s agentic AI, Claude, has been “weaponized” in high-level cyberattacks, according to a new report published by the company. It claims to have successfully disrupted a cybercriminal whose “vibe hacking” extortion scheme targeted at least 17 organizations, including some in healthcare, emergency services and government.
Anthropic says the hacker attempted to extort some victims into paying six-figure ransoms to prevent their personal data from being made public, with an “unprecedented” reliance on AI assistance. The report claims that Claude Code, Anthropic’s agentic coding tool, was used to “automate reconnaissance, harvest victims’ credentials, and penetrate networks.” The AI was also used to make strategic decisions, advise on which data to target and even generate “visually alarming” ransom notes.
As well as sharing information about the attack with the relevant authorities, Anthropic says it banned the accounts in question after discovering the criminal activity, and has since developed an automated screening tool. It has also introduced a faster and more efficient detection method for similar cases in the future, but doesn’t specify how it works.
The report (which you can read in full here) also details Claude’s involvement in a fraudulent employment scheme in North Korea and in the development of AI-generated ransomware. The common thread across the three cases, according to Anthropic, is that the adaptive, self-learning nature of AI means cybercriminals now use it to carry out attacks, not just for advice on planning them. AI can also fill roles that once required a whole team of people, meaning technical skill is no longer the barrier to entry it once was.
Claude isn’t the only AI that has been put to nefarious ends. Last year, OpenAI said that its generative AI tools were being used by cybercriminal groups with ties to China and North Korea, with hackers employing them to debug code, research potential targets and draft phishing emails. OpenAI, whose technology Microsoft uses to power its own Copilot AI, said it had blocked the groups’ access to its systems.