WASHINGTON: A hacker has carried out one of the most extensive AI-driven cybercrime campaigns recorded to date, using an artificial intelligence chatbot to plan and execute a string of digital extortion schemes, NBC News reported.
Anthropic, the company behind the Claude chatbot, revealed in a report on Tuesday that an individual exploited the system to research, infiltrate and extort at least 17 organisations over a three-month period.
The case marks the first publicly known example of a hacker relying almost entirely on a leading AI tool to automate an entire cybercrime operation, from identifying vulnerable targets to drafting ransom notes.
According to the report, the hacker convinced Claude Code, a version of Anthropic’s chatbot designed for coding assistance, to locate weaknesses in companies’ systems and generate malicious software capable of extracting sensitive data.
The chatbot was then used to organise stolen files, analyse their contents and highlight the most valuable information, such as trade secrets, financial records and medical data.
It also examined financial documents to suggest realistic ransom demands in bitcoin and even produced draft extortion emails to be sent to the affected organisations.
Jacob Klein, Anthropic’s head of threat intelligence, said the campaign appeared to be the work of an individual based outside the United States and represented a determined attempt to evade safeguards.
While Anthropic declined to identify the 17 affected organisations, it confirmed they included a defence contractor, a financial institution and several health care providers, with stolen data ranging from Social Security numbers and bank details to patient records and restricted defence files.
The hacker’s demands ranged from US$75,000 (RM317,137) to US$500,000 (RM2.1 million), though it remains unclear how many companies paid or how much was collected in total.
Anthropic said it had since reinforced security measures within its systems and warned that as AI tools become more accessible, such misuse is likely to grow more frequent unless stronger oversight and safeguards are introduced.