
Phishing Scams Linked To AI Chatbots: A Growing Threat To Users And Brands

Netcraft's report reveals alarming phishing scams linked to AI chatbots, particularly affecting smaller brands. Experts advise caution and recommend verifying URLs to mitigate risks.

Cybersecurity firm Netcraft has highlighted the growing threat of phishing scams linked to AI chatbots such as ChatGPT and Perplexity. These tools, widely trusted for quick answers, are prone to hallucinations, in which the AI generates false or misleading content, including incorrect URLs that can lead users to harmful websites.

The study examined OpenAI's GPT-4.1 models by requesting login links for 50 prominent brands in sectors such as finance, retail, tech, and utilities. The chatbot provided accurate URLs 66% of the time but generated incorrect ones in 34% of cases. These erroneous links could be exploited by cybercriminals for large-scale phishing scams, redirecting users to fake sites that mimic legitimate ones.

Phishing Risks and Smaller Brands

Smaller brands face a greater risk due to their underrepresentation in AI training datasets, leading to more frequent misrepresentation. Cyber attackers can exploit this by registering the fake URLs generated by chatbots, turning them into active phishing traps.

A real-world incident mentioned in the report involved Perplexity AI suggesting a phishing site when asked for Wells Fargo's official URL. This example shows how easily users can be deceived when relying solely on AI-generated data.

Cryptocurrency Sector Under Siege

The report also uncovers a broader campaign targeting the cryptocurrency industry. Over 17,000 phishing pages hosted on GitBook have been identified. These pages disguise themselves as product documentation or support hubs and are crafted to appeal to both AI systems and users with their clean design and fast loading times.

Netcraft discovered an elaborate scheme to 'poison' AI coding assistants. In one instance, attackers created a fake Solana API, tricking developers into integrating it into their projects. This redirected crypto transactions to the attackers' wallet. Another fraudulent project called Moonshot-Volume-Bot was promoted through blog posts, Q&A forums, and GitHub repositories. The aim was to get indexed by AI training models so that coding assistants would unknowingly recommend the malicious tool to developers.
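One practical defense against this kind of poisoning is to refuse any AI-suggested endpoint that does not appear in the project's own documentation. The sketch below illustrates the idea in Python; the allowlisted host `api.mainnet-beta.solana.com` is used only as an assumed example of a documented endpoint, and any real check should be built from endpoints you have verified yourself.

```python
from urllib.parse import urlparse

# Assumed example of an officially documented RPC host; confirm against
# the project's own documentation before relying on it.
DOCUMENTED_ENDPOINTS = {"api.mainnet-beta.solana.com"}

def vet_endpoint(suggested_url: str) -> str:
    """Raise if an AI-suggested URL points at an undocumented host."""
    host = (urlparse(suggested_url).hostname or "").lower()
    if host not in DOCUMENTED_ENDPOINTS:
        raise ValueError(f"Undocumented RPC host: {host!r} - verify before use")
    return suggested_url

# A documented host passes; anything else is rejected outright.
vet_endpoint("https://api.mainnet-beta.solana.com")
```

An exact-match allowlist is deliberately strict here: a lookalike host that merely contains the brand name, the pattern attackers used in this campaign, will not pass.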

Expert Warnings on Blind Trust

With millions using AI chatbots daily, experts warn against blindly trusting AI-generated content, especially links. They advise verifying URLs and information through official sources to avoid potential dangers.
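That verification step can be partly automated. The minimal Python sketch below checks whether a chatbot-suggested link belongs to a known official domain (or a subdomain of one); the domains listed are illustrative assumptions, and in practice the allowlist should come from a vetted source such as a saved bookmark or the brand's printed materials.

```python
from urllib.parse import urlparse

# Illustrative allowlist; populate from sources you trust, not from an AI.
OFFICIAL_DOMAINS = {"wellsfargo.com", "github.com"}

def is_official(url: str) -> bool:
    """True only if the URL's host is an allowlisted domain or a
    subdomain of one (e.g. 'connect.secure.wellsfargo.com')."""
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(is_official("https://connect.secure.wellsfargo.com/login"))  # True
print(is_official("https://wellsfargo.com.example-login.net"))     # False
```

Note that the lookalike URL fails the check even though it contains the brand name: matching on the registered domain, not on substrings, is what defeats the hallucinated-URL trick.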
