News

Security Stop-Press: AI Chatbots Are Linking Users to Scam Sites

Chatbots powered by large language models (LLMs) are giving out fake or incorrect login URLs, exposing users to phishing risks, according to research from cybersecurity firm Netcraft.

In tests of GPT-4.1-family models, including those powering Perplexity and Microsoft Copilot, only 66 per cent of the login links provided were correct. The rest pointed to inactive, unrelated, or unclaimed domains that scammers could exploit. In one case, Perplexity recommended a phishing site posing as Wells Fargo’s login page.

Smaller brands were more likely to be misrepresented because they appear less often in AI training data. Netcraft also found more than 17,000 AI-generated phishing pages already targeting users.

To stay safe, businesses should avoid relying on AI for login links, train staff to recognise phishing attempts, and push for stronger safeguards from AI providers.
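One way to put the "don't trust AI-supplied login links" advice into practice is to check any suggested URL against a maintained allowlist of known-good login domains before following it. The sketch below illustrates the idea; the domain names are examples only, not an endorsed list.

```python
from urllib.parse import urlparse

# Illustrative allowlist of known-good login domains.
# In practice this would be maintained by your IT team.
KNOWN_LOGIN_DOMAINS = {
    "wellsfargo.com",
    "microsoftonline.com",
}

def is_trusted_login_url(url: str) -> bool:
    """Return True only if the URL's host is an allowlisted
    domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(
        host == domain or host.endswith("." + domain)
        for domain in KNOWN_LOGIN_DOMAINS
    )

# A look-alike phishing domain fails the check:
print(is_trusted_login_url("https://wellsfargo.secure-login.example"))  # False
# A genuine subdomain passes:
print(is_trusted_login_url("https://connect.secure.wellsfargo.com/login"))  # True
```

The key detail is matching on the registered domain rather than searching for the brand name anywhere in the URL, since phishing sites routinely embed the brand in a subdomain or path of a domain they control.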
