The AI boom of 2023 has seen enormous numbers of people flocking to ChatGPT (and other, less infamous AIs) to see what all the fuss is about. The boom has also attracted a huge number of hackers, phishers and scammers, who are scaling up their operations with the help of the chatbots, and are now able to run their scams at far higher volumes than ever before.
This is already changing the face of cybercrime. Cybersecurity firms have noticed a huge rise in both the volume and sophistication of phishing attacks, and a thriving black market in stolen ChatGPT account credentials is letting hackers run up massive API bills on other people’s credit cards. More recently, the addition of multimodality - the ability to accept things other than text as prompts, such as images and code - has exposed flaws in chatbot safeguarding. Fortunately, there are simple steps you can take to limit your exposure!
One of ChatGPT’s most useful features is Custom Instructions, which lets you specify your preferences once for all chat interactions instead of having to repeat them in every chat. It’s a very useful time-saver, but if you’re at all concerned about your privacy, you should use it sparingly. The temptation is to include personal information in your Custom Instructions so that you get more accurate and relevant responses, but you pay for that convenience by sacrificing privacy and security. Even if you don’t use Custom Instructions at all, you should avoid entering personal information into your chatbot prompts, and you should be similarly wary of providing sensitive information about others, be it family, friends, or workplace matters. That said, if you really need the extra benefits that giving Custom Instructions some personal context provides, make sure to mix in some convenient lies: details that won’t adversely affect your outputs, but may throw off unwanted observers.
Similar caution should be employed when submitting images as part of your prompts. With the rapid rise of facial recognition technology, uploading any photograph of a human to the internet has become a major privacy concern, and feeding them into chatbots means you’re not just giving permission for their use in AI training (see below), but also possibly exposing them to public scrutiny. Speaking of which…
TRUST NO-ONE
Data security in the age of Large Language Models is about more than dodging the cybercriminals, of course; you also have to be aware of the enormous content hoovers sucking up data and feeding it into AIs. That includes OpenAI, whose privacy policy makes it very clear that anything you enter into their chatbots can and will be used “to provide, administer, maintain and/or analyze the Services”. This includes any files you upload, so if you’re using ChatGPT for work purposes, take care to anonymize the data first! Many major corporations have already banned employees from using chatbots for fear of seeing their sensitive commercial data appear in future LLM responses. The same is of course true for other AI providers; if you have any concerns about your privacy, take care to read the developing company’s privacy policy. Even then, be cautious: Google recently found themselves in hot water when chats with Google Bard were indexed by Google’s search bots, even though the company had expressly denied in their privacy policy that this would happen.
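If you do need to run work documents through a chatbot, even a rough scrubbing pass helps. Below is a minimal Python sketch, purely illustrative, that swaps obvious identifiers (emails, card numbers, phone numbers) for neutral placeholders before anything leaves your machine. The patterns and placeholder labels are assumptions rather than an exhaustive anonymizer, and names or addresses will still need a manual check.

```python
import re

# Illustrative patterns only - not an exhaustive anonymizer.
# Order matters: card numbers are scrubbed before the looser phone pattern.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[CARD]": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "[PHONE]": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with neutral placeholders."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    raw = "Contact Jane Doe at jane.doe@example.com or +44 7700 900123."
    print(scrub(raw))
    # -> "Contact Jane Doe at [EMAIL] or [PHONE]."
    # Note that the name is untouched: regexes won't catch everything.
```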
The issue of trust is also a much more pressing concern thanks to cybercriminals, who are increasingly moving away from phishing links and towards more complex scams that harness the power of LLMs. As a result, vigilance is key at all times, even when dealing with people you know; anybody can have their security compromised, and the hacker’s job is never easier than when they can exploit a target’s trusted contacts. Especially in these early months of the AI boom, when it is hard to predict what new AI-powered hacking may look like, it’s best to take a safety-first attitude wherever possible!
STAY VIGILANT
While the trend in cybercrime is moving away from crude techniques such as phishing links, they are likely to remain a substantial part of the hacker’s armory. One popular means of capturing user data that is unlikely to fall out of fashion is pharming, where the hacker directs the user to a spoofed version of the website they were trying to access, either by infiltrating the user’s device or (less commonly) by hacking the servers that direct internet traffic. Your first defense against these attacks is straightforward: double-check the URL of the website before you interact with it, and look for the padlock icon, which denotes the presence of an SSL certificate. This should confirm both that the webpage you’re on is genuine (SSL certificates can be spoofed, but it is not easy and must be done on the server side, so it is highly unlikely) and that your data is being encrypted. There are no guarantees, of course, but if you check your URL bar before diving into the chatbot waters, you’ll be well on your way to avoiding every shark.
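For the curious, the padlock check can also be done programmatically. The sketch below, a rough illustration only, uses Python’s standard ssl module to connect to a site and print who its certificate was issued to and when it expires; the hostname shown is just an example, and an invalid or mismatched certificate will raise an error rather than print anything.

```python
import socket
import ssl
from datetime import datetime, timezone

def check_certificate(hostname: str, port: int = 443) -> None:
    """Connect over TLS and print basic details of the site's certificate.

    ssl.create_default_context() verifies the certificate chain and the
    hostname for us; a bad certificate raises ssl.SSLError.
    """
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            # 'subject' is a tuple of tuples; flatten it into a dict.
            subject = dict(item for field in cert["subject"] for item in field)
            expires = datetime.fromtimestamp(
                ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
            )
            print(f"Issued to: {subject.get('commonName')}")
            print(f"Expires:   {expires:%Y-%m-%d}")

if __name__ == "__main__":
    check_certificate("chat.openai.com")  # example hostname only
```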
Another concern is “malvertising”: adverts that trick users into installing malware. Malwarebytes has detected malware being served as adverts on Bing Chat. Although this is uncommon, and Microsoft claims to have resolved the issue, it’s not a new development; the online advertising industry is notorious for slapdash monitoring of its ad inventory, and cybercriminals have been exploiting legitimate ad networks to distribute malware for decades. Treat all links provided by chatbots with caution, but especially those which are flagged as adverts!
USE THE API (OR THE APP)
All of the above advice pertains to ChatGPT’s website, but there are other ways to access OpenAI’s services. Using mobile apps is one way to reduce your risk, as they are far harder to infiltrate than ordinary webpages, but if you really want to close the door on that avenue of attack you should consider using an API for your chatbot interactions.
API stands for Application Programming Interface, and most major websites have one; APIs provide an alternative way of accessing services through code. Using the API to access ChatGPT is substantially more secure than any other method, as you are forming a direct connection between your device and OpenAI’s servers, without the messy app/web routing that gives hackers opportunities to tap into your comms. Using the API is also substantially more complicated than the web/app routes from a technical perspective, but if you currently lack the know-how, asking ChatGPT how it all works is a great way to get started!
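To give a sense of what that looks like in practice, here is a minimal Python sketch that calls OpenAI’s chat completions endpoint directly over HTTPS. The endpoint, model name, and response shape reflect the API as documented in 2023 and may well change, so treat this as an illustration and check OpenAI’s current API reference before relying on it.

```python
import os
import requests

# Minimal sketch of a direct call to OpenAI's chat completions endpoint.
# Endpoint, model name and response shape are as documented in 2023.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # never hard-code your key

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Explain what an API key is in one sentence."))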