Blog · 5 minutes read
October 25, 2023

Privacy and Security Risks of ChatGPT

AI technology is no longer something we only see in futuristic sci-fi movies; it is already a huge part of our everyday lives. Voice assistants like Alexa and Siri, and the facial recognition that unlocks our smartphones, are just a few examples of AI software in everyday use. And AI is evolving, fast. But as it revolutionizes our lives, we shouldn't ignore the dangers of AI.

The latest exciting development in AI is ChatGPT, the chatbot that has everyone talking. But not all of that dialogue is good. Below, we explore how ChatGPT can be, and already is being, used by cybercriminals, and the resulting privacy risks of ChatGPT.


What is ChatGPT?

ChatGPT is a prototype of a dialogue-based AI chatbot developed by OpenAI, an AI research and deployment company based in San Francisco.

The name ChatGPT stands for "Chat Generative Pre-trained Transformer," and the software is a natural language processing model. That means it understands natural human language and can generate human-like responses in written text. You can type in a query or command, and ChatGPT will respond in text within a matter of seconds.

Dubbed an alternative to Google Search, this transformational technology was made available to the public on November 30, 2022. Since then, it has been causing quite a stir across the internet: some good, some not so good, but we'll get into the dangers of AI and the privacy risks of ChatGPT a little later.

What's all the fuss about?

In the week following its release, ChatGPT amassed more than one million users. To put that into context, it took Facebook 10 months to hit that same milestone, and Twitter, 24 months. ChatGPT did it in just five days!

People have been flocking to the website to test out the AI's powers with all sorts of prompts: asking it to solve complicated maths problems, translate text, and write poetry, plays, essays, computer code, and song lyrics.

But what makes this AI software so different from all the rest?

Well, ChatGPT is considered to be the next generation of AI, having come leaps and bounds in its ability to mimic human responses in both detail and coherence. It has also been praised for its versatility, which opens up a whole new world of possibilities.

One Twitter user compiled an entire thread detailing a list of possible real-world uses for ChatGPT which included:

  • Generating code
  • Fitness tracking
  • Debugging code
  • Using it as a personal assistant
  • Creating marketing plans
  • Creating a virtual machine
  • Giving creative liberty
  • Developing plugins
  • Game development
  • Medical aid chatbot

But not everyone is so enthusiastic or positive about such AI developments. Some are concentrating on the dangers of AI and the impact it has.

ChatGPT isn't pleasing everyone

Many artists and creators argue that AI technology like ChatGPT is exploiting them as the technology scrapes their hard work as source material. With people using AI technology to win art competitions and produce illustrated children's books, it's no wonder creatives aren't exactly welcoming AI with open arms.

But it's not just the art world that is shouting about the dangers of AI.

ChatGPT has already been banned by a growing number of sites and organizations. School districts across the US, including Seattle, LA, and New York's public schools, have all banned the use of OpenAI's new "toy" for fear of students using it to cheat.

The coding forum Stack Overflow has also decided to ban users from sharing responses generated by ChatGPT, at least for the time being. The reasoning behind the temporary ban is that ChatGPT has a knack for producing answers that look very plausible but can be completely incorrect. The aim is to stop the site from being overrun by inaccurate responses that could cause confusion and end up damaging the site's reputation.

The DeFi bug-bounty platform Immunefi has taken a similar view, recently banning 15 users for submitting ChatGPT-generated reports.


But providing students with an easy ride or wasting time with plausible but totally incorrect responses aren't the only dangers of AI. There are darker uses of ChatGPT that can cause longer-lasting and serious damage.

What are the cybersecurity and privacy risks of ChatGPT?

As with any technology, AI can be used by those with malicious intent, and cybercriminals are already finding ways to harness the power of ChatGPT for their own dark deeds.

A research paper entitled "The security threat of AI-powered cyberattacks" by Traficom predicts that over the next five years, AI-enabled attacks will become more widespread.

Just as with any industry or “career path”, there are different types of hackers with varying degrees of competence. AI technology, such as ChatGPT, gives lesser-skilled cybercriminals access to more sophisticated means of attack.

They don't need to know how to write malicious code or convincing phishing emails; ChatGPT will do it for them, opening up a whole new set of privacy risks. Let's look at the dangers of AI brought to us courtesy of ChatGPT.

ChatGPT used to write malicious code

ChatGPT could allow hackers with no development or coding skills to create and deploy malware. Even though OpenAI's terms of service ban the use of its software for such a purpose, it is apparently already happening...

Just a month after the release of ChatGPT, a hacker posted a thread on a popular underground hacking forum declaring that they were experimenting with OpenAI's software to recreate malware strains and techniques. Another hacker shared a Python script, the first script they had ever created, and as the thread went on, confessed that they had written it with the help of ChatGPT.

ChatGPT can enhance phishing emails

Many phishing emails are written by hackers working outside their first language, and one of the tell-tale signs of a phishing email is poor grammar, spelling, or syntax. Paying attention to such details is one of the top tips for spotting phishing emails, but ChatGPT could render that tip useless.

Security researchers released a study on how ChatGPT could be (if it hasn't already been) put to use to produce phishing emails with impeccable syntax, spelling, and grammar.
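To see why that matters, here is a toy sketch (our own illustration, not a real spam filter) of the kind of grammar-and-urgency heuristic that traditional phishing advice relies on. The word lists and scoring below are invented purely for demonstration; the point is simply that AI-polished text carries none of these signals.

```python
# Toy illustration only: a crude rule-based "phishing score" built on the
# classic red flags (misspellings and urgency phrases). Real filters are far
# more sophisticated, but the weakness shown here is the same.

COMMON_MISSPELLINGS = {"recieve", "acount", "verifiy", "passwrd", "securty"}
URGENCY_PHRASES = {"act now", "immediately", "within 24 hours"}

def phishing_score(email_text: str) -> int:
    """Return a crude risk score: +2 per known misspelling, +1 per urgency phrase."""
    text = email_text.lower()
    words = set(text.replace(",", " ").replace(".", " ").split())
    score = 2 * len(words & COMMON_MISSPELLINGS)
    score += sum(1 for phrase in URGENCY_PHRASES if phrase in text)
    return score

sloppy = "Please verifiy your acount immediately or it will be closed."
polished = "Could you review the attached document before our meeting?"

print(phishing_score(sloppy))    # misspellings and urgency push the score up
print(phishing_score(polished))  # grammatically clean text scores zero
```

A sloppy, human-written phish trips the heuristic, while a grammatically flawless AI-style email sails through with a score of zero: the very red flags users are told to watch for simply vanish.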

The fact that ChatGPT can also translate or write in any language, and mimic any writing style or tone requested will certainly be of interest to cybercriminals. They can also request ChatGPT to rewrite an email to make it more professional, casual, friendly, urgent, or convincing.

OpenAI has tried to build in some limitations. A threat actor who directly asks ChatGPT to write a phishing email will get a warning message that the requested content is not appropriate.

While ChatGPT may be the AI technology in the limelight, it's not the only one, and there is no guarantee that every generative AI will have this security feature.

Plus, with clever questioning or commands, a cybercriminal can easily manipulate ChatGPT into doing what they want. For example, all they have to do is ask it to write a business-style email to a colleague asking them to review an attached document. In fact, we tried it: ChatGPT still gave a warning, but it wrote the email.

It is very easy to see how ChatGPT could be used to enhance all sorts of social engineering scams, from business email compromise (BEC) to spear phishing, angler phishing, and much more, all free from red flags or any hint of malicious intent. With these dangers of AI on the rise, individuals stand to lose their personal information to even more cybercriminals, adding to the privacy risks of ChatGPT.

ChatGPT raises data security concerns

Adding to the privacy risks of ChatGPT is the fact that to use it, you have to register for an account with an email and password, and then provide your full name and a mobile phone number, which, by the way, can't be a VoIP number.

As with any online service you sign up for and hand personal details over to, there are privacy and security concerns. When you sign up for an account with ChatGPT, you get a pop-up informing you that "conversations may be reviewed by our AI trainers to improve our systems," as well as a helpful reminder not to share any sensitive data.

You can check out the OpenAI privacy policy to see the type of information it gathers, which includes your IP address, browser type and settings, the date and time of your visits, the site features you use, your time zone, your country, and device information such as the type of computer or mobile device and its operating system.

And although OpenAI says it does not sell your personal data, its policy states that it might share it:

“In certain circumstances, we may share your Personal Information with third parties without further notice to you.”

With users' recorded conversations and stored personal data, OpenAI's servers could be a treasure trove of identifiable information just waiting to be scooped up by cybercriminals. Who knows, hackers might even get into the ChatGPT servers using malicious code written by ChatGPT itself! Users just have to hope that OpenAI has spent as much time and care on protecting its servers as it has on developing the AI.

Summing up ChatGPT

It's very easy to get caught up in the exciting developments within AI technology, but we can't ignore the new dangers of AI and the privacy risks of ChatGPT.

To mark the importance of privacy, we asked ChatGPT to write a poem about privacy in the style of the Scottish bard, Robert Burns:

Of course, chatbots aren't the only AI technology posing privacy threats. Check out the 5 Major Problems With Facial Recognition Technology.

Ruby M
Hoody Editorial Team

Ruby is a full-time writer covering everything from tech innovations to SaaS, Web 3, and blockchain technology. She is now turning her virtual pen to the world of data privacy and online anonymity.

