ChatGPT is an AI-driven natural language processing tool that has rapidly amassed over 1 million users since its debut in November. The web-based chatbot has been used for a variety of tasks, such as generating wedding speeches, hip-hop lyrics, academic essays, and computer code. However, ChatGPT's capabilities have also set a number of industries on edge.

A New York school banned ChatGPT over fears that it could be used to cheat, copywriters are already being replaced, and reports claim Google is so alarmed by ChatGPT’s capabilities that it issued a “code red” to ensure the survival of the company’s search business. The cybersecurity industry is also taking notice of ChatGPT’s potential to be abused by hackers with limited resources and zero technical knowledge.

Many of the security experts TechCrunch spoke to believe that ChatGPT’s ability to write legitimate-sounding phishing emails — the top attack vector for ransomware — will see the chatbot widely embraced by cybercriminals, particularly those who are not native English speakers. Chester Wisniewski, a principal research scientist at Sophos, said it’s easy to see ChatGPT being abused for “all sorts of social engineering attacks” where the perpetrators want to appear to write in a more convincing American English.

Check Point also recently sounded the alarm over the chatbot's apparent ability to help cybercriminals write malicious code. The researchers say they witnessed at least three instances where hackers with no technical skills boasted about how they had leveraged ChatGPT's AI smarts for malicious purposes. One hacker on a dark web forum showcased code written by ChatGPT that allegedly stole files of interest, compressed them, and sent them across the web. Another user posted a Python script, which they claimed was the first script they had ever created. Check Point noted that while the code seemed benign, it could "easily be modified to encrypt someone's machine completely without any user interaction." The same forum user had previously sold access to hacked company servers and stolen data, Check Point said.

Dr. Suleyman Ozarslan, a security researcher and the co-founder of Picus Security, also recently demonstrated to TechCrunch how ChatGPT was used to write a World Cup–themed phishing lure and macOS-targeting ransomware code. Ozarslan asked the chatbot to write code in Swift, the programming language used for developing apps for Apple devices. The resulting code could find Microsoft Office documents on a MacBook, send them over an encrypted connection to a web server, and then encrypt the documents on the machine.

The idea that a chatbot could produce convincing text and realistic interactions isn't far-fetched. ChatGPT does have built-in guardrails designed to prevent the creation of malicious or harmful content, but researchers have shown these can be bypassed simply by rephrasing the request.

Some experts have moved to debunk concerns that an AI chatbot could turn wannabe hackers into full-fledged cybercriminals, noting that obtaining and deploying malware is only a small part of the work involved in running a criminal operation. However, it's not just security professionals who are conflicted about what role ChatGPT will play in the future of cybersecurity. ChatGPT itself stated that it's difficult to predict how it will be used in the future.