ChatGPT and Cybersecurity Threats

Millions of users were astounded by OpenAI’s groundbreaking AI language model, ChatGPT, when it was released in November 2022. For many, though, curiosity quickly gave way to genuine concern about the tool’s potential to advance the agendas of bad actors. Specifically, ChatGPT opens new avenues for hackers to potentially breach sophisticated cybersecurity defenses. For a sector already coping with a 38% global increase in data breaches in 2022, it is critical that leaders recognize the growing influence of AI and respond accordingly.

Before we can develop remedies, we must first identify the key threats that stem from ChatGPT’s widespread use. This article will examine these new risks, the training and tools cybersecurity professionals will need in order to respond, and the government oversight required to ensure that AI use doesn’t undermine cybersecurity efforts.

AI-Generated Phishing Scams

While earlier versions of language-based AI have been open-sourced (or made publicly available) for years, ChatGPT is by far the most sophisticated iteration to date. Its ability to converse with users without spelling, grammatical, or verb-tense errors makes it seem as though there could be a real person on the other side of the chat window. From a hacker’s standpoint, ChatGPT is a game changer.

According to the FBI’s 2021 Internet Crime Report, phishing is the most widespread IT threat in the United States. However, most phishing scams are easy to spot, since they’re riddled with misspellings, poor syntax, and generally awkward phrasing, especially those originating from countries where the perpetrator’s first language isn’t English. ChatGPT will afford hackers from all over the globe near-fluency in English to bolster their phishing campaigns.

For cybersecurity leaders, an increase in sophisticated phishing attacks requires immediate attention and actionable solutions. Leaders need to equip their IT teams with tools that can determine what’s ChatGPT-generated vs. what’s human-generated, geared specifically toward incoming “cold” emails. Fortunately, “ChatGPT Detector” technology already exists and is likely to advance alongside ChatGPT itself. Ideally, IT infrastructure would integrate AI detection software, automatically screening and flagging emails that are AI-generated. Additionally, it’s important for all employees to be routinely trained and re-trained on the latest cybersecurity awareness and prevention skills, with specific attention paid to AI-supported phishing scams. However, the onus is on both the sector and the wider public to continue advocating for advanced detection tools, rather than only fawning over AI’s expanding capabilities.
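
To make the “screen and flag” idea concrete, here is a minimal Python sketch of the kind of gatekeeping logic an IT team might wire into its mail pipeline. The Email class, the threshold, and the toy ai_generated_probability heuristic are illustrative assumptions; in a real deployment that function would wrap an actual AI-text detection model or commercial detector service.

```python
# Minimal sketch: auto-screen inbound "cold" emails for likely AI-generated text
# and flag them for human phishing review. The scoring function below is a crude
# placeholder so the example runs; swap in a real "ChatGPT detector" model/service.

from dataclasses import dataclass


@dataclass
class Email:
    sender: str
    subject: str
    body: str


FLAG_THRESHOLD = 0.8  # assumed cutoff; tune against your own false-positive tolerance


def ai_generated_probability(text: str) -> float:
    """Return a rough 0-1 score that the text is machine-written (placeholder heuristic)."""
    telltale_phrases = [
        "i hope this message finds you well",
        "as an ai language model",
        "please do not hesitate to reach out",
    ]
    hits = sum(phrase in text.lower() for phrase in telltale_phrases)
    return min(1.0, 0.4 * hits)


def screen_inbox(emails: list[Email]) -> list[Email]:
    """Return the emails that should be routed to the phishing-review queue."""
    return [e for e in emails if ai_generated_probability(e.body) >= FLAG_THRESHOLD]


if __name__ == "__main__":
    inbox = [
        Email(
            sender="unknown@vendor-example.com",
            subject="Invoice update",
            body=(
                "I hope this message finds you well. Please do not hesitate to "
                "reach out once you have reviewed the attached invoice."
            ),
        ),
    ]
    for email in screen_inbox(inbox):
        print(f"FLAGGED for review: {email.sender} / {email.subject}")
```

The design point is simply that detection runs automatically on arrival and a human still makes the final call; the detector itself can be swapped out as better tools emerge.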

Duping ChatGPT into Writing Malicious Code

ChatGPT is proficient at generating code and other computer programming tools, but the AI is programmed not to generate code that it deems to be malicious or intended for hacking purposes. If hacking code is requested, ChatGPT will inform the user that its purpose is to “assist with useful and ethical tasks while adhering to ethical guidelines and policies.”

However, manipulation of ChatGPT is certainly possible, and with enough creative poking and prodding, bad actors may be able to trick the AI into generating hacking code. In fact, hackers are already scheming to this end.

For example, Israeli security firm Check Point recently discovered a thread on a well-known underground hacking forum from a hacker who claimed to be testing the chatbot to recreate malware strains. If one such thread has already been discovered, it’s safe to say there are many more out there, on both the worldwide and “dark” webs. Cybersecurity pros need the proper training (i.e., continuous upskilling) and resources to respond to ever-growing threats, AI-generated or otherwise.

There’s also the opportunity to equip cybersecurity professionals with AI technology of their own to better spot and defend against AI-generated hacker code. While public discourse is first to lament the power ChatGPT provides to bad actors, it’s important to remember that this same power is equally available to good actors. In addition to trying to prevent ChatGPT-related threats, cybersecurity training should also include instruction on how ChatGPT can be an important tool in the cybersecurity professionals’ arsenal. As this rapid technology evolution creates a new era of cybersecurity threats, we must examine these possibilities and create new training to keep up. Moreover, software developers should look to develop generative AI that’s potentially even more powerful than ChatGPT and designed specifically for human-filled Security Operations Centers (SOCs).
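
As one illustration of that defensive use, here is a minimal sketch of how a SOC team might lean on a general-purpose LLM API to pre-triage raw alerts into analyst-friendly summaries. The prompt, the alert format, and the model name are assumptions rather than a prescribed integration; the sketch uses the openai Python package (v1+) and expects an OPENAI_API_KEY in the environment.

```python
# Minimal sketch: LLM-assisted alert triage for a human-staffed SOC.
# Assumptions: `openai` package v1+, OPENAI_API_KEY set in the environment,
# and a model name your organization actually has access to.

import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def triage_alert(alert: dict) -> str:
    """Ask the model for a short, analyst-facing summary plus a suggested severity."""
    prompt = (
        "You are assisting a security operations center. Summarize the alert below "
        "in three sentences, then suggest a severity (low/medium/high) with one reason.\n\n"
        + json.dumps(alert, indent=2)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your approved model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep triage output terse and repeatable
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    sample_alert = {
        "source": "EDR",
        "host": "finance-ws-12",
        "detail": (
            "powershell.exe spawned by winword.exe with an outbound "
            "connection to an unfamiliar domain"
        ),
    }
    print(triage_alert(sample_alert))
```

The vendor and model matter less than the pattern: the model drafts the summary and the analyst keeps the decision, which is what makes this an aid to a human-filled SOC rather than a replacement for it.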

Regulating AI Usage and Capabilities

While there has been much talk about bad actors using AI to help them hack other software, far less attention has been paid to the possibility of ChatGPT itself being hacked. Were that to happen, unscrupulous actors could spread disinformation from a source that is normally perceived as, and intended to be, unbiased.

ChatGPT has reportedly taken measures to identify and avoid answering politically inflammatory questions. However, if the AI were hacked and manipulated to provide information that appears impartial but is in fact biased or skewed, it could become a dangerous propaganda machine. The capacity of a compromised ChatGPT to spread disinformation could be alarming, and may necessitate expanded government oversight of advanced AI tools and of companies such as OpenAI.

The Biden administration has released a “Blueprint for an AI Bill of Rights,” but with the debut of ChatGPT, the stakes are higher than ever. Specifically, we need oversight to ensure that OpenAI and other companies offering generative AI products regularly review their security safeguards to reduce the risk of their being hacked. Furthermore, new AI models should be required to meet a threshold of minimum security measures before being open-sourced. Bing, for example, introduced its own generative AI in early March, and Meta is finalizing a powerful tool of its own, with more to come from other tech giants.

People are marveling at — and cybersecurity professionals are debating — the promise of ChatGPT and the growing generative AI sector, but checks and balances are required to ensure the technology doesn’t become unwieldy. Beyond cybersecurity leaders retraining and reequipping their employees and the government taking on a stronger regulatory role, a broader shift in our thinking about and attitude toward AI is also needed.

We need to rethink what the underlying platform for AI looks like, especially for publicly available examples like ChatGPT. Before making a technology available to the public, developers must ask whether its capabilities are ethical. Is the new tool built on a “programmatic core” that forbids manipulation? How do we establish standards that require this, and how do we hold developers accountable when they fail to meet them?

Organizations have instituted technology-agnostic guidelines to ensure that exchanges across many platforms — from edtech to blockchains and even digital wallets — are safe and ethical. It is vital that the same principles be applied to generative AI.

ChatGPT chatter is at an all-time high, and as the technology advances, it is imperative that technology leaders begin thinking about what it means for their team, their company, and society as a whole. If they don’t, they won’t only fall behind their competitors in adopting and deploying generative AI to improve business outcomes, they’ll also fail to anticipate and defend against the next generation of hackers who can already manipulate this technology for personal gain. With reputations and revenue on the line, the industry must come together to put the right protections in place and make the ChatGPT revolution something to welcome, not fear.