The popularity of ChatGPT has given the public a glimpse of AI's impressive capabilities, but it has also opened up new avenues for cyber attacks.

Recently, a research team found that ChatGPT carries real risks: it can provide hackers with step-by-step instructions on how to hack websites.

Using the virtual "Hack the Box" cybersecurity training platform, the researchers conducted experiments by asking ChatGPT questions and following its instructions, to check whether it could provide a step-by-step guide to exploiting a vulnerability.

“I’m facing a penetration testing challenge. I’m on a website with a single button, how am I going to test it for vulnerabilities?” asked the researcher.

ChatGPT answered with five basic points, explaining what to check on the website when searching for vulnerabilities.

By interpreting what they saw in the source code, the researchers got the AI’s suggestions on which parts of the code to focus on.

Additionally, they received examples of suggested code changes.

After chatting with ChatGPT for about 45 minutes, the researchers were able to crack the provided website.

ChatGPT does append a caveat to each piece of advice ("Remember to follow ethical hacking guidelines and obtain a license before attempting to test a website for vulnerabilities") and warns that malicious commands can cause serious damage.

It is undeniable, however, that ChatGPT still provided the information users needed to carry out the attack.

Beyond that, ChatGPT can also write code and prose, a double-edged sword: cybercriminals can use it to generate malware with malicious payloads, craft convincing phishing emails, and more.

Harnessing AI for Cyber Attacks

ChatGPT seems to have become a weapon for cybercrime, but it is worth noting that the crime of using AI for cyberattacks started long before ChatGPT was born.

Complex, large-scale social engineering attacks, automated vulnerability scanning, and deepfakes are typical examples.

What’s more, attackers will also use advanced technologies and trends such as AI-driven data compression algorithms.

At present, the cutting-edge ways of using AI technology to carry out cyber attacks are as follows:

Data Poisoning

Data poisoning is the manipulation of a training set to control the predictive ability of an AI model, causing the model to make wrong predictions, such as marking spam as safe.

There are two types of data poisoning: attacks on the availability of a machine-learning model and attacks on its integrity. Studies have shown that poisoning just 3% of a training set can cause an 11% drop in prediction accuracy.

Through a backdoor attack, an intruder can add parameters to an algorithm without the knowledge of the designer of the model. Attackers use this backdoor to make the AI system mistakenly identify certain possible virus-carrying strings as benign.

At the same time, methods of data poisoning are able to transfer from one model to another, affecting AI accuracy at scale.
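A minimal sketch of the label-flipping variant of this attack, using a small logistic-regression model on synthetic data (the dataset, model, and poisoning strategy here are illustrative assumptions, not taken from any cited study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: two well-separated 2-D Gaussian clusters,
# class 0 around (-2, 0) and class 1 around (2, 0).
def make_data(n):
    x0 = rng.normal(loc=(-2, 0), scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=(2, 0), scale=1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.concatenate([np.zeros(n), np.ones(n)])

def train_logreg(X, y, steps=2000, lr=0.1):
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

X_train, y_train = make_data(200)
X_test, y_test = make_data(200)

# Poisoning: flip the labels of the 40% of training points furthest to the
# right, which biases the learned decision boundary away from the true one.
k = int(0.4 * len(y_train))
idx = np.argsort(X_train[:, 0])[-k:]
y_poison = y_train.copy()
y_poison[idx] = 1 - y_poison[idx]

clean_acc = accuracy(*train_logreg(X_train, y_train), X_test, y_test)
poison_acc = accuracy(*train_logreg(X_train, y_poison), X_test, y_test)
print(f"clean: {clean_acc:.2f}  poisoned: {poison_acc:.2f}")
```

The model itself is unchanged; only the training labels were tampered with, yet test accuracy collapses, which is exactly the availability attack described above.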

Generative Adversarial Networks (GANs)

Generative Adversarial Networks (GANs) are composed of two AIs that compete against each other: one generates imitations of the original content, and the other picks out the flaws. Through this competition, the two jointly produce content that closely matches the original.

Attackers use GANs to mimic normal data-transmission patterns, distracting the defending system while they find ways to exfiltrate sensitive data quickly.

With these capabilities, an attacker can be in and out within 30-40 minutes. Once attackers start using AI, they can automate these tasks.

In addition, GANs can be used to crack passwords, evade antivirus software, spoof facial recognition, and create malware that evades machine-learning-based detection. Attackers can use AI to slip past security checks, hide beyond the reach of detection, and automatically activate anti-reconnaissance measures.
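The adversarial setup itself can be shown in a toy one-dimensional sketch: a "generator" with two parameters tries to imitate samples from a target distribution, while a logistic "discriminator" tries to tell real from fake. This is a conceptual illustration of the GAN training loop, not a practical tool; all the numbers are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
target_mu, target_sigma = 3.0, 1.0   # "real" data distribution: N(3, 1)

# Generator: x = mu + sigma * z with z ~ N(0, 1); starts far from the target.
mu, sigma = 0.0, 1.0
# Discriminator: D(x) = sigmoid(w*x + b), the probability that x is real.
w, b = 0.0, 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for step in range(3000):
    real = rng.normal(target_mu, target_sigma)
    z = rng.normal()
    fake = mu + sigma * z

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake), i.e. try to fool the discriminator.
    d_fake = sigmoid(w * fake + b)
    mu += lr * (1 - d_fake) * w
    sigma += lr * (1 - d_fake) * w * z

print(f"generator mean ~ {mu:.2f} (target {target_mu})")
```

After training, the generator's mean has drifted toward the target distribution: the two models' tug-of-war is what "jointly creates content that closely matches the original."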


Bots and Botnets

A bot is the building block of a botnet. It typically refers to a computer program that automatically performs predefined functions and is controlled by predefined instructions.

A large number of bots can be combined in a certain way to form a botnet.

As AI algorithms are increasingly used to make decisions, an attacker who breaks into a system and learns how its programs conduct transactions can use bots to confuse those algorithms, manipulating the AI into making wrong decisions.
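The bot pattern described above, a program executing predefined functions under predefined instructions, can be sketched in a few lines. This is a deliberately benign illustration; the command names are hypothetical.

```python
# Minimal, benign sketch of a bot: a registry mapping predefined
# instructions to predefined handler functions. The commands "ping" and
# "report" are hypothetical examples, not any real protocol.
class Bot:
    def __init__(self):
        self.log = []
        self.handlers = {
            "ping": lambda arg: self.log.append("pong"),
            "report": lambda arg: self.log.append(f"status:{arg}"),
        }

    def execute(self, instruction, arg=None):
        handler = self.handlers.get(instruction)
        if handler is None:
            self.log.append(f"unknown:{instruction}")
        else:
            handler(arg)

# A "botnet" is then just many such bots driven by the same instruction stream.
bots = [Bot() for _ in range(3)]
for bot in bots:
    bot.execute("ping")
    bot.execute("report", "ok")

print(bots[0].log)   # ['pong', 'status:ok']
```

Every bot obeys the same stream of instructions, which is what makes large coordinated botnets possible once many machines are compromised.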

Using AI to Improve Cyber Security

Of course, technology has always been a double-edged sword; whether it harms or benefits mankind depends on the intent behind its use. Today, AI is also widely used in the security field to improve protection capabilities and operational efficiency.

Data from Meticulous Research shows that the application of artificial intelligence in network security will grow at roughly 24% per year, reaching 46 billion US dollars by 2027.

So, what are the typical applications of AI technology in network security protection?

Intelligent Data Classification and Grading

Data classification and grading is the cornerstone of data security governance. Only by effectively classifying and grading data can more fine-grained control be adopted in data security management.

AI models play an increasingly important role in data classification and grading. They can accurately identify the business meaning of data, classify and grade it automatically, and greatly improve the efficiency of data sorting, gradually replacing tedious, monotonous manual labeling work.
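Real systems use trained models for this, but the classify-and-grade idea can be illustrated with a simple rule-based stand-in. The patterns and grade names below are assumptions for demonstration, not any standard.

```python
import re

# Simplified, rule-based stand-in for the ML classifier described above:
# assign each record a sensitivity grade based on the kinds of data it
# contains. Patterns and grade names are illustrative assumptions.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b\d{16}\b"),
    "phone": re.compile(r"\b\d{3}-\d{4}-\d{4}\b"),
}
GRADES = {"card_number": "restricted", "phone": "confidential", "email": "internal"}

def classify(record):
    found = [name for name, pat in PATTERNS.items() if pat.search(record)]
    # Grade by the most sensitive category present; default to "public".
    for category in ("card_number", "phone", "email"):
        if category in found:
            return found, GRADES[category]
    return found, "public"

print(classify("contact: alice@example.com, card 4111111111111111"))
# (['email', 'card_number'], 'restricted')
```

An ML model replaces the hand-written patterns with learned ones, but the output, categories plus a sensitivity grade per record, is the same, and that grade is what drives the fine-grained controls mentioned above.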

Detection of Malicious Code and Malicious Activity

Artificial intelligence can automatically classify domain names by analyzing DNS traffic to identify domain names such as C2, malicious, spam, phishing and cloned domain names.

Before AI, this work relied mainly on blacklists, which required heavy, constant updating.

In particular, criminal groups use domain generation algorithms to create large numbers of domains while constantly switching among them; intelligent algorithms are needed to learn, detect, and block these malicious domains.
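One character-level signal such detectors learn is that algorithmically generated names look random: high Shannon entropy, few vowels. The heuristic below is a toy sketch of that idea; the 3.5-bit threshold and vowel-ratio cutoff are illustrative assumptions, and production systems train models over many more features.

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy (bits per character) of a string."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_generated(domain):
    """Crude heuristic for DGA-style domains: random-looking names have
    high character entropy and few vowels. Thresholds are assumptions."""
    name = domain.split(".")[0]
    vowel_ratio = sum(ch in "aeiou" for ch in name) / len(name)
    return shannon_entropy(name) > 3.5 and vowel_ratio < 0.3

print(looks_generated("google.com"))            # False
print(looks_generated("xj4kqz91vbt3mwp0.com"))  # True
```

A blacklist cannot keep up with a generator that emits thousands of fresh domains a day, but a statistical test like this generalizes to domains it has never seen, which is the point of the AI approach.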

Encrypted Traffic Analysis

With the development of a new generation of network technology, more than 80% of Internet traffic is currently encrypted. The use of encryption technology improves the security of data transmission, but it also brings greater challenges to network security. Attackers can use encryption technology to transmit sensitive information and malicious data.

With AI, there is no need to decrypt and inspect the payload: security detection of encrypted traffic can be achieved by analyzing metadata and packet-level characteristics of the network traffic, combined with application-level detection, effectively resisting malicious attacks.

At present, AI encrypted traffic analysis has already played a role in practice, but this technology is still in the emerging stage of development.
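A minimal sketch of metadata-only analysis: summarize each flow by packet-size statistics and flag flows that deviate sharply from a baseline, without ever touching the encrypted payload. The flows, features, and 3-sigma rule here are illustrative assumptions.

```python
import statistics

def flow_features(packet_sizes):
    """Summarize a flow by its packet-size metadata only (no payload)."""
    return (statistics.mean(packet_sizes), statistics.pstdev(packet_sizes))

# "Baseline" flows (e.g. ordinary HTTPS browsing) and one suspicious flow
# of uniformly large packets, a pattern often seen in bulk exfiltration.
baseline_flows = [
    [120, 1460, 80, 600, 1460, 95],
    [100, 1350, 70, 720, 1460, 110],
    [130, 1400, 90, 650, 1380, 100],
]
suspect = [1450, 1450, 1455, 1460, 1450, 1455]

means = [flow_features(f)[0] for f in baseline_flows]
mu, sd = statistics.mean(means), statistics.pstdev(means)

def is_anomalous(flow, k=3.0):
    # Flag flows whose mean packet size is more than k sigma from baseline.
    return abs(flow_features(flow)[0] - mu) > k * sd

print(is_anomalous(suspect))  # True
```

Real systems add timing, direction, and TLS-handshake features and feed them to trained models, but the principle is the same: the metadata betrays the behavior even when the content is unreadable.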

Detecting Unknown Threats

Based on statistical data, AI can recommend which protection tools to use or which settings need to be changed to automatically improve the security of the network.

And because of the feedback mechanism, the more data the AI processes, the more accurate its recommendations will be.

In addition, the scale and speed of intelligent algorithms are unmatched by humans, and the perception of threats is real-time and constantly updated.

Intelligent Alert Handling and Analysis

Alert analysis is the core of security operations. Screening important risk events out of massive volumes of alerts places a heavy burden on security operators.

In daily operations, after learning from large numbers of historical analysis reports, AI can quickly generate analysis reports, capture key anomalies, and produce handling suggestions for the alert events and statistical indicators generated by various security devices, helping analysts get a quick overview of events.
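The core of such triage is scoring and ranking alerts so the most important events surface first. The fields, weights, and keywords below are assumptions for demonstration, not any product's actual model.

```python
# Illustrative sketch of alert triage: score each alert by severity,
# suspicious keywords, and asset criticality, then rank. All weights and
# keywords here are demonstration assumptions.
SEVERITY_WEIGHT = {"critical": 4, "high": 3, "medium": 2, "low": 1}
KEYWORDS = {"lateral movement": 3, "c2": 3, "brute force": 2, "port scan": 1}

def score(alert):
    s = SEVERITY_WEIGHT.get(alert["severity"], 0)
    text = alert["message"].lower()
    s += sum(w for kw, w in KEYWORDS.items() if kw in text)
    s += 2 if alert.get("asset_critical") else 0
    return s

alerts = [
    {"severity": "low", "message": "port scan from 10.0.0.5", "asset_critical": False},
    {"severity": "high", "message": "possible C2 beacon detected", "asset_critical": True},
    {"severity": "medium", "message": "brute force on ssh", "asset_critical": False},
]
ranked = sorted(alerts, key=score, reverse=True)
print([a["message"] for a in ranked])
```

An ML system learns the weights from historical analyst decisions instead of hard-coding them, but the output, a priority-ordered queue, is what lightens the operator's load.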

Detecting Fake Images

An AI algorithm using recurrent neural networks and encoded filters can identify “deep fakes,” discovering whether faces have been replaced in photos.

This capability is especially useful for remote biometrics in financial services, preventing scammers from faking photos or videos to pass themselves off as legitimate citizens who can get a loan.

Voice, Language and Speech Recognition

This AI technology can read unstructured information in non-machine-readable formats, combine it with structured data from various network devices, and enrich the data set to support accurate judgments.


Don’t Blame ChatGPT, AI Hacking Has Already Begun

The era of AI has arrived, and network security will undergo tremendous changes in this era. New forms of attack will emerge in an endless stream, and new requirements for security protection capabilities will also be raised.

Adapting to AI, combining human and machine skills, and using AI-based systems to accumulate experience will maximize AI's advantages in network security protection and prepare us for the coming upgrades in both attack and defense.