While AI promises to transform the enterprise for the better, the powerful software is becoming a double-edged sword for cybersecurity teams. 

The technology, which has gripped Silicon Valley since the launch of OpenAI’s ChatGPT in November 2022, is playing an increasingly important role in cybersecurity, allowing organisations to better detect threats and protect their systems and data resources. 

But less than six months after AI first took the tech world by storm, malicious actors have already found ways to weaponise its capabilities.

From launching malware campaigns in minutes to building sophisticated phishing attacks, threat actors are finding new ways to turn AI tools into weapons for cybercrime and exploit the technology’s powerful capabilities to infiltrate IT infrastructures. 

This evolving cyber threat is set to open up a whole new dimension of risk. A recent investigation by BlackBerry found that over half of IT decision-makers believe their organisation will fall victim to a successful cyberattack attributed to AI this year, with particular concern around OpenAI’s ChatGPT. 

"It’s been well documented that people with malicious intent are testing the waters," Shishir Singh, CTO for Cybersecurity at BlackBerry, noted. 

“We expect to see hackers get a much better handle on how to use ChatGPT successfully for nefarious purposes; whether as a tool to write better mutable malware or as an enabler to bolster their ‘skillset’,” Singh added. 

Cybersecurity experts echo Singh’s concerns. Of the 1,500 IT decision-makers surveyed, 72 per cent acknowledge the potential threat that AI tools like ChatGPT pose to cybersecurity. 

Paradoxically, however, 88 per cent of those interviewed also believe that the same technology will be essential for performing security tasks more efficiently. Whether AI will have an overall positive or negative impact on cybersecurity remains to be seen.

With the technology rapidly evolving, however, there’s no doubt that AI will continue to polarise an industry that has long expressed concern about the threat such models pose to business continuity. 

The good

Given the rapidly evolving nature of cyberattacks and the development of increasingly sophisticated attack vectors, implementing AI in security practices is key to staying ahead of the threat landscape.

There are a number of endpoint security tools available on the market that use AI, machine learning, and natural language processing to detect sophisticated attacks before they happen, as well as to classify and correlate large volumes of security signals. 

These tools can automate threat detection and provide a more effective response compared to traditional systems and manual techniques, allowing businesses to optimise their cybersecurity measures and stay ahead of potential threats. 
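
As a simple illustration of what sits under the hood of such tooling, the sketch below trains an unsupervised anomaly detector on a baseline of normal activity and flags outliers. It assumes scikit-learn is available, and every feature name and value is invented for the example:

```python
# A minimal sketch of AI-driven anomaly detection, assuming scikit-learn is
# available and that security events have been reduced to numeric features.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical feature vectors: [failed_logins, megabytes_sent, hour_of_day]
baseline_events = np.array([
    [0, 1.2, 9], [1, 0.8, 10], [0, 2.1, 14], [0, 1.5, 11], [1, 1.0, 16],
])

# Train an unsupervised model on a baseline of normal activity.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_events)

# Score new events: -1 flags an anomaly for review, 1 means routine.
new_events = np.array([
    [0, 1.3, 10],    # looks like the baseline
    [25, 480.0, 3],  # many failed logins and a huge upload at 3 a.m.
])
print(model.predict(new_events))  # typically [ 1 -1 ]
```

Real products correlate far richer signals, but the principle is the same: learn what normal looks like, then surface what deviates from it.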

Providing IT teams with AI tools can also reduce the burden on overworked staff, leading to an increase in productivity and a reduction in human error. 

IBM’s Cyber Security Intelligence Index Report found that "human error was a major contributing cause in 95 per cent of all breaches".

AI can swiftly and comprehensively analyse vast amounts of data, enabling it to identify threats even in seemingly routine activities. 

Microsoft's Cyber Signals, for instance, draws on an astounding 24 trillion security signals and tracks 40 nation-state groups and 140 hacker groups.

The process of data analysis, whether performed by human analysts or AI, inevitably produces false positives. 

However, with AI, the need for manpower to investigate and eliminate these false positives is significantly reduced, allowing for more effective deployment of resources.

AI can also learn and improve over time as it accumulates more data, enhancing its ability to identify and examine false positives and to detect unusual patterns more accurately.

These patterns are then stored in a knowledge bank, which aids in locating future threats and preventing potential breaches. 
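
As a rough illustration of that feedback loop, the sketch below assumes analysts triage each alert and record a verdict in the "knowledge bank", which is then used to retrain a simple classifier. All feature names and values here are hypothetical:

```python
# A rough sketch of the analyst feedback loop: each alert is triaged and
# the verdict (1 = real threat, 0 = false positive) is kept and used to
# retrain a classifier. All names and values are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical alert features: [anomaly_score, asset_criticality, prior_alerts]
alerts = np.array([[0.9, 3, 0], [0.4, 1, 2], [0.8, 2, 5], [0.3, 1, 1]])
verdicts = np.array([1, 0, 1, 0])  # analyst triage decisions

clf = LogisticRegression().fit(alerts, verdicts)

# Retraining as verdicts accumulate teaches the model to suppress alert
# patterns that analysts keep dismissing, cutting future false positives.
new_alert = np.array([[0.35, 1, 1]])
print(clf.predict_proba(new_alert)[0, 1])  # estimated probability of a real threat
```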

The bad 

As AI becomes more accessible than ever before, an increasing number of malicious actors are exploiting the technology to launch sophisticated attacks against organisations – and traditional defences may struggle to keep pace with the sophistication and speed of AI-powered attacks.

AI can be used to analyse vast amounts of data and identify vulnerabilities in networks or systems, allowing cybercriminals to launch highly targeted and sophisticated attacks with greater precision and scale.

Another key cybersecurity risk brought on by AI is adversarial machine learning. This refers to the manipulation of AI models by malicious actors to exploit vulnerabilities in their design and cause them to produce incorrect or harmful outputs. 

By introducing subtle and intentional changes to data inputs, attackers can deceive AI models into making wrong decisions or classifications, leading to potentially catastrophic consequences.

This could be exploited in various scenarios, such as autonomous vehicles making incorrect decisions, facial recognition systems being fooled by manipulated images, or AI-powered financial systems making flawed predictions, resulting in financial fraud.
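
The sketch below illustrates the underlying idea with a toy linear classifier and an FGSM-style perturbation, using only NumPy; the weights and input are invented for illustration and not drawn from any real system:

```python
# A toy FGSM-style adversarial perturbation against a linear classifier,
# in plain NumPy. The weights and input are invented for illustration.
import numpy as np

w = np.array([1.5, -2.0, 0.5])  # hypothetical trained weights
b = 0.1

def score(x):
    """Probability the model assigns to the 'benign' class."""
    return 1 / (1 + np.exp(-(w @ x + b)))

x = np.array([0.2, -0.4, 0.9])  # an input the model classifies correctly
print(f"clean score: {score(x):.2f}")  # ~0.84, confidently benign

# FGSM: shift every feature a small step (epsilon) in the direction that
# most reduces the score; for a linear model that direction is -sign(w).
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)
print(f"adversarial score: {score(x_adv):.2f}")  # ~0.41, verdict flipped
```

The perturbation is small and targeted, yet it pushes the model across its decision boundary, which is exactly why adversarial inputs are so hard to spot by eye.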

The increasing reliance on AI for critical decision-making processes raises concerns about the ethical implications of AI-powered systems. Bias and discrimination in AI models can lead to unfair and discriminatory outcomes, perpetuating existing social inequalities. 

Generative AI also gives threat actors the ability to automate the creation of new attack methods. For example, by utilising a generative model trained on a dataset of known vulnerabilities, they can automatically generate exploit code that specifically targets those vulnerabilities.

This approach is not a new concept: techniques such as fuzzing have long been used to automate the discovery of exploitable flaws.
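
As a small-scale illustration of that kind of automation, the sketch below randomly mutates inputs against a deliberately buggy toy parser (parse_record is invented for this example) until one triggers a crash, the class of flaw a fuzzer would report for further investigation:

```python
# A miniature random fuzzer. parse_record is a hypothetical target with a
# deliberate bug; the loop mutates inputs until one triggers a crash.
import random

def parse_record(data: bytes) -> str:
    # Toy parser: first byte is a length prefix for an ASCII payload.
    length = data[0]
    return data[1:1 + length].decode("ascii")  # fails on non-ASCII bytes

random.seed(0)
for i in range(1000):
    fuzz_input = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
    try:
        parse_record(fuzz_input)
    except UnicodeDecodeError as exc:
        print(f"iteration {i}: crash on input {fuzz_input!r}: {exc}")
        break
```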

The scary

While the malicious exploitation of generative AI opens a can of worms in itself, cybersecurity experts have expressed particular concern about the risk of the technology greatly lowering the barrier to entry for cybercrime.

With the ability to generate human-like text and speech, generative models like ChatGPT can be exploited to automate the creation of phishing emails, social engineering attacks, and other types of malicious content.

As part of its content policy, OpenAI has placed restrictions to block malicious content creation on its platform.

While these restrictions should prevent the tool from being used for cybercrime, researchers from Check Point found that hackers have already found ways to bypass these measures.

ChatGPT Telegram bot generates convincing phishing emails. Source: Check Point.

One such way involved creating Telegram bots that use OpenAI’s API in external channels, removing many of the anti-abuse measures put in place by the AI company. 

Researchers found that hackers were using this method to rapidly create malicious content with the AI chatbot, including generating Python scripts for malware attacks and crafting convincing phishing emails in seconds. 

They also discovered several users selling these Telegram bots as a service, a business model that allows cybercriminals to use the unrestricted ChatGPT model for 20 free queries before being charged $5.50 for every 100 queries they make. 

ChatGPT Telegram bot generates Python code for malware. Source: Check Point.

“What’s interesting is that these guys that posted it had never developed anything before,” Sergey Shykevich, the lead ChatGPT researcher on the study, explained. 

Shykevich told Insider that ChatGPT and Codex, an OpenAI service that can write code for programmers, will "allow less experienced people to become developers.”

If these services become easily accessible, hackers with little experience could soon be launching attacks powerful enough to breach the toughest of cybersecurity defences. 

With one small business in the UK successfully hacked every 19 seconds and new ‘Cyber’ Certification Standards coming into play for both physical access control systems and intruder alarm installations, there has never been a more important time to put cybersecurity high on your business's agenda.

National Cyber Security Show will help your organisation gain a better understanding of current threats and how to mitigate them. Review and source the products and software solutions that will better protect your business as you connect with and learn directly from industry-leading experts at the NEC in Birmingham this April, co-located with The Security Event.

Stay ahead of the threat. REGISTER NOW!