ChatGPT AI is changing the face of cybersecurity

SAN FRANCISCO – On March 14, artificial intelligence (AI) research laboratory OpenAI released GPT-4, the latest iteration of its popular deep learning language model.

Built on that framework, OpenAI’s ChatGPT program has taken the world by storm in recent months as people and businesses come to terms with both the limitations and the potential of the AI application. Some of those boundaries are already being pushed with this latest version of GPT. “For example, it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%,” researchers said of GPT-4’s capabilities. The differences are subtle, they say, but as a result GPT-4 can follow more nuanced instructions.

The security world has taken notice of those abilities and has already begun applying them to enhance cybersecurity. UK-based security software and hardware company Sophos already has three projects harnessing GPT-3.5 to improve detection of malicious activity: a natural language query interface for searching for malicious activity, a GPT-based spam email detector, and a tool for analyzing “living off the land” binary (LOLbin) command lines.

“While not perfect, these approaches demonstrate the potential of using GPT-3 as a cyber-defender’s co-pilot,” Sean Gallagher, a senior threat researcher at Sophos, writes. “The results of both the spam filtering and command line analysis efforts are posted to SophosAI’s GitHub page as open source under the Apache 2.0 license, so those interested in trying them out or adapting them to their own analysis environments are welcome to build on the work.”
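To make the idea concrete, a spam filter of this kind essentially reduces to prompting the model to label a message. The Python sketch below shows the general shape of such a classifier; it is a hypothetical illustration rather than SophosAI’s published code, and the prompt wording, model choice, and use of the OpenAI Python client (v1+, with an OPENAI_API_KEY set in the environment) are all assumptions.

    # Hypothetical sketch of an LLM-based spam classifier, in the spirit of
    # the SophosAI experiments described above. This is NOT Sophos's code;
    # it simply asks a GPT model to label a message as spam or legitimate.
    # Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY
    # environment variable.
    from openai import OpenAI

    client = OpenAI()

    PROMPT = (
        "You are an email security filter. Classify the following message "
        "as SPAM or HAM (legitimate). Answer with a single word.\n\n"
        "Message:\n{body}"
    )

    def classify_email(body: str) -> str:
        """Return 'SPAM' or 'HAM' for the given message body."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # the model family Sophos experimented with
            temperature=0,          # deterministic output for classification
            messages=[{"role": "user", "content": PROMPT.format(body=body)}],
        )
        return response.choices[0].message.content.strip().upper()

    if __name__ == "__main__":
        print(classify_email(
            "Congratulations! You've won a $1,000 gift card. Click to claim."
        ))

As Gallagher notes, approaches like this are not perfect; a single model verdict is best treated as a supplement to conventional filtering rather than a replacement for it.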

Of course, the opposite tactic is also at work, with security researchers trying to determine just how much of a threat GPT-based malware could be. SentinelOne recently published a blog post about one such attempt, BlackMamba, a proof-of-concept malware that reaches out to platforms like OpenAI’s at runtime to generate malicious code.

“The use of the AI is intended to overcome two challenges the authors perceived were fundamental to evading detection,” blog author Migo Kedem writes. “First, by retrieving payloads from a ‘benign’ remote source rather than an anomalous C2, they hope that BlackMamba traffic would not be seen as malicious. Second, by utilizing a generative AI that could deliver unique malware payloads each time, they hoped that security solutions would be fooled into not recognizing the returned code as malicious.”

Ultimately, Kedem concludes that AI platforms provide cybersecurity with as many tools as they do threats, and that cybersecurity experts must remain vigilant in the face of rapidly changing technology.

To its credit, OpenAI worked with more than 50 experts across a variety of fields, cybersecurity among them, to mitigate the risks involved with its platform. According to the company, GPT-4 is 82 percent less likely than its predecessor to respond to requests for disallowed content and other sensitive requests. “There’s still a lot of work to do, and we look forward to improving this model through the collective efforts of the community building on top of, exploring, and contributing to the model,” the company wrote.

Read OpenAI's full release of the GPT-4 platform at openai.com/research/gpt-4.
