Malware created using OpenAI’s generative chatbot


Security researcher Aaron Mulgrew at Forcepoint, in a private test, was able to create malware using OpenAI’s generative chatbot. He prompted ChatGPT to write the code function by function, in separate prompts, and after compiling the pieces obtained an undetectable data-stealing executable. The malware can search a computer’s files for data, and it goes further: once it has collected data, it breaks it into smaller pieces and, using steganography, hides those pieces inside other images on the device. The whole process avoids detection.
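The steganographic step described above can be illustrated with a minimal least-significant-bit (LSB) sketch. This is not Forcepoint’s actual code: the function names are hypothetical, and the cover “image” is modeled as raw pixel bytes rather than a real PNG, which would add headers and compression not handled here.

```python
def embed(pixels: bytearray, payload: bytes) -> bytearray:
    """Write each payload bit into the LSB of successive pixel bytes."""
    out = bytearray(pixels)
    # Unpack payload into individual bits, least-significant bit first.
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("cover image too small for payload")
    for idx, bit in enumerate(bits):
        # Clear the lowest bit of the pixel byte, then set it to the payload bit.
        out[idx] = (out[idx] & 0xFE) | bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Read `length` bytes back out of the pixel LSBs."""
    data = bytearray()
    for byte_idx in range(length):
        value = 0
        for i in range(8):
            value |= (pixels[byte_idx * 8 + i] & 1) << i
        data.append(value)
    return bytes(data)

cover = bytearray(range(256)) * 4   # stand-in for raw pixel bytes
secret = b"exfil"                   # illustrative payload fragment
stego = embed(cover, secret)
assert extract(stego, len(secret)) == secret
```

Because only the lowest bit of each pixel byte changes, the carrier image looks visually identical, which is why this kind of exfiltration is hard to spot.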


Mulgrew claimed not to have any advanced coding experience. The malware was created for research purposes and is not publicly available. What is surprising is how easy it was to trick the AI chatbot into helping accomplish illegal and prohibited operations. ChatGPT, Bard, and Bing all have strict rules on what they can and cannot say to a human, but this test shows those guardrails can be bypassed. This is dangerous for the future, because machines are becoming increasingly integrated into our daily lives. What will be next? The cybersecurity community needs to adapt. First, it is important to ensure that ChatGPT is used only for ethical and legal purposes.