Oasis Security this week warned application developers of a security flaw in the Cursor artificial intelligence (AI) code editor developed by Anysphere, Inc. that could allow a maliciously crafted code repository to execute code as soon as it is opened in Cursor.
Erez Schwartz, threat research engineer at Oasis Security, said that unlike other coding tools based on the open source Visual Studio Code (VS Code) editor originally developed by Microsoft, Cursor ships with the Workspace Trust feature disabled by default.
As a result, application developers who open a folder from a repository that has been deliberately crafted to distribute malware might inadvertently auto-execute commands embedded in it, said Schwartz.
It’s not clear to what degree providers of AI coding tools should be putting security guardrails in place, but DevSecOps teams should at the very least remind application developers who might be using Cursor, or any other similarly configured tool, of the potential security issues they might encounter. Ideally, DevSecOps teams should have more control over how AI coding tools are configured to ensure best practices are being followed.
Oasis Security is also recommending that, in addition to enabling Workspace Trust so that it runs at startup, developers consider setting task.allowAutomaticTasks to off and open any unknown repositories only in a safe environment. They should also search for .vscode/tasks.json files containing "runOn": "folderOpen" and monitor for spawned shells and unusual outbound requests that occur immediately after opening a project.
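For teams that want to automate that last check, a minimal sketch along the following lines, written in Python and using only the standard library, could walk a directory of cloned repositories and flag any .vscode/tasks.json that declares "runOn": "folderOpen". The script, its paths and its output format are illustrative assumptions, not part of Oasis Security's published guidance.

```python
#!/usr/bin/env python3
"""Flag VS Code/Cursor tasks configured to run automatically on folder open.

Illustrative sketch only: walks a directory tree of cloned repositories and
reports any .vscode/tasks.json that declares "runOn": "folderOpen", the
trigger an auto-executing task relies on.
"""
import json
import sys
from pathlib import Path


def has_auto_run_task(tasks_file: Path) -> bool:
    """Return True if the tasks.json defines a task that runs on folder open."""
    try:
        text = tasks_file.read_text(encoding="utf-8", errors="replace")
    except OSError:
        return False
    try:
        config = json.loads(text)
    except json.JSONDecodeError:
        # tasks.json is often JSON-with-comments, which json.loads rejects,
        # so fall back to a plain substring check in that case.
        return '"runOn"' in text and '"folderOpen"' in text
    return any(
        task.get("runOptions", {}).get("runOn") == "folderOpen"
        for task in config.get("tasks", [])
    )


if __name__ == "__main__":
    scan_root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    hits = [p for p in scan_root.rglob(".vscode/tasks.json") if has_auto_run_task(p)]
    for path in hits:
        print(f"auto-run task found in {path}")
    sys.exit(1 if hits else 0)
```

A check like this could be run before opening an unfamiliar repository, or wired into a CI or pre-clone review step, with the non-zero exit code used to prompt a manual review.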
The team that developed Cursor plans to update the security guidance it provides, but it will mainly be up to individual developers and the organizations they work for to determine and enforce cybersecurity policies, said Schwartz.
Whether usage of AI coding tools is sanctioned or not, application developers are going to be at the very least experimenting with them. While there is no doubt they improve productivity, the code being generated still needs to be reviewed for vulnerabilities. AI coding tools depend on large language models (LLMs) that were trained on examples of flawed code, so the same classes of vulnerabilities now surface in the code those tools generate.
The code generated by these tools also tends to be far more verbose than code written by a human, so DevOps teams might also want to consider how much maintenance will be required when that code needs to be debugged, either before or after it has been deployed in a production environment.
Ultimately, most DevOps teams would be better served by defining a set of policies and best practices for using AI coding tools versus simply allowing them to become a set of shadow tools that no one has any visibility into or control over. After all, the AI coding genie at this point is not going back in the bottle. The challenge now is to find the best way forward that, in addition to providing a better developer experience, also serves the best interests of the organizations that employ them.