Nearly everything we use is built on code, from cars to smart fridges to doorbells. In businesses, countless applications keep devices, workflows and operations running. So, when early no-code development platforms launched in 2010, promising more accessible application development for citizen developers, their success felt inevitable.
It’s hard to deny the success of no-code. These platforms flatten the learning curve for would-be developers, allowing organizations to innovate and automate with useful applications despite a developer skills shortage. Plus, the out-of-the-box applications offered by no-code platforms expedite the application development process in a world where speed-to-market is king. Last year, Forrester found that 87% of enterprise developers use low-code and no-code tools or platforms for at least some of their workload.
But functionality is not the only sign of success. Unintentionally, the same trends that pushed for application development to be democratized have led to a wild west of insecure applications and misconfigurations that expose a whole host of organizations to cyberthreats.
The Importance of Security
While these platforms democratize development, they must be used with caution. The OWASP Top 10 lists security misconfiguration and the use of vulnerable components among the most common application security risks. Yet a reliance on no-code development can introduce undetected vulnerabilities directly into an organization.
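To make the misconfiguration risk concrete, here is a minimal, hypothetical sketch of the kind of automated check a security review might run against an application's settings. The config keys and rules are illustrative assumptions, not tied to any particular no-code platform.

```python
# Illustrative sketch: flagging common security misconfigurations.
# The keys ("debug", "cors_allowed_origins", "admin_password") are
# hypothetical examples of settings a generated app might expose.

def audit_config(config: dict) -> list[str]:
    """Return a list of warnings for risky settings."""
    warnings = []
    if config.get("debug", False):
        warnings.append("debug mode enabled in production")
    if config.get("cors_allowed_origins") == "*":
        warnings.append("CORS allows all origins")
    if config.get("admin_password") in {"admin", "password", ""}:
        warnings.append("default or empty admin credential")
    return warnings

# An insecure default export like this would trigger all three checks
risky = {"debug": True, "cors_allowed_origins": "*", "admin_password": "admin"}
print(audit_config(risky))
```

Even a simple audit like this catches the settings that platform defaults often leave open; the point is that someone with security knowledge has to know to look.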
Forrester has long warned of the risk of no- and low-code, featuring the vulnerability in its predictions for the coming years. The specter of an untrained employee creating applications is especially alarming: These platforms empower employees with no application security knowledge to develop programs that security teams are often unaware of.
Organizations must gain real oversight into who is responsible for developing software, whether professional developers leveraging no-code platforms as tools, or citizen developers creating applications for smaller teams and projects. It is no secret that CVEs are rising sharply. They hit a record 28,092 last year and are projected to increase by 25% throughout 2024. Last December, Microsoft revealed a high-severity CVE that affected low-code and no-code users.
When businesses are facing a tide of new exploits each day, skills such as vulnerability detection and remediation are critical to any new software development project.
Software development needs to become more flexible in its roles, but never at the expense of security. By fostering a culture of “security by design” across the organization, security leaders can ensure that everyone in the software development lifecycle (SDLC) understands their responsibility for the organization's security posture, including citizen developers.
Empowering Human Intervention
Just one in five organizations are confident in their ability to detect a vulnerability before an application is released, according to Security Journey research, meaning that the security knowledge in most SDLCs is insufficient. Developers need to be trained to create secure software and to sniff out insecure code in the rest of the code base, responding and fixing it quickly, before it reaches production. Without this, application security and security teams carry an unnecessary burden, one that costs more time and money and raises business risk. Securing vulnerabilities after the fact, through regular security scanning and patching, should not be the norm.
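As an example of the kind of flaw a trained developer should catch in review, consider the classic SQL injection pattern: user input concatenated into a query versus a parameterized query. This is a generic illustrative sketch using Python's standard sqlite3 module; the table and function names are made up for the example.

```python
import sqlite3

# A throwaway in-memory database for the demonstration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is concatenated straight into the SQL
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Fixed: the driver binds the value, so input cannot alter the query
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload dumps every row from the unsafe version
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks all users
print(find_user_safe(payload))    # returns no rows
```

Spotting the difference between these two functions takes seconds for a developer who has been trained to look, which is exactly the skill most SDLCs currently lack.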
Instead, organizations have an opportunity to empower developers with security skills and knowledge throughout their careers. These skills allow them to deliver a higher quality of output and play a vital role in securing the organization itself. And yet, just one in three organizations (36%) train developers to write secure code, which means that at the remaining 64%, software security begins only at the end of the development process. The same goes for citizen developers, whose work often shows a passion for problem-solving and innovation that should be harnessed and supported with the right training.
A Problem for the Future
It would be reductive to suggest that software security risk is rising purely because of no-code and low-code platforms, or even that citizen developers are a majority contributor to CVEs. The very culture of software development has been forcing the industry down a path of speed and ease over security for almost two decades. The surge in AI-generated coding proves this more than anything.
So long as developers and the project managers above them are incentivized to move fast, each new solution will be valued based on its ability to create functional code quickly. This will be the case until security is made a priority by regulators and, importantly, by executives and the board.
The cultural disconnect at the heart of the software security crisis is human-made. No matter which tool is in the spotlight, building secure software will require human solutions. While large language models (LLMs) aren’t yet equipped to tackle the complex considerations that developers face daily, the biggest risk will come from complacency, overreliance and blind trust from their users. Human developers, equipped to perform rigorous code reviews and remediate any vulnerabilities they find, are central to the safe implementation of any of these technologies, whether no-code, low-code or AI-generated. Without this, the tools run the risk of introducing more vulnerabilities with less critical supervision.
No matter what tools arise, knowledgeable human users will always need to understand what the tool is providing and can act as a stopgap for quality and security.