Shreds.AI today unveiled a namesake generative artificial intelligence (AI) platform based on a large language model (LLM) it trained specifically to automate software engineering tasks.
Available in beta, the Shreds.AI platform can assign tasks to as many as eight other LLMs by invoking the application programming interfaces (APIs) they expose.
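To make that orchestration concrete, the sketch below shows what routing sub-tasks to several backend LLMs through a uniform interface could look like. The routing table, model names, and stub backends are illustrative assumptions; Shreds.AI has not published its actual API.

```python
# Illustrative sketch of multi-LLM task routing. Everything here is a
# hypothetical placeholder, not Shreds.AI's real interface.
from typing import Callable, Dict

# Each backend is modeled as a callable that takes a prompt and returns text.
LLMBackend = Callable[[str], str]

def make_stub_backend(name: str) -> LLMBackend:
    """Stand-in for a real API client (e.g. an HTTP call to a hosted model)."""
    return lambda prompt: f"[{name}] response to: {prompt}"

# Hypothetical routing table: task type -> the backend best suited for it.
ROUTES: Dict[str, LLMBackend] = {
    "codegen": make_stub_backend("model-a"),
    "review": make_stub_backend("model-b"),
    "docs": make_stub_backend("model-c"),
}

def dispatch(task_type: str, prompt: str) -> str:
    """Send a sub-task to whichever backend the routing table selects."""
    backend = ROUTES.get(task_type)
    if backend is None:
        raise ValueError(f"no backend registered for task type {task_type!r}")
    return backend(prompt)

if __name__ == "__main__":
    print(dispatch("codegen", "implement a pagination helper"))
```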
Shreds.AI CEO Soufiane Amar said that, rather than producing small snippets of code, the platform has been trained to generate the tens of thousands of lines of code, spread across many files, needed to drive complex software engineering workflows. Because it has been trained on the tools developers regularly employ, the platform can also accurately orchestrate the integration of the various software components that make up an application, he added.
A developer inputs a simple natural-language description of the software they want Shreds.AI to create, and the platform then generates architectural diagrams and the code for independent, isolated features called shreds. DevOps teams only need to validate the code before employing it, a process Shreds.AI simplifies through a network of independent developers that organizations can contract to review code, said Amar.
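The workflow described above can be summarized in a few lines of code. This is purely an illustration under assumed names and structures, with stubs standing in for the generation and review steps, since Shreds.AI has not documented its internals.

```python
# Hedged sketch of the described workflow: a natural-language spec goes in,
# independent "shreds" come out, and each shred is reviewed before use.
# All names here are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class Shred:
    """One independent, isolated feature generated from the spec."""
    name: str
    code: str
    reviewed: bool = False

def generate_shreds(spec: str) -> list[Shred]:
    """Stand-in for the generation step (spec -> architecture -> shreds)."""
    # A real system would call the platform here; we fake two features.
    return [
        Shred("auth", f"# code for auth derived from: {spec}"),
        Shred("billing", f"# code for billing derived from: {spec}"),
    ]

def validate(shred: Shred) -> Shred:
    """Stand-in for the human review step (e.g. a contracted reviewer)."""
    shred.reviewed = True
    return shred

if __name__ == "__main__":
    shreds = [validate(s) for s in generate_shreds("an invoicing web app")]
    print("ready to integrate:", [s.name for s in shreds if s.reviewed])
```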
Shreds.AI is a meta AI in the sense that, in addition to generating code and reasoning across processes, it can rank third-party LLMs based on their ability to perform specific tasks, he noted.
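One plausible reading of that ranking capability is scoring each model on a task-specific benchmark and sorting the results, as in the short sketch below. The tasks, models, and scores are invented for illustration only.

```python
# Hypothetical per-task model rankings; the benchmark scores are made up.
task_scores = {
    "sql-generation": {"model-a": 0.91, "model-b": 0.84, "model-c": 0.77},
    "unit-tests": {"model-a": 0.72, "model-b": 0.88, "model-c": 0.81},
}

def rank_models(task: str) -> list[tuple[str, float]]:
    """Return models ordered best-first for the given task."""
    return sorted(task_scores[task].items(), key=lambda kv: kv[1], reverse=True)

print(rank_models("unit-tests"))
# [('model-b', 0.88), ('model-c', 0.81), ('model-a', 0.72)]
```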
The Shreds.AI platform is already being tested by the automotive conglomerate Stellantis and by Réseau de Transport d’Électricité (RTE), France's electricity transmission system operator. Shreds.AI estimates that an application that previously might have cost $1 million can now be created for less than $30,000, and the company claims the platform cuts time to market, team sizes and costs by more than 80% compared to traditional software development methods. It also says that by enabling automatic maintenance, the platform addresses software obsolescence, extending software lifespan by more than 60%, for example by making it easier to convert an application from the programming language it was built in to another language that more developers know.
These are still early days for the incorporation of AI into DevOps workflows, and the code these platforms generate still needs to be managed. The challenge is that, thanks to the rise of AI, more software is expected to be built and deployed in the next two years than has been deployed in the past two decades. The only way to keep pace with that rate of development will be to apply AI to the management of DevOps workflows as well.
In the hope of eliminating as much existing toil as possible, DevOps teams should be identifying which manual processes they regularly perform today, with an eye toward applying AI to them tomorrow. After all, the whole point of embracing DevOps in the first place was to ruthlessly automate as many software engineering processes as possible, so that more applications can be built and deployed as quickly as possible.