Sourcegraph today made its open source Cody tool generally available, a coding assistant that leverages generative artificial intelligence (AI) to write and fix code.
Version 1.0 of Cody invokes StarCoder, an open source large language model (LLM) for code generation made available by Hugging Face, along with a chat tool that uses a dense-sparse vector retrieval system for code and documentation to invoke the GPT-4 Turbo, Claude 2, GPT-3.5 Turbo, Claude Instant and Mixtral-8x7B LLMs.
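The article does not detail how Cody's dense-sparse retrieval works internally, but the general technique blends two signals: a sparse score from keyword overlap and a dense score from embedding similarity. The sketch below is an illustrative toy, not Sourcegraph's implementation; the function names, the toy embeddings and the blending weight `alpha` are all assumptions.

```python
import math

def sparse_score(query_tokens, doc_tokens):
    # Sparse signal: fraction of query tokens that appear in the document.
    doc = set(doc_tokens)
    return sum(t in doc for t in query_tokens) / len(query_tokens)

def dense_score(q_vec, d_vec):
    # Dense signal: cosine similarity between (toy) embedding vectors.
    dot = sum(a * b for a, b in zip(q_vec, d_vec))
    norm = math.sqrt(sum(a * a for a in q_vec)) * math.sqrt(sum(b * b for b in d_vec))
    return dot / norm if norm else 0.0

def hybrid_score(q_tokens, d_tokens, q_vec, d_vec, alpha=0.5):
    # Blend the two signals; alpha weights the dense side.
    return alpha * dense_score(q_vec, d_vec) + (1 - alpha) * sparse_score(q_tokens, d_tokens)

# Toy corpus: two "documents" paired with hand-made embedding vectors.
docs = [
    ("parse the config file", [0.9, 0.1, 0.2]),
    ("render the login page", [0.1, 0.8, 0.3]),
]
query_tokens = ["parse", "config"]
query_vec = [0.8, 0.2, 0.1]

ranked = sorted(
    docs,
    key=lambda d: hybrid_score(query_tokens, d[0].split(), query_vec, d[1]),
    reverse=True,
)
print(ranked[0][0])  # the config-parsing document ranks first
```

In a real system the sparse side would typically be BM25 and the dense side a learned code embedding, but the blending step looks much the same.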
Designed to be deployed as a Visual Studio Code (VS Code) extension, Cody is available in a free edition as well as a professional edition, dubbed Cody Pro, that Sourcegraph supports. Cody Pro will be free until February 2024, after which it will cost $9 per month. There is also a version for the JetBrains editor available in beta, along with an experimental Neovim offering.
Cody is also integrated with the code search tools provided by Sourcegraph to give development teams additional context that, depending on the use case, delivers a code completion acceptance rate of 30% or higher.
In addition to enabling developers to write code faster, Cody simultaneously invokes the graph technology Sourcegraph created to add context to a prompt. By parsing code and producing a graph that captures its semantic structure, Cody makes it simpler to surface relevant code. For example, Cody can search the codebase for files and documents that suggest a rough plan of attack for implementing a new feature or resolving issues surfaced in a bug report.
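To make the "parse code, produce a graph" idea concrete, here is a minimal sketch of extracting a call graph from Python source with the standard-library `ast` module. This is an illustration of the general technique, not Sourcegraph's code graph, which spans many languages and captures far richer relationships.

```python
import ast

def build_call_graph(source):
    # Map each top-level function name to the set of names it calls.
    tree = ast.parse(source)
    graph = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            calls = {n.func.id for n in ast.walk(node)
                     if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
            graph[node.name] = calls
    return graph

code = """
def load():
    return open("data.txt").read()

def process():
    return load().upper()
"""

graph = build_call_graph(code)
print(graph)  # e.g. process depends on load, load depends on open
```

Even this tiny graph tells a tool that a change to `load` may affect `process` — the kind of relationship an assistant can use to pull the right files into a prompt.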
Cody can also create, save and share custom commands to make it simpler to reuse the prompts created to write code.
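A custom command is essentially a saved, named prompt plus a description of what context to attach. As an illustration only, such a command might be declared in a JSON settings file along the following lines; the file location and exact schema here are assumptions, not taken from Sourcegraph's documentation:

```json
{
  "commands": {
    "explain-errors": {
      "prompt": "Explain the error handling in the selected code and suggest improvements.",
      "context": {
        "currentFile": true,
        "selection": true
      }
    }
  }
}
```

Checking such a file into a repository is what makes the prompt shareable: every teammate who opens the project gets the same command.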
Future versions of Cody will be more tightly integrated with the Sourcegraph universal code graph to provide additional context. In addition, Cody will soon leverage deeper graph context to make autocomplete suggestions based on symbols defined elsewhere.
Sourcegraph CEO Quinn Slack said the combination of graph technologies and LLMs reduces latency while helping to ensure the code generated by the LLM doesn’t produce hallucinations such as type errors and imaginary function names.
In addition, Sourcegraph is committed to applying AI and graph technologies beyond the codebase itself to provide deeper insights into the environment code will eventually be deployed in, added Slack.
It’s not clear how much developers are relying on generative AI tools such as Cody and GitHub Copilot to write code faster, but it’s clear the pace at which software can be written is accelerating. That increased pace of application development will eventually require DevOps to revisit workflows that were not designed to manage codebases that, in the months ahead, will significantly increase in size.
Each developer will most likely have a personal preference for one generative AI tool versus another, depending on the quality of the code generated. The issue is that no matter how good any of those tools becomes, the code generated will still need to be reviewed by a human before it can be deployed in any production environment.