CodeRabbit today added support for command line interfaces (CLIs) to its namesake platform, which applies artificial intelligence (AI) and code graph analysis to reviewing code.
Additionally, CodeRabbit has added support for automatic unit test generation and custom pre-merge checks to improve test coverage, along with a Model Context Protocol (MCP) client that enables its platform to fetch additional context, such as feature requirements and engineering documentation, from external sources.
Fresh off raising an additional $60 million in funding, CodeRabbit CEO Harjot Gill said support for a CLI, alongside existing support for integrated development environments (IDEs), provides application developers with a range of options for reviewing code using their preferred tool.
CodeRabbit is primarily used within any Git-based repository to review code as commits are being made. That approach ensures that all code destined for a production environment has been reviewed by an AI platform that is accessed via a natural language chat interface.
Alternatively, application developers can also embed CodeRabbit to review code in real time within the tooling they are using to write code.
The overall goal is to surface routine mistakes so that developers can either catch them themselves or, by relying on AI to identify those routine issues, enable a human reviewer to focus on more complex problems. In effect, CodeRabbit creates a trust layer that isolates code reviews from the tools and platforms used to create a piece of code in the first place, noted Gill.
That approach is especially crucial now that the volume of code being generated by AI tools continues to expand exponentially, noted Gill. Unfortunately, much of that code is being generated by general-purpose large language models (LLMs) that were trained on examples of flawed code, and those flaws now manifest in the output of AI coding tools. CodeRabbit provides a means to review that code using an AI tool specifically trained for that purpose, built on a different generative AI foundation.
CodeRabbit traverses each code repository on the Git platform, along with pull requests and related Jira and Linear issues, using a code graph analysis capability that generates summaries and identifies code dependencies across files. Teams can also define custom instructions using Abstract Syntax Tree (AST) patterns. The platform additionally pulls data dynamically from external sources, such as an LLM, as needed to review code quality. That approach makes it simpler to adhere to coding practices at the organization level, understand file dependencies that might impact other parts of the code and conform to any required security policies.
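To make the idea of AST-pattern checks concrete, the following is a minimal, hypothetical sketch using Python's standard `ast` module. It is not CodeRabbit's actual rule engine; it simply illustrates the kind of structural pattern, here flagging calls to `eval()`, that a team might encode as a custom pre-merge check:

```python
import ast

def find_eval_calls(source: str) -> list[int]:
    """Return the line numbers of eval() calls in the given source.

    Illustrative only: a real review platform would support many such
    patterns and run them across every changed file in a pull request.
    """
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        # Match the AST pattern: a Call whose function is the bare name "eval".
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            hits.append(node.lineno)
    return hits

snippet = "x = eval(user_input)\ny = len(user_input)\n"
print(find_eval_calls(snippet))  # → [1]
```

Because the check operates on the syntax tree rather than raw text, it ignores comments and string literals that merely mention `eval`, which is what makes AST patterns a more reliable basis for policy rules than plain text matching.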
Since launching its platform in 2024, CodeRabbit has amassed more than 8,000 customers, including Chegg, Groupon, Life360 and Mercury. It also offers a free edition of the platform to maintainers of more than 100,000 open source software projects.
It’s still early days so far as adoption of AI within DevOps workflows is concerned, but it’s clear that code reviews are a use case where the potential benefits are substantial. The challenge and the opportunity now is determining how best to use AI to improve the quality of the code being generated.