Anthropic has launched a new AI code review tool inside Claude Code, aiming to help companies handle the growing flood of pull requests created by AI coding tools. As more developers use plain-language prompts to generate software faster, teams face a new problem: far more code to review before they can safely ship it. That has turned code review into a serious bottleneck, especially for larger engineering teams that already rely on AI to speed up development.
The new tool, called Code Review, focuses on catching logic problems before they enter a shared codebase. Anthropic says the system reviews pull requests automatically, leaves comments directly in GitHub, and points developers toward the fixes that matter most. That focus matters: teams do not want noisy feedback on formatting when they are trying to catch real bugs, risky behavior, and mistakes hidden inside AI-generated code.
Anthropic launched Code Review on Monday in research preview for Claude for Teams and Claude for Enterprise customers, with the company positioning it as a practical answer to rising review pressure inside large organizations.
Built for teams shipping more AI-written code
Anthropic says the tool runs several AI agents in parallel, each checking the code from a different angle, before a final agent ranks the most important findings. It also color-codes issue severity, helping engineers see what deserves urgent attention and what merits a second look.
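Anthropic has not published implementation details, so as a rough illustration only, the parallel-reviewers-plus-ranking pattern described above can be sketched in Python with stubbed reviewer "agents" (all function names, checks, and severity labels here are hypothetical, not Anthropic's):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical severity ordering, highest priority first.
SEVERITY_RANK = {"critical": 0, "warning": 1, "info": 2}

def logic_reviewer(diff):
    # Stub agent: flags one illustrative logic mistake.
    findings = []
    if "if x = " in diff:
        findings.append(("critical", "assignment used where comparison was intended"))
    return findings

def security_reviewer(diff):
    # Stub agent: flags one illustrative risky behavior.
    findings = []
    if "eval(" in diff:
        findings.append(("critical", "eval() called on untrusted input"))
    return findings

def style_reviewer(diff):
    # Stub agent: low-severity formatting noise.
    findings = []
    if "\t" in diff:
        findings.append(("info", "tab character; project style uses spaces"))
    return findings

def review(diff, reviewers):
    # Run every reviewer agent over the same diff in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda r: r(diff), reviewers))
    # A final ranking pass merges all findings and surfaces the
    # most severe ones first, mimicking the "final agent" step.
    merged = [f for result in results for f in result]
    return sorted(merged, key=lambda f: SEVERITY_RANK[f[0]])

findings = review("if x = 1:\n\teval(user_input)",
                  [logic_reviewer, security_reviewer, style_reviewer])
for severity, message in findings:
    print(severity, "-", message)
```

In this toy version the critical findings sort ahead of the formatting nitpick, which is the behavior the color-coded severity labels are meant to convey in the real product.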
“Code Review is our answer to that,” said Cat Wu, Anthropic’s head of product.
Wu also said the product concentrates on logic errors so developers get feedback they can act on quickly. Anthropic estimates each review will typically cost $15 to $25, pricing that positions it as a premium tool for enterprise teams that want to ship faster with fewer bugs.