“Something like over 70% of [Anthropic’s] pull requests are now Claude Code–written,” Krieger told me. As for what these engineers are doing with the extra time, Krieger said they’re orchestrating the Claude codebase and, of course, attending meetings. “It really becomes apparent how much else is in the software engineering role,” he noted.
The pair fiddled with Voss water bottles and fielded an array of questions from the press about an upcoming compute cluster with Amazon (Amodei says “parts of that cluster are already being used for research”) and the displacement of workers due to AI (“I don’t think you can offload your company strategy to something like that,” Krieger said).
We’d been told by spokespeople that we weren’t allowed to ask questions about policy and regulation, but Amodei offered some unprompted insight into his views on a controversial provision in President Trump’s megabill that would ban state-level AI regulation for 10 years: “If you’re driving the car, it’s one thing to say ‘we don’t have to drive with the steering wheel now.’ It’s another thing to say ‘we’ll rip out the steering wheel, and we can’t put it back in for 10 years,’” Amodei said.
What does Amodei think about the most? He says the race to the bottom, where safety measures are cut in order to compete in the AI race.
“The absolute puzzle of running Anthropic is that we somehow have to find a way to do both,” Amodei said, meaning the company has to compete and deploy AI safely. “You might have heard this stereotype that, ‘Oh, the companies that are the safest, they take the longest to do the safety testing. They’re the slowest.’ That isn’t what we found at all.”