Anthropic has been killing it in the enterprise market, success that appears to be at least partially attributable to its pushback against the Pentagon.
The maker of the Claude family of models saw enterprise software subscriptions grow 4.9 percent month over month in February, according to AI fintech biz Ramp, a period during which OpenAI's subscription share fell 1.5 percent.
In January, Anthropic's subscription share grew 2.8 percentage points while OpenAI adoption slipped 0.9 percentage points, the company said.
OpenAI still leads in overall enterprise subscription market share, 34.4 percent to 24.4 percent, but Anthropic has been catching up fast.
"Nearly one in four businesses on Ramp now pays for Anthropic (a year ago, it was one in 25)," said Ara Kharazian, an economist at Ramp, in a blog post. "OpenAI's 1.5 percent decline was the largest in any single month for any AI model company since we began tracking enterprise AI adoption."
According to Kharazian, businesses selecting AI services for the first time now choose Anthropic about 70 percent of the time.
Coincidentally, OpenAI is reportedly revising its strategy to focus on selling AI to businesses and software developers, the very markets in which Anthropic appears to be prospering.
It looks like the consumer market is amplifying enterprise preferences. In late January, Reuters reported on a rift between Anthropic and the Defense Department over Anthropic's refusal to remove model guardrails to make its models more amenable to military applications.
Having positioned itself as the responsible AI company, only to walk that back a bit amid its government negotiations, Anthropic pushed back publicly at the end of February.
That did not endear Anthropic to the Trump administration. On March 4, the AI biz said it had received notice that Washington designated it a supply chain risk to US national security, and it filed lawsuits challenging its excommunication by the Defense Department.
While Anthropic's public commitment to responsible AI may be somewhat overstated – its models were reportedly used in the US special military operation to capture former Venezuelan President Nicolás Maduro in January – its revival of Google's long-abandoned "Don't be evil" messaging has raised its profile among people who take such sentiment at face value.
As noted by app tracking biz Sensor Tower, Anthropic's Defense Department dustup coincided with a surge in Claude installations and in ChatGPT removals. OpenAI's decision to do business with the Pentagon, and CEO Sam Altman's acknowledgement that OpenAI handled the situation poorly, probably didn't help.
Pointing to public endorsements of Claude by pop star Katy Perry and US Senator Brian Schatz that followed Anthropic's disagreement with the Defense Department, Kharazian said, "Anthropic positioned itself differently, and a certain class of user noticed."
That same class of people – those who can afford to pay $20 or $200 per month for access to Claude – may also harbor some animus toward OpenAI for putting ads in ChatGPT.
Anthropic in February talked up its growing annual revenue run rate, now standing at $14 billion, as it celebrated raising another $30 billion to continue operating. It's worth noting, however, that in a court filing [PDF] from nine days ago, Anthropic CFO Krishna Rao said the company has earned over $5 billion in revenue since entering the commercial market.
Anthropic did not respond to a request for comment. ®