Anthropic, an AI company founded by researchers who previously worked at ChatGPT creator OpenAI, is now releasing a ChatGPT competitor that it claims is both “safer” and “less harmful” than similar tools.


Anthropic calls its AI Claude and writes:

Claude can help with use cases including summarization, search, creative and collaborative writing, Q&A, coding, and more. Early customers report that Claude is much less likely to produce harmful outputs, easier to converse with, and more steerable – so you can get your desired output with less effort. Claude can also take direction on personality, tone, and behavior.

Like ChatGPT, Claude is available both as a web app and via an API for customers who want to integrate Claude directly into their own systems. There are also two versions of Claude, priced differently depending on how much capability customers need.

We’re offering two versions of Claude today: Claude and Claude Instant. Claude is a state-of-the-art high-performance model, while Claude Instant is a lighter, less expensive, and much faster option. We plan to introduce even more updates in the coming weeks. As we develop these systems, we’ll continually work to make them more helpful, honest, and harmless as we learn more from our safety research and our deployments.


Claude is currently available in early access; if you’re interested, you can apply for access here. Below is a short demo in which Claude drafts a job description for a job advertisement.
