Anthropic quietly expands access to Claude ‘private alpha’ at open-source event in San Francisco

By CloudNerve©

Author Credit:  https://venturebeat.com/ai/anthropic-quietly-expands-access-to-claude-private-alpha-at-open-source-event-in-san-francisco/

Anthropic — one of OpenAI’s chief rivals — quietly expanded access to the “private alpha” version of its highly anticipated chat service, Claude, at a bustling open-source AI meetup attended by more than 5,000 people at the Exploratorium in downtown San Francisco on Friday.

This exclusive rollout offered a select group of attendees the opportunity to be among the first to access the innovative chatbot interface — Claude — that is set to rival ChatGPT. The public rollout of Claude has thus far been muted. Anthropic announced Claude would begin rolling out to the public on March 14 — but it’s unclear exactly how many people currently have access to the new user interface.

Early access to a groundbreaking product

“We had tens of thousands join our waitlist after we introduced our business products in early March, and we’re working to grant them access to Claude,” said an Anthropic spokesperson in an email interview with VentureBeat. Today, anyone can use Claude on the chatbot client Poe, but access to the company’s official Claude chat interface is still limited. (You can sign up for the waitlist here.)

That’s why attending the open-source AI meetup may have been hugely beneficial for a large swath of dedicated users eager to get their hands on the new chat service.

As guests entered the Exploratorium museum on Friday, a nervous energy usually reserved for mainstream concerts took over the crowd. The people in attendance knew they were about to encounter something special: what turned out to be a breakout moment for the open-source AI movement in San Francisco.

As the throng of early arrivals jockeyed for position in the narrow hallway at the museum’s entrance, an unassuming person in casual attire nonchalantly taped a mysterious QR code to the banister above the fray. “Anthropic Claude Access,” read the QR code in small writing, offering no further explanation.

I happened to witness this peculiar scene from a fortuitous vantage point behind the person I have since confirmed was an Anthropic employee. Never one to ignore an enigmatic communiqué — particularly one involving opaque technology and the promise of exclusive access — I promptly scanned the code and registered for “Anthropic Claude Access.” Within a few hours, I received word that I had been granted provisional entrance to Anthropic’s clandestine chatbot, Claude, rumored for months to be one of the most advanced AIs ever constructed.

Screenshot of Claude’s interface. Image source: Michael Nuñez.

It’s a clever tactic employed by Anthropic. Rolling out software to a group of dedicated AI enthusiasts first builds hype without spooking mainstream users. San Franciscans at the event are now among the first to get dibs on this bot everyone’s been talking about. Once Claude is out in the wild, there’s no telling how it might evolve or what may emerge from its artificial mind. The genie is out of the bottle, as they say — but in this case, the genie can think for itself.

“We’re broadly rolling out access to Claude, and we felt like the attendees would find value in using and evaluating our products,” said an Anthropic spokesperson in an interview with VentureBeat. “We’ve given access at a few other meetups as well.”

The promise of Constitutional AI

Anthropic, which is backed by Google parent company Alphabet and was founded by ex-OpenAI researchers, is aiming to develop a groundbreaking technique in artificial intelligence known as constitutional AI, a method for aligning AI systems with human intentions through a principle-based approach. It involves providing a list of rules or principles that serve as a sort of constitution for the AI system, and then training the system to follow them using supervised learning and reinforcement learning techniques.
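
The idea is easier to picture with a concrete, if simplified, example. The Python sketch below illustrates the kind of critique-and-revise loop that a principle-based approach implies: a draft answer is critiqued against each principle in a small “constitution” and then rewritten. The generate function is a placeholder standing in for a real language model, and the principles and function names are illustrative assumptions, not Anthropic’s actual code; in the full method, the revised answers would feed a supervised fine-tuning stage followed by reinforcement learning.

# A minimal, illustrative sketch of a critique-and-revise loop (not Anthropic's code).
CONSTITUTION = [
    "Choose the response that is most helpful, honest and harmless.",
    "Avoid responses that are disrespectful, deceptive or dangerous.",
]

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model; returns a stub string here."""
    return f"[model output for: {prompt[:60]}...]"

def critique_and_revise(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            "Critique the response against this principle.\n"
            f"Principle: {principle}\nPrompt: {user_prompt}\nResponse: {response}"
        )
        response = generate(
            "Rewrite the response so it addresses the critique.\n"
            f"Critique: {critique}\nOriginal response: {response}"
        )
    return response

# The (prompt, revised response) pairs would form a supervised fine-tuning set;
# a reinforcement-learning stage then uses preference labels derived from the
# same constitution to further shape the model's behavior.
if __name__ == "__main__":
    print(critique_and_revise("How should I store user passwords safely?"))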

“The goal of constitutional AI, where an AI system is given a set of ethical and behavioral principles to follow, is to make these systems more helpful, safer and more robust — and also to make it easier to understand what values guide their outputs,” said an Anthropic spokesperson.

“Claude performed well on our safety evaluations, and we are proud of the safety research and work that went into our model. That said, Claude, like all language models, does sometimes hallucinate — that’s an open research problem which we are working on.”

Anthropic applies constitutional AI to various domains, such as natural language processing and computer vision. One of its main projects is Claude, an AI chatbot that uses constitutional AI and is positioned as a rival to OpenAI’s ChatGPT. Claude can respond to questions and engage in conversations while adhering to its principles, such as being truthful, respectful, helpful and harmless.

If ultimately successful, constitutional AI could help realize the benefits of AI while avoiding potential perils, ushering in a new era of AI for the common good. With funding from Dustin Moskovitz and other investors, Anthropic is setting out to pioneer this novel approach to AI safety.
