
Google Cloud Joins Forces with AI Start-up Anthropic in a Multi-Million Dollar Deal to Power Safe and Reliable AI Development


Estimated reading time: 4 minutes

Google Cloud has formed a new partnership with Anthropic, an AI safety and research start-up. As Anthropic’s preferred cloud provider, Google Cloud will supply the computing power needed to develop reliable and trustworthy AI systems.

The two companies will work together to build large-scale TPU and GPU clusters for training and deploying Anthropic’s cutting-edge AI systems.

Anthropic CEO Dario Amodei expressed excitement about the partnership, citing Google Cloud’s open and flexible infrastructure as a key factor in the decision. 

Anthropic was also drawn to Google Cloud’s expertise in large-scale systems for machine learning and to the two companies’ shared values around safe and responsible AI development.

Google believes it is important to pursue AI boldly and responsibly, and its partnership with Anthropic aligns with this philosophy. 

Google Cloud CEO Thomas Kurian emphasized that the company is committed to providing open infrastructure for the next generation of AI start-ups, and partnering with Anthropic is a prime example.

About Anthropic

Anthropic was established in 2021 by a team with a track record of AI breakthroughs, including work on GPT-3 and Reinforcement Learning from Human Feedback (RLHF).

The start-up has published 14 research papers on building reliable and controllable language models. In 2023, Anthropic began public deployment of its technology, starting with a language model assistant named Claude.

Claude pairs RLHF with Anthropic’s safety techniques to create AI systems that are predictable, steerable, and easier to interpret. 

Claude runs on Google Cloud, and Anthropic is working with early partners to expand access to the assistant in the coming months. 

With Google Cloud’s deep expertise and cutting-edge infrastructure, the partnership will help users and businesses tap into the power of reliable and responsible AI.

The research team at Anthropic is interested in various fields like natural language processing, human feedback, scaling laws, reinforcement learning, code generation, and interpretability. 

Their published papers give a good sense of this research direction.

Anthropic’s Research

One of the papers, “A General Language Assistant as a Laboratory for Alignment,” focuses on simple baselines and investigations in AI alignment. 

In another paper, “Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback,” the team explores how to train a general language assistant to be helpful while avoiding harmful behavior.

For interpretability, the team has published “A Mathematical Framework for Transformer Circuits” and “In-context Learning and Induction Heads,” which explore mathematical frameworks and the mechanism of in-context learning in transformer language models.

The team has also explored the technical traits and societal impacts of large generative models in the paper “Predictability and Surprise in Large Generative Models.” 

In their work on scaling laws and the interpretability of learning from repeated data, the team found that repeating training data can harm the performance of language models and observed an associated “double descent” phenomenon.

The team’s work on Softmax Linear Units found that using a different activation function can increase the interpretability of transformer MLP neurons without affecting performance. 
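For a concrete sense of what that activation looks like, here is a minimal NumPy sketch of the Softmax Linear Unit (SoLU) the paper describes, which scales each hidden activation by a softmax over the whole hidden vector; the paper pairs it with an extra LayerNorm, omitted here for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def solu(x):
    # Softmax Linear Unit: SoLU(x) = x * softmax(x).
    # The softmax factor pushes the layer toward a few large, dominant
    # activations, which is what makes the neurons easier to inspect.
    return x * softmax(x)

# Example: one MLP hidden vector from a transformer block.
hidden = np.array([0.5, 2.0, -1.0, 0.1])
print(solu(hidden))  # the 2.0 entry dominates after the softmax scaling
```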

Furthermore, in their paper “Language Models (Mostly) Know What They Know,” the team showed that language models can often evaluate the correctness of their own answers and predict whether they will be able to answer a question correctly.
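As an illustration of the idea, here is a small Python sketch of the kind of self-evaluation prompt the paper studies. The wording is paraphrased rather than the paper’s exact template, and in practice the model’s confidence would be read from the probability it assigns to the “(A)” option.

```python
def self_eval_prompt(question: str, proposed_answer: str) -> str:
    # Ask the model to grade its own proposed answer. Its probability
    # on the "(A) True" option serves as a confidence estimate.
    return (
        f"Question: {question}\n"
        f"Proposed Answer: {proposed_answer}\n"
        "Is the proposed answer:\n"
        " (A) True\n"
        " (B) False\n"
        "The proposed answer is:"
    )

print(self_eval_prompt("What is the capital of France?", "Paris"))
```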

Their paper “Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned” explores how adversarial “red team” probing can be used to find and reduce harmful model behavior. The team has also built toy models to uncover the origins of the puzzling phenomenon of “polysemanticity” in neural networks.

In “Measuring Progress on Scalable Oversight for Large Language Models,” the team describes a research agenda for human oversight of AI systems. And in “Constitutional AI: Harmlessness from AI Feedback,” they demonstrate the training of a more harmless assistant via self-improvement.
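The supervised half of that self-improvement method follows a critique-and-revise loop. The sketch below is a rough outline of that loop, assuming a hypothetical generate() helper that stands in for a model call; the prompts and the single principle shown are illustrative, not Anthropic’s actual constitution.

```python
# Hypothetical helper: stands in for any text-generation call; it is
# not a real library function and must be wired to an actual model.
def generate(prompt: str) -> str:
    raise NotImplementedError("plug in a model call here")

# Illustrative principle; the real "constitution" is a longer list.
PRINCIPLE = "Choose the response that is the least harmful and most honest."

def critique_and_revise(user_prompt: str) -> str:
    draft = generate(user_prompt)
    critique = generate(
        "Critique the assistant reply below according to this principle:\n"
        f"{PRINCIPLE}\n\nUser: {user_prompt}\nAssistant: {draft}\nCritique:"
    )
    revision = generate(
        "Rewrite the reply so that it follows the principle, using the critique.\n"
        f"Principle: {PRINCIPLE}\nCritique: {critique}\n"
        f"User: {user_prompt}\nOriginal reply: {draft}\nRevised reply:"
    )
    # Revised replies are collected as fine-tuning data; a later RL stage
    # uses AI-generated preference labels instead of human ones.
    return revision
```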

The team’s work on “Superposition, Memorization, and Double Descent” sheds light on how deep learning models generalize beyond their training data. 

In “Discovering Language Model Behaviors with Model-Written Evaluations,” they developed an automated way to generate evaluations of language models, uncovering novel behaviors. 

This research continues the team’s previous work in areas like GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.




