The individuals requested anonymity because details of the project had not been publicly disclosed.
Amazon declined to comment. The Information first reported the project's name on Tuesday.
The initiative is led by Rohit Prasad, the former head of Alexa, who now reports directly to CEO Andy Jassy. As Amazon's head scientist for artificial general intelligence (AGI), Prasad has brought in researchers who previously worked on Alexa AI and on the Amazon science team to focus on model training, unifying AI efforts across the company under dedicated resources.
Amazon has previously trained smaller models such as Titan and has partnered with AI model startups including Anthropic and AI21 Labs, offering their models to Amazon Web Services (AWS) users.
According to individuals familiar with the matter, Amazon believes that building its own models could make its AWS offerings more attractive, as enterprise customers on AWS want access to top-performing models. There is, however, no set timetable for releasing the new model.
Large language models (LLMs) are the underlying technology behind AI tools that learn from vast datasets to generate human-like responses.
Training larger AI models is more expensive because of the substantial computing power required. On an earnings call in April, Amazon executives said the company would increase investment in LLMs and generative AI while cutting spending on fulfillment and transportation in its retail business.
Reported by Krystal Hu in San Francisco. Edited by Gerry Doyle.