According to the Fortune report, many of the former employees interviewed left Amazon because they believed the new Alexa would never be ready, or that by the time it launched it would already have been overtaken by competitors. They felt that the old Alexa was getting in the way of the new one, and that Amazon had not yet figured out how to combine the current Alexa's capabilities with the features touted for its successor. Some employees also expressed frustration with the company's decentralized organizational structure and the challenges it presented in working on Alexa.
The technological limitations of the current Alexa model impact the development of the new, more advanced Alexa in several ways. Firstly, the existing infrastructure and feature set of the old model act as a constraint on the development of the new model. Amazon is struggling to integrate the new large language model (LLM) capabilities with the existing Alexa technology, which is designed to interact with various APIs and third-party devices.
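The report does not describe Amazon's internals, but the shape of the integration problem can be sketched. In a typical tool-calling setup, the LLM must emit a structured call that exactly matches one of many device or service APIs; with thousands of integrations, small deviations in the model's output become reliability failures. A minimal sketch, with all API names and schemas invented for illustration:

```python
# Minimal sketch of LLM-to-API dispatch. All skill names and schemas
# are hypothetical, not Amazon's actual interfaces.
from typing import Any

# Registry of third-party integrations: name -> required parameters and types.
API_SCHEMAS: dict[str, dict[str, type]] = {
    "smart_light.set_brightness": {"device_id": str, "level": int},
    "music.play": {"service": str, "query": str},
}

def dispatch(call: dict[str, Any]) -> str:
    """Validate a structured call proposed by the LLM before executing it.

    With thousands of APIs, any drift between what the model emits and
    what each schema expects becomes a reliability problem at scale.
    """
    schema = API_SCHEMAS.get(call.get("name", ""))
    if schema is None:
        return f"error: unknown API {call.get('name')!r}"
    args = call.get("arguments", {})
    for param, expected in schema.items():
        if param not in args:
            return f"error: missing parameter {param!r}"
        if not isinstance(args[param], expected):
            return f"error: {param!r} should be {expected.__name__}"
    return f"ok: would invoke {call['name']}({args})"

# A well-formed call validates; a type slip (a common LLM failure mode,
# e.g. emitting "70%" where an integer is required) is rejected.
good = dispatch({"name": "smart_light.set_brightness",
                 "arguments": {"device_id": "kitchen", "level": 70}})
bad = dispatch({"name": "smart_light.set_brightness",
                "arguments": {"device_id": "kitchen", "level": "70%"}})
```

The sketch validates only two schemas; the difficulty the report describes is doing this dependably across every smart home device and music service Alexa supports.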
Secondly, the current Alexa model has trained its users to speak in a specific "Alexa language" rather than engaging in natural, conversational interactions. This poses a challenge for the development of the new model, as it needs to be trained on natural language data to improve its understanding and responsiveness.
Lastly, the decentralized organizational structure at Amazon, with multiple teams working on different aspects of Alexa, has led to friction and difficulties in coordinating the development of the new model. This has reportedly caused delays and hindered progress in advancing Alexa's capabilities.
Overall, the technological limitations of the current Alexa model present significant challenges in developing a more advanced and conversational AI assistant. However, Amazon is actively working on addressing these issues and integrating generative AI into Alexa's core components to improve its performance and capabilities.
The Fortune report highlights several challenges Amazon has faced in integrating generative AI into Alexa. These challenges include:
Organizational Dysfunction: Over a dozen former employees revealed stories of bureaucracy, constant strategy shifts, and a decentralized organizational structure that has hindered decision-making and progress.
Technological Challenges: Amazon's generative AI efforts have been hampered by a lack of data and access to the specialized computer chips needed to train and run large language models (LLMs) at the scale of rival efforts at companies like OpenAI.
Privacy Concerns: Amazon has invested $4 billion in AI startup Anthropic, whose LLM model Claude is considered competitive with OpenAI's models. However, privacy concerns have kept Alexa's teams from using Anthropic's Claude model.
Legacy Tech Stack: The old Alexa technology has been getting in the way of integrating the new generative AI capabilities. Amazon has struggled to combine the existing features of Alexa with the new conversational AI capabilities it demonstrated in the fall.
API Integration: The new Alexa LLM has struggled to reliably integrate APIs at scale, which is crucial for Alexa to interact with third-party smart home devices and music services.
Training for Natural Language: While Amazon has millions of devices in the wild, its customers have trained themselves to speak in "Alexa language" and don't interact conversationally with the device, making it difficult to train the LLM to understand natural language.
Resource Allocation: Amazon's main focus after ChatGPT launched was to roll out Bedrock, a new AWS cloud computing service that allowed customers to build generative AI chatbots. As a result, Alexa took a backseat to other generative AI priorities.
These challenges have collectively slowed down Amazon's progress in integrating generative AI into Alexa, raising concerns that it might be falling behind its competitors in the AI race.