AI-led machines are designed with a primary focus on incorporating advanced AI technologies, often prioritizing technical capability over human factors. In contrast, integrity-led machines take a balanced approach that combines AI with human intelligence, emphasizing ethical considerations, human agency, and the capacity to respond to unforeseen circumstances. The former focuses on what can be done; the latter prioritizes what should be done.
Diane's incident highlights key limitations of Tesla's AI-First mode: a heavy reliance on AI and electronic systems without sufficient consideration of human factors, a lack of intuitive manual-intervention capabilities, and inadequate support for human psychology and stress responses in emergency situations.
In a low-maturity artificial-integrity scenario, the AI system attempts to assist during a power failure by providing real-time notifications and guidance. However, its ability to proactively support the situation and account for human factors is limited: the system relies heavily on external devices and lacks intuitive manual-override design elements.
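To make that gap concrete, the following is a minimal sketch, not actual vehicle software: the names (`LowMaturityAssistant`, `Notification`, the phone-app channel) are hypothetical, assumed here only for illustration. It shows what notification-only assistance looks like in code: guidance is pushed to an external device, while a manual-override path simply does not exist in the design.

```python
from dataclasses import dataclass
from enum import Enum, auto


class VehicleState(Enum):
    NORMAL = auto()
    POWER_FAILURE = auto()


@dataclass
class Notification:
    channel: str  # e.g. a paired phone app, i.e. an *external* device
    message: str


class LowMaturityAssistant:
    """Hypothetical sketch of a low-maturity artificial-integrity response:
    the system can only inform the occupant, not hand control back."""

    def on_state_change(self, state: VehicleState) -> list[Notification]:
        if state is VehicleState.POWER_FAILURE:
            # Guidance goes to an external device; if the phone is
            # unreachable, the occupant receives nothing at all.
            return [
                Notification("phone_app", "Power failure detected."),
                Notification("phone_app", "Locate the emergency door release."),
            ]
        return []

    def manual_override(self) -> None:
        # The gap this scenario illustrates: no intuitive in-cabin
        # mechanical fallback was designed into the system.
        raise NotImplementedError("No manual override path exists")


if __name__ == "__main__":
    assistant = LowMaturityAssistant()
    for note in assistant.on_state_change(VehicleState.POWER_FAILURE):
        print(f"[{note.channel}] {note.message}")
```

The design choice the sketch exposes is that assistance is purely informational and routed through a device the occupant may not be able to reach, which is precisely the dependence on external devices and the missing manual override described above.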