
The robot demonstrated its inferential abilities when a researcher asked whether there was more of their favorite beverage in the refrigerator. The robot noticed empty Coke cans near the researcher, inferred that Coke was the favorite, rolled to the refrigerator, checked for Coke cans, and reported back.

The DeepMind robot can give context-aware guided tours, follow natural language commands, interpret visual instructions, reason about what it perceives, and navigate complex environments by combining AI with multimodal input. It also responds to written commands, drawings, and gestures, achieving an 86-90% success rate on complex tasks that require reasoning over multimodal user instructions.

The robot uses Gemini 1.5 Pro to interpret and respond to requests: it listens to a person, parses the request, and translates it into behavior. The model was trained to understand the office workspace layout and can perform inferential processing, allowing the robot to make educated guesses and act on them.
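To make the listen-parse-act loop concrete, here is a minimal sketch of how such a pipeline could be wired together. This is not DeepMind's implementation: the function names, the stubbed speech-to-text step, and the stubbed multimodal model call are all hypothetical placeholders standing in for real services.

```python
# Hypothetical sketch of a listen -> parse -> act loop for an office robot.
# All names and stubs are illustrative, not DeepMind's API.

from dataclasses import dataclass


@dataclass
class RobotAction:
    kind: str    # e.g. "navigate", "inspect", "report"
    target: str  # e.g. "refrigerator"


def transcribe(audio: bytes) -> str:
    """Placeholder speech-to-text step; a real robot would call an ASR service here."""
    return "Is there more Coke in the fridge?"


def query_multimodal_model(prompt: str, image: bytes) -> str:
    """Placeholder for a multimodal model call (e.g. a Gemini-class model via an API)."""
    return "navigate: refrigerator\ninspect: Coke cans\nreport: user"


def handle_request(audio: bytes, camera_frame: bytes, floor_plan: str) -> list[RobotAction]:
    """Turn a spoken request plus the current camera view into a short action plan."""
    text = transcribe(audio)
    prompt = (
        "You control an office robot. Known layout:\n"
        f"{floor_plan}\n"
        f"User request: {text}\n"
        "Reply with one action per line in the form `kind: target`."
    )
    reply = query_multimodal_model(prompt, camera_frame)
    actions = []
    for line in reply.splitlines():
        kind, _, target = line.partition(":")
        if kind.strip() and target.strip():
            actions.append(RobotAction(kind.strip(), target.strip()))
    return actions


if __name__ == "__main__":
    plan = handle_request(b"", b"", "kitchen, lab, meeting rooms")
    for step in plan:
        print(step)
```

The key design point the article describes is that the heavy lifting, both understanding the request and reasoning about the office layout, happens in the multimodal model, while the robot only needs to execute a short list of concrete actions.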