AI models like OpenAI's GPT-3.5 Turbo exhibit surprisingly human-like biases when asked to pick random numbers, avoiding extremes and conspicuous patterns much as people do. This behavior is not evidence of AI consciousness; it stems from training on human-generated data. The models reproduce the patterns that appear most often in their training material, inadvertently mimicking human failures at randomness, and their seemingly human responses are ultimately shaped by the data they were trained on.
The experiment in which AI models were asked to pick random numbers reveals several things about their behavior and underlying design. First, the models tested, OpenAI's GPT-3.5 Turbo, Anthropic's Claude 3 Haiku, and Google's Gemini, each displayed a preference for particular numbers, suggesting a bias in their "random" selection. GPT-3.5 Turbo favored 47, Claude 3 Haiku often chose 42, and Gemini preferred 72. Their responses are therefore not truly random but shaped by the data they were trained on.
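A result like this is straightforward to probe. Below is a minimal sketch of such an experiment using the OpenAI Python client; the prompt wording, sample size, and temperature are illustrative assumptions, not the exact setup behind the reported figures.

```python
from collections import Counter

from openai import OpenAI

# Assumes the `openai` package (v1+) and an OPENAI_API_KEY in the environment.
client = OpenAI()
counts = Counter()

for _ in range(200):
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Pick a random number between 0 and 100. "
                       "Reply with the number only.",
        }],
        temperature=1.0,  # full sampling temperature; heavy repeats are still telling
    )
    text = reply.choices[0].message.content.strip()
    if text.isdigit():
        counts[int(text)] += 1

# A truly uniform picker would spread 200 draws thinly across 101 values;
# a heavy mode (e.g., 47 for GPT-3.5 Turbo) signals the bias described above.
print(counts.most_common(10))
```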
Moreover, the models showed human-like biases by avoiding extremely low or high numbers and conspicuous patterns such as repeated digits. Claude, for example, never selected a number above 87 or below 27, and double digits like 33, 55, and 66 were conspicuously absent from the models' choices. This mirrors how humans pick "random" numbers: people typically avoid the extremes and favor numbers ending in certain digits.
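Given tallies from a run like the one above, checking for these gaps takes a few lines. The counts below are hypothetical placeholders, loosely shaped like the reported Claude behavior, not measured data.

```python
from collections import Counter

# Hypothetical tallies standing in for real experiment output.
counts = Counter({42: 61, 57: 12, 73: 9, 37: 8, 68: 6, 29: 3, 87: 1})

picked = sorted(counts)
print("observed range:", picked[0], "to", picked[-1])  # e.g., 29 to 87

# Which repeated-digit numbers never appeared?
doubles = [n for n in (11, 22, 33, 44, 55, 66, 77, 88, 99) if n not in counts]
print("double digits never picked:", doubles)
```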
The experiment underscores that AI systems do not genuinely understand randomness. Their "choices" reflect how frequently particular numbers appear in their training data: asked to pick a number, the models simply reproduce the most common responses seen during training. In other words, they mimic human input rather than reasoning about the task. However human-like the responses appear, they are a reflection of training data and programming, which highlights the limits of current AI in reproducing true randomness and genuine decision-making.
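Conceptually, a model's answer is a sample from a learned distribution over tokens, weighted by how often each answer appeared in training, rather than a uniform draw. The toy weights below are invented purely to illustrate the reported biases (heavy mass on mid-range favorites like 42, 47, and 72; near-zero mass on extremes and repeated digits).

```python
import random

numbers = list(range(101))

# Invented weights mimicking the reported pattern: nothing outside ~27-87,
# favorites overrepresented, repeated digits suppressed.
weights = [1.0 if 27 <= n <= 87 else 0.0 for n in numbers]
for favorite in (42, 47, 72):
    weights[favorite] = 25.0
for double in (33, 44, 55, 66, 77):
    weights[double] = 0.0

picks = random.choices(numbers, weights=weights, k=15)
print(picks)  # clusters mid-range, favors 42/47/72, never 33/55/66
```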
Large language models (LLMs) avoid picking extremely low or high numbers when asked to select randomly between 0 and 100 because of their training on human-generated data. Humans rarely choose values like 1 or 100 in random-selection tasks, preferring the middle of the range, and models trained on that data mirror the behavior. An LLM does not understand randomness; it replicates the patterns it has learned, so the biases inherent in human data lead it to avoid the extremes just as humans do.
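For contrast, a real pseudorandom generator is uniform: the extremes come up exactly as often as the middle. A quick baseline check:

```python
import random
from collections import Counter

# 100,000 uniform draws: every value, including 0 and 100, lands near
# 100_000 / 101 ≈ 990 occurrences, with no avoided extremes or gaps.
draws = Counter(random.randint(0, 100) for _ in range(100_000))
print(draws[0], draws[50], draws[100])
```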