
The main goal of MIT and FAR AI's research is to create a Go AI that is truly "robust" against any attack: an algorithm that cannot be fooled into game-losing blunders, that would force an attacker to spend significant computing resources to defeat it, and that can draw on additional computing resources of its own to overcome potential exploits when confronted with unfamiliar situations.

The researchers tested the KataGo algorithm's robustness using three methods: fine-tuning the model on examples of unorthodox cyclic strategies, running an iterative training process in which new adversarial models discover novel exploits and new defensive models are trained to patch them, and training a version of the model built on a vision transformer architecture. All three methods failed to produce a truly robust algorithm.
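
To make the second of these methods more concrete, here is a minimal sketch of an iterative attack-and-defend training cycle. It is not the researchers' actual pipeline; the function names and data structures are hypothetical placeholders standing in for the real adversary search and fine-tuning steps.

```python
# Minimal sketch of iterative adversarial training, as described above.
# All functions are hypothetical stand-ins, not KataGo's or FAR AI's actual code.

def train_adversary(defender: dict) -> dict:
    """Placeholder: search for a policy that exploits the current defender."""
    return {"targets_defender_version": defender["version"]}

def fine_tune_defender(defender: dict, adversary: dict) -> dict:
    """Placeholder: fine-tune the defender on games it lost to the adversary."""
    return {"version": defender["version"] + 1}

def adversary_win_rate(adversary: dict, defender: dict) -> float:
    """Placeholder: measure how often a *new* adversary still beats the patched defender."""
    return 0.0  # would come from self-play evaluation in a real pipeline

defender = {"version": 0}
for round_idx in range(10):
    # Attack phase: an adversarial model searches for a novel exploit.
    adversary = train_adversary(defender)
    # Defense phase: the defender is fine-tuned to patch that exploit.
    defender = fine_tune_defender(defender, adversary)
    # Evaluation: check whether fresh adversaries can still find winning exploits.
    rate = adversary_win_rate(adversary, defender)
    print(f"round {round_idx}: adversary win rate vs. patched defender = {rate:.0%}")
```

The loop captures the core dynamic the article describes: each defensive update closes one exploit, and the next round of adversarial search checks whether new exploits remain.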