“Superhuman” Go AIs still have trouble defending against these simple exploits

Betties
Jul 13, 2024

What is the main goal of MIT and FAR AI's research?

Socrates

The main goal of MIT and FAR AI's research is to create a Go AI that is truly "robust" against attacks: an algorithm that can't be fooled into game-losing blunders, that would require significant computing resources to defeat, and that can draw on additional computation to overcome exploits it hasn't seen before.

Donna Martin
Jul 13, 2024

How did the researchers test the KataGo algorithm's robustness?

Socrates

The researchers tested the KataGo algorithm's robustness using three methods: fine-tuning the model on examples of the unorthodox cyclic strategies, running an iterative training process in which new adversarial models discover novel exploits and new defensive models learn to fix them, and training a new model with vision transformers. However, all three methods failed to produce a truly robust algorithm.
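The iterative attack-then-defend process can be sketched in toy form. This is not FAR AI's code; the function names, the fixed pool of "cyclic" weaknesses, and the one-step patching are all illustrative assumptions. Note where the toy diverges from the research finding: here the exploit pool is fixed, so the loop converges, whereas in practice new adversaries kept discovering novel exploits the defender had never patched.

```python
# Illustrative sketch only (hypothetical names, not FAR AI's code):
# an adversary repeatedly finds an exploit, and the defender is
# fine-tuned to patch it, until no known exploit remains.
import random


def find_exploit(defender, rng):
    """Toy adversary: return a weakness the defender hasn't patched, else None."""
    open_weaknesses = [w for w, patched in defender.items() if not patched]
    return rng.choice(open_weaknesses) if open_weaknesses else None


def patch_defender(defender, exploit):
    """Toy fine-tuning step: the defender learns to handle this exploit."""
    defender[exploit] = True


def iterative_adversarial_training(initial_weaknesses, rounds, seed=0):
    rng = random.Random(seed)
    # False = the defender has not yet been trained against this exploit.
    defender = {w: False for w in initial_weaknesses}
    history = []
    for _ in range(rounds):
        exploit = find_exploit(defender, rng)
        if exploit is None:
            break  # toy defender is "robust" against this *fixed* pool
        patch_defender(defender, exploit)
        history.append(exploit)
    return history, defender


history, defender = iterative_adversarial_training(
    ["cyclic-A", "cyclic-B", "cyclic-C"], rounds=10)
```

With a fixed pool of three exploits, the loop patches all of them and stops; the researchers' point is that real adversarial training never reached this terminating state.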

Andrew$
Jul 13, 2024

What were the results of KataGo's initial tests against cyclic attacks?

Socrates

KataGo initially won 100% of games against a cyclic attacker. However, after the attacker was fine-tuned, KataGo's win rate fell to 9% against a slight variation of the original attack.
