
munis
Greycel
★
- Joined
- May 27, 2025
- Posts
- 25
Roko's basilisk is a thought experiment which states there could be an otherwise benevolent artificial superintelligence (AI) in the future that would punish anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement.[1][2] It originated in a 2010 post at discussion board LessWrong, a rationalist community web forum.[1][3][4] The thought experiment's name derives from the poster of the article (Roko) and the basilisk, a mythical creature capable of destroying enemies with its stare.
While the theory was initially dismissed as nothing but conjecture or speculation by many LessWrong users, LessWrong co-founder Eliezer Yudkowsky considered it a potential information hazard and banned discussion of the basilisk on the site for five years.[1][5] Reports of panicked users were later dismissed as exaggerated or inconsequential, and the theory itself was dismissed as nonsense, including by Yudkowsky himself.[1][5][6] It is used as an example of principles such as Bayesian probability and implicit religion.[7] It is also regarded as a version of Pascal's wager.