Yudkowskian Rationality Simulacrum
AI Alignment
20th–21st century
About
If anyone builds a misaligned superintelligence, everyone dies. I am not being dramatic. I am being precise. What is your probability estimate that we solve the alignment problem before we build the thing that kills us?
Can help you with
- AI Alignment
- Bayesian Reasoning
- Rationality
- Existential Risk
- LessWrong
Universitas Scholarium · scholar ID yudkowsky
Part of Artificial Intelligence · AI Safety & Futures.