There’s a lot out there, and it can be hard to muddle through. Rather than inundate you with everything, we have curated what we believe are the most relevant links to get up to speed. If you think our sources are lacking or biased, please let us know. With additional funding, we plan to compile a more comprehensive, searchable database that has a greater range of information.
In this groundbreaking work, Dr. Bostrom systematically breaks down the risks associated with machine superintelligence, supported by strong logical arguments. For those who can stomach a scientific, philosophical approach, this is a must-read.
This is a thorough exploration of both the benefits and risks of superintelligence, covering some new ground and framing the issues with illustrative analogies and narratives. This book is accessible to a fairly broad audience.
This is a fascinating forensic examination of prior catastrophes in human history and how Cassandras (essentially whistleblowers) were ignored. The authors also examine future existential threats, among which AI features prominently.
Through an immersive, action-packed science fiction narrative, Detonation provides an entertaining yet sobering allegory of the risks associated with AGI. It is written by our founder, and every dime of profit from book sales will be donated to Ethagi. Read a good book for a good cause!
If you have only 15 minutes, this is where to begin. One of the most succinct and thought-provoking talks on the subject.
Another excellent place to begin, with arguments that are more academic in nature but just as impactful.
Arguably the leading organization in researching and exploring existential risks to humanity. AI features prominently.
An Oxford-based research group focused on big-picture issues, including the prospect of advanced artificial intelligence.
An organization focused primarily on hard science and math problems relating to the risks of superintelligence.
Focused on developing and democratizing safe forms of AI.
An important letter, signed by many key members of the global community, but one that has thus far been essentially ignored. Ethagi also believes it is far too limited: regulatory efforts should not focus only on autonomous weapons. A superintelligent machine will not need autonomous weapons to get what it wants.
As far as we know, the above documents represent the predominant output of policy development work on this subject to date at the federal level. The discussion leads policy makers to focus almost exclusively on job dislocation and essentially ignores all other risks. In Ethagi’s opinion, this is a myopic position that fails to grasp the most significant (and currently intractable) threat to humanity: the control / goal-alignment problem.
Recent House / Senate actions on AI appear to primarily address competitiveness and jobs. One will produce a report 540 days after enactment. We need to do much more on AI safety, and with greater urgency, to ensure our future is protected.
This is one of the most advanced academic papers offering policy considerations. Few specific suggestions are offered, but it does frame the discussion accurately enough to “get the ball rolling”. Unfortunately, as evidenced by the meek state of US policy development, there is a significant divide between the points offered in this paper and policy makers’ understanding of the concerns that need to be addressed.