
Don't worry! We're still updating this repo. We're focusing on quality, which means links won't be added all the time, only when they are genuinely warranted.

Information Resources

There’s a lot out there, and it can be hard to muddle through. Rather than inundate you with everything, we have curated what we believe are the most relevant links to get up to speed. If you think our sources are lacking or biased, please let us know. With additional funding, we plan to compile a more comprehensive, searchable database that has a greater range of information.

Books

Superintelligence: Paths, Dangers, Strategies
By Nick Bostrom

In this groundbreaking work, Dr. Bostrom systematically breaks down the risks associated with machine superintelligence, building strong logical arguments. For those who can stomach a scientific, philosophical approach, this is a must-read.

Life 3.0: Being Human in the Age of Artificial Intelligence
By Max Tegmark

This is a thorough exploration of both the benefits and risks of superintelligence, covering some new ground and framing the issues with illustrative analogies and narratives. This book is accessible to a fairly broad audience.

Warnings: Finding Cassandras to Stop Catastrophes
By Richard A. Clarke and R.P. Eddy

This is a fascinating forensic examination of past catastrophes in human history and how the Cassandras (essentially whistleblowers) who warned of them were ignored. The authors also examine future existential threats, among which AI features prominently.

Detonation
By Erik A. Otto

Through an immersive, action-packed science fiction narrative, Detonation provides an entertaining yet sobering allegory of the risks associated with AGI. It is written by our founder, and every dime of profit from book sales will be donated to Ethagi. Read a good book for a good cause!

YouTube Presentations

Sam Harris TED Talk: Can We Build AI Without Losing Control Over It?

If you have only 15 minutes, this is where to begin. One of the most succinct and thought-provoking talks on the subject.

Nick Bostrom TED Talk: What Happens When Our Computers Get Smarter Than We Are?

Another excellent place to begin, with arguments that are more academic in nature but just as impactful.

Important Organizations

Future of Life Institute

Arguably the leading organization in researching and exploring existential risks to humanity. AI features prominently.

Future of Humanity Institute

An Oxford-based research group focused on big-picture issues, including the prospect of advanced artificial intelligence.

Machine Intelligence Research Institute

An organization focused primarily on hard science and math problems relating to the risks of superintelligence.

OpenAI

Focused on developing and democratizing safe forms of AI.

Relevant Open Letters and Policy Documents

An Open Letter to the United Nations About the Risk of Autonomous Weapons

An important letter, signed by many key members of the global community, but one that has thus far been largely ignored. Ethagi also believes it is far too limiting to focus regulatory efforts on autonomous weapons alone: a superintelligent machine would not need autonomous weapons to get what it wants.

United States Government Reports on AI Development

▪ Preparing for the Future of Artificial Intelligence (Oct 2016)
▪ Artificial Intelligence, Automation and the Economy (Dec 2016)

As far as we know, the above documents represent the predominant output of federal policy development on this subject to date. Their discussion leads policy makers to focus almost exclusively on job dislocation, essentially ignoring all other risks. In Ethagi’s opinion, this is a myopic position that fails to grasp the most significant (and currently intractable) threat to humanity: the control / goal-alignment problem.

Recent House and Senate Actions

Recent House and Senate actions on AI appear to primarily address competitiveness and jobs. One will produce a report 540 days after enactment. We need to do much more on AI safety, and with greater urgency, to ensure our future is protected.

Policy Desiderata in the Development of Superintelligent AI

This is one of the most advanced academic papers offering policy considerations. It makes few specific suggestions, but it frames the discussion well enough to get the ball rolling. Unfortunately, as evidenced by the tepid state of US policy development, there remains a significant divide between the points raised in this paper and policy makers’ understanding of the concerns that need to be addressed.