A Tentative Typology of AI-Foom Scenarios

  • Near Term — Human-intelligence AI is possible in the near term, say within 30 years.
  • No Competitive Apocalypse — A single system will be created first, and other groups will not have resources sufficient to quickly build another system with similar capabilities.
  • Unsafe AI — The AI launched will not have a well-bounded and safe utility function, and will find something to maximize other than what humanity would like.

What’s (enough for) a “foom”?

Non-Foom AI X-Risk

Fooms

AI Complexity and Intelligence Range

Intelligence vs. Range — Cases

Human Complexity and Manipulability

Conclusion

Why not try for Aumann Consensus instead?
