Singularity, Super-Intelligence and Nonsense That Distorts Reality
Prometheus must invent the fire department!
This is by far the best takedown of the apocalyptic superintelligence-explosion argument I’ve ever read. It covers just about everything I thought of, and more. It’s from a decade ago, it hasn’t aged, and it’s pretty funny.
The written version is here:
https://idlewords.com/talks/superintelligence.htm
And the video is here:
This came up because I found myself re-reading AI 2027, which has been getting some buzz again.
It’s a nicely designed website with a sub-Michael Crichton apocalyptic techno-thriller attached: Pascal’s wager applied to AI funding.
According to AI 2027, the optimistic outcome of AI progress is Plato’s Republic; the pessimistic one is that we’re all dead. Do you want a utopian surveillance-capitalist technocracy run by an AI philosopher king, or a solar system optimized for science research with no one around to read the journals? Only you can decide (by giving us money). In the best Kobayashi Maru style, I’m not picking either, because it’s a bit silly.
Nevertheless it is amusing that a bunch of very smart people from San Francisco convinced themselves the apocalypse is nigh, and that the only people who can save the world are a bunch of very smart people from San Francisco. While it’s not beyond the laws of physics for AI in 2027 to play out like AI 2027, it’s also not beyond the laws of physics for my coffee mug to spontaneously quantum tunnel through my desk and wreck the carpet.
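For a sense of scale, here’s a back-of-envelope WKB tunneling estimate for the mug. Every number below is my own assumption, all of them deliberately generous to the mug:

```python
import math

# Rough WKB estimate of a coffee mug tunneling through a desk.
# All inputs are assumed, and chosen to favour the mug:
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m = 0.3                 # mug mass, kg
V = 1.0                 # effective barrier height, J (real desks are far higher)
d = 0.02                # desk thickness, m

# WKB transmission probability: T ~ exp(-2*d*sqrt(2*m*V)/hbar)
exponent = 2 * d * math.sqrt(2 * m * V) / hbar
print(f"T ~ exp(-{exponent:.2e}) ~ 10^(-{exponent / math.log(10):.1e})")
# -> T ~ exp(-2.94e+32) ~ 10^(-1.3e+32)
```

Even with that absurdly low one-joule barrier, the probability comes out around 10^(-10^32): not a carpet-cleaning budget item.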
Here are four dystopian problems that I’m more concerned about:
Users/suppliers overestimate the systems’ intelligence, hook unreliable tools into critical infrastructure, blow things up, and hurt themselves and others. Interpretability of the technology (and user control) is key here.
The use of propaganda and misinformation at scale - we urgently need better ways of filtering junk, verifying sources and confirming facts about what’s happening in the world. These have to be non-technical and trivially easy to use - built into the system. The problem might also be the solution.
More power and money flow to those who can afford to use AI-enhanced strategies to further concentrate money and power. Extrapolating that leads to the old Blade Runner/Neuromancer/Snow Crash cyberpunk authoritarian surveillance capitalism. This is a hard co-ordination problem, because misaligned incentives seem to be how chunks of the world work by default at the moment.
Not enough security at AI labs, open-source weights, and poor guardrails in general give more lunatics access to very powerful tools and deadly weapons. For instance, a Ted Kaczynski upgrading from letter bombs to AI-piloted drone bombs and bioweapons. This is the same hard problem that generally comes with new, powerful technologies.
The good news is that, given the history of technology adoption, implementation is generally harder and slower than the AI 2027 scenarios imply.