My notes on 'Life 3.0' by Max Tegmark (@tegmark)

The absolute best book on AI!

Here are the ~70 things I learned from the book. 📚

👇 [ THREAD ] 👇
Had our universe never awoken, it would have been completely pointless - merely a gigantic waste of space. Should our universe permanently go back to sleep due to some cosmic calamity or self-inflicted mishap, it would become meaningless.
Perhaps life will spread throughout the cosmos and flourish for billions of years - and perhaps this will be because of decisions we make here on our little planet during our lifetime.
Quantum mechanics forbids anything from being completely boring and uniform
It takes only twenty doublings to make a million.
Thirty to make a billion
Forty to make a trillion
(If you can double your money 37 times, you'll be the richest person on Earth)
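The powers of two behind those numbers, as a quick check (my illustration; the money example assumes you start from $1):

```python
# Doubling arithmetic behind the claims above.
print(f"{2**20:,}")   # 1,048,576         -> ~a million after 20 doublings
print(f"{2**30:,}")   # 1,073,741,824     -> ~a billion after 30
print(f"{2**40:,}")   # 1,099,511,627,776 -> ~a trillion after 40
print(f"{2**37:,}")   # 137,438,953,472   -> $1 doubled 37 times: ~$137 billion
```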
Life = a process that can retain its complexity and replicate.
Evolution rewards life that's complex enough to predict and exploit regularities in its environment, so a more complex environment will lead to the evolution of more complex and intelligent life.
Life 1.0 = Biological stage. Evolves its hardware and software (animals)
Life 2.0 = Cultural stage. Evolves its hardware but designs its software (human civilization)
Life 3.0 = Technological stage. Designs both hardware and software (AI)
For the first time, we might build technology powerful enough to permanently end the scourges of poverty, disease, and war - or to end humanity itself.
We can't say with great confidence that the probability of creating superhuman general AI is zero this century.
The average AI researcher thinks we'll see human-level AI by 2055.
As long as we're not 100% sure AI won't happen this century, it's smart to start safety research now to prepare for the eventuality.

To support a modest investment in AI-safety research, people don't need to be convinced the risks are high, just that they're non-negligible.
Machines can obviously have goals.

The behavior of a heat-seeking missile is best explained as a goal to hit a target.
The real worry isn't malevolence, but competence. An AI may be very good at attaining its goals, so its goals should be aligned with ours.

You don't step on ants out of malice. But if you're building a dam and there's an anthill that will be flooded, too bad for the ants.
There's no agreement on what intelligence is even among intelligent intelligence researchers.
Intelligence = the ability to accomplish complex tasks.
Comparing the intelligence of humans and computers:

Humans win hands-down on breadth, while machines outperform us in a small but ever-growing number of narrow domains.
Intelligent behavior is inexorably linked to goal attainment.
Intelligence is all about information and computation, not flesh, blood or carbon atoms.

There's no fundamental reason why machines can't one day be at least as intelligent as us.
Substrate independence: information can take on a life of its own, independent of its physical medium. Computation is substrate-independent.
Over the past 60 years, hard drives became 100 million times cheaper and memory storage 10 trillion times cheaper.

If you could get such a "99.99999999999% off" discount, you could buy all the real estate in NYC for 10 cents and all the gold that's ever been mined for around a dollar
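A rough check of that discount, using ballpark price tags I'm assuming for illustration (neither figure is from the book):

```python
# "10 trillion times cheaper" as a discount. Assumed price tags:
# NYC real estate ~$1 trillion, all gold ever mined ~$10 trillion.
factor = 10**13                 # 10 trillion times cheaper
print(1 - 1 / factor)           # 0.9999999999999 -> "99.99999999999% off"
print(f"${1e12 / factor:.2f}")  # $0.10 for all NYC real estate
print(f"${1e13 / factor:.2f}")  # $1.00 for all the gold ever mined
```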
Auto-associative memory: retrieving data by specifying something about what is stored, not so much where.
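A minimal sketch of the idea using a toy Hopfield-style network (my illustration, not the book's): corrupt a stored pattern, and the memory recovers it from its content alone.

```python
import numpy as np

# Toy auto-associative memory: store two patterns, corrupt one,
# and recall it by specifying *what* it looks like, not *where* it is.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])

n = patterns.shape[1]
W = np.zeros((n, n))
for p in patterns:                  # Hebbian outer-product learning rule
    W += np.outer(p, p)
np.fill_diagonal(W, 0)              # no self-connections

cue = patterns[0].copy()
cue[:2] *= -1                       # corrupt the first two bits
for _ in range(5):                  # iterate until the state settles
    cue = np.sign(W @ cue)

print(np.array_equal(cue, patterns[0]))   # True: recovered by content
```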
You can implement any well-defined function simply by connecting together enough NAND (Not-And) logic gates.
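A tiny sanity check in Python: NOT, AND, OR and XOR built from nothing but NAND.

```python
# Every gate below bottoms out in NAND alone.
def nand(a, b):
    return 1 - (a & b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", and_(a, b), "OR:", or_(a, b), "XOR:", xor_(a, b))
```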
Once technology gets twice as powerful, it can often be used to design and build technology that's twice as powerful in return, triggering repeated capacity doubling in the spirit of Moore's Law.
Something that occurs almost as regularly as the doubling of our technological power is the claim that Moore's Law is ending.
We're nowhere near the limits of computation imposed by the laws of physics.
Intelligent agents = entities that collect information about their environment and process it to decide how to act back on their environment.
Deep reinforcement learning: getting a positive reward increases your tendency to do something again and vice versa.
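A toy version of that reward loop (a two-armed bandit with made-up payoffs, nothing like DeepMind's actual system): actions that earn reward get chosen more and more often.

```python
import random

values = {"left": 0.0, "right": 0.0}   # running value estimate per action
alpha, epsilon = 0.1, 0.1              # learning rate, exploration rate

def pull(action):
    # Hypothetical environment: "right" pays off 80% of the time, "left" 20%.
    return 1.0 if random.random() < (0.8 if action == "right" else 0.2) else 0.0

for _ in range(1000):
    if random.random() < epsilon:              # occasionally explore...
        action = random.choice(list(values))
    else:                                      # ...otherwise pick the best so far
        action = max(values, key=values.get)
    reward = pull(action)
    values[action] += alpha * (reward - values[action])   # nudge toward reward

print(values)   # "right" settles near 0.8 and dominates the choices
```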
Within a year of beating the World Champion at Go, DeepMind's AlphaGo system had played all twenty top players in the world without losing a single game.
Verification = "Did I build the system right?"
Validation = "Did I build the right system?"
If any military power pushes ahead with AI weapon development, an arms race is inevitable.

The endpoint is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. They're ideal for assassinations, destabilizing nations, subduing populations and ethnic cleansing.
Those who stand to gain most from an arms race aren't superpowers but small rogue states and terrorists. Once mass-produced, small AI-powered killer robots are likely to cost little more than a smartphone.
Kennedy emphasized that hard things are worth doing when success will greatly benefit the future of mankind.
The reason that the Athenian citizens had lives of leisure where they could enjoy democracy, art, and games was that they had slaves to do much of the work.
Technology drives inequality in three ways:

1. By replacing old jobs with ones requiring more skills, rewarding the educated;
2. Since 2000, a growing share of corporate income has gone to those who own the companies rather than to those who work there;
3. The digital economy benefits superstars over everyone else.
Career advice for future kids: go into professions that machines are currently bad at and that seem unlikely to get automated in the near future.
How to identify future-proof jobs:

- Does it require interacting with people and using social intelligence?
- Does it involve creativity and clever solutions?
- Does it require working in an unpredictable environment?
There's evidence that greater equality makes democracy work better: when there's a large well-educated middle class, the electorate is harder to manipulate and it's tougher for people to buy undue influence over the government.
It should be possible to make everyone as happy as if they had their personal dream job; once one breaks free of the constraint that everyone's activities must generate income, the sky's the limit.
3 Steps to take over the world:

1. Build human-level AGI
2. Use AGI to create superintelligence
3. Use superintelligence to take over the world

Since it's hard to dismiss step one as forever impossible, it becomes hard to dismiss the other two.
History reveals a trend towards more coordination over larger distances. New transportation technology makes coordination more valuable and new communication technology makes coordination easier.

Globalization is the latest example of this multi-billion year trend.
The most fundamental driver of decentralization will remain: it's wasteful to coordinate unnecessarily over large distances.
For AI, the laws of physics will place an upper limit on technology, making it unlikely that the highest levels of the hierarchy would be able to micromanage everything.
We won't get an intelligence explosion until the cost of doing human-level work drops below human-level hourly wages.

Once the cost of having computers reprogram themselves becomes cheaper than paying human programmers to do the same, the human programmers can be laid off.
A good system of governance balances four concerns:

- Centralization: trade-off between efficiency and stability;
- Inner threats: guard against both growing power concentration and growing decentralization;
- Outer threats;
- Goal stability.
The Catholic Church is the most successful organization in human history in the sense that it's the only one to have survived for two millennia.
Exterminating 100% of humanity would be infinitely worse than exterminating 99%.

The former would also kill all the descendants who would otherwise have lived in the future, perhaps over billions of years on billions of trillions of planets.
"In the long run we are all dead" - John Maynard Keynes
The annual probability of accidental nuclear war is 0.1% with our current behavior.

That means the probability we'll have one in the next 10,000 years is ~99.995%.
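The arithmetic behind that figure, assuming independent years:

```python
# 0.1% per year, compounded over 10,000 years.
p_annual = 0.001
p_10k = 1 - (1 - p_annual) ** 10_000
print(f"{p_10k:.5%}")    # ~99.99548% -- the ~99.995% quoted above
```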
We’ve dramatically underestimated life’s future potential. We're not limited to century-long life spans marred by disease. Life has the potential to flourish for billions of years, throughout the cosmos.
There is reason to suspect that ambition is a rather generic trait of advanced life. Almost regardless of what it's trying to maximize, it will need resources. It has an incentive to push its technology to its limits, to make the most of the resources it has.
We could meet all our current global energy needs by harvesting the sunlight striking an area smaller than 0.5% of the Sahara desert.
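A back-of-envelope check with rough numbers I'm assuming here (none are from the book); it lands within a factor of two of the claim, which is about the precision such estimates carry:

```python
# Sahara ~9.2 million km^2, day-night average desert sunlight ~250 W/m^2,
# world power demand ~18 TW -- all ballpark assumptions.
sahara_m2 = 9.2e6 * 1e6              # Sahara area in square meters
insolation = 250                     # W/m^2, averaged over day and night
demand = 18e12                       # W, total world primary energy use
striking = 0.005 * sahara_m2 * insolation
print(f"{striking / 1e12:.1f} TW")   # ~11.5 TW strikes 0.5% of the Sahara
print(f"{demand / striking:.1f}x")   # ~1.6x -> the right order of magnitude
```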
"We should expect that within a few thousand years of its entering the stage of industrial development, any intelligent species should be found occupying an artificial biosphere that completely surrounds its parent star."
If your stomach were even 0.001% efficient at converting food mass into energy (E = mc²), you'd only need to eat a single meal for the rest of your life.
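The numbers, assuming a 0.5 kg meal and a typical ~2,000 kcal/day diet over ~80 years (my rough figures):

```python
# A tiny fraction of a meal's mass-energy dwarfs a lifetime of food energy.
c = 3e8                                    # speed of light, m/s
meal = 1e-5 * 0.5 * c**2                   # 0.001% of a 0.5 kg meal's mc^2
lifetime = 2000 * 4184 * 365 * 80          # joules of food over ~80 years
print(f"{meal:.1e} J vs {lifetime:.1e} J") # ~4.5e11 J vs ~2.4e11 J
```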
If we don't improve our technology, the question isn't whether humanity will go extinct, but merely how.
Whenever nature does something, it prefers the optimal way: the laws of classical physics can be reformulated as nature minimizing or maximizing some quantity.
A hallmark of living systems is that they maintain or reduce entropy by increasing the entropy around them. Life maintains or increases complexity by making its environment messier.
If you start with one and double just three hundred times, you get a quantity exceeding the number of particles in our Universe.
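Checking that one:

```python
# 2^300 vs. the particle count of the observable Universe
# (commonly estimated around 10^78 to 10^82).
print(f"{2**300:.2e}")   # ~2.04e90 -- far more than ~1e82 particles
```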
Our cosmos invented life to help it approach heat death faster.
Not only do we humans contain more matter than all other mammals except cows, but the matter in our machines, roads, and buildings appears on track to soon overtake all living matter on Earth.
Almost all goals can be better accomplished with more resources, so we should expect a superintelligence to want resources almost regardless of what ultimate goal it has.
A fast-forward replay of our 13.8-billion-year cosmic history:

1. Matter wants to maximize its dissipation
2. Primitive life tries to maximize its replication
3. Humans pursue not replication but goals related to pleasure, curiosity & compassion
4. Machines are built to help humans pursue their goals
Societies and countries that have survived until the present tend to have ethical principles that were optimized for promoting their survival and flourishing.
The two mysteries of the mind:
1. How the brain processes information
2. Why we have subjective experiences
If consciousness is the way that information feels when it's processed in certain ways, then it must be substrate-independent. It's only the structure of the information processing that matters, not the structure of the matter doing the processing.
Since there can be no meaning without consciousness, it's not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe.
Science gathers knowledge faster than society gathers wisdom.
Mindful optimism is the expectation that good things will happen if you plan carefully and work hard for them.
If you enjoyed this thread, make sure to retweet the first tweet for visibility! https://twitter.com/AnthonyJCampbel/status/1280745028428730369