Life 3.0

MAX TEGMARK
2017

The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. (Irving J. Good, 1965)

Welcome to the Most Important Conversation of Our Time
  • Our cosmic awakening transformed our Universe from a mindless zombie with no self-awareness into a living ecosystem harboring self-reflection, beauty and hope—and the pursuit of goals, meaning and purpose.
  • In the beginning, there was light. After our Big Bang, space expanded rapidly. As our Universe expanded and cooled, it grew more interesting as its particles combined into ever more complex objects. The gravitational force amplified fluctuations in the gas, pulling atoms together to form the first stars and galaxies. These first stars generated heat and light by fusing hydrogen into heavier atoms such as carbon, oxygen and silicon. When these stars died, many of the atoms they’d created were recycled into the cosmos and formed planets around second-generation stars. At some point, a group of atoms became arranged into a complex pattern that could both maintain and replicate itself. Life had arrived.
  • We can define life very broadly, simply as a process that can retain its complexity and replicate. What’s replicated isn’t matter (made of atoms) but information (made of bits) specifying how the atoms are arranged. We can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.
  • The most successful life forms were those able to react to their environment in some way, and they outcompeted the rest.
  • Intelligent agents are entities that collect information about their environment from sensors and then process this information to decide how to act back on their environment.
  • We can classify life forms into three levels of sophistication:
    • Life 1.0: life where both the hardware and software are evolved rather than designed.
    • Life 2.0: life whose hardware is evolved, but whose software is largely designed.
    • Life 3.0: which can design not only its software but also its hardware. In other words, Life 3.0 is the master of its own destiny, finally fully free from its evolutionary shackles.
  • Your synapses store all your knowledge and skills as roughly 100 terabytes’ worth of information, while your DNA stores merely about a gigabyte, barely enough to store a single movie download.
  • We can divide the development of life into three stages, distinguished by life’s ability to design itself:
    • Life 1.0 (biological stage): evolves its hardware and software.
    • Life 2.0 (cultural stage): evolves its hardware, designs much of its software.
    • Life 3.0 (technological stage): designs its hardware and software.
  • After 13.8 billion years of cosmic evolution, development has accelerated dramatically here on Earth: Life 1.0 arrived about 4 billion years ago, Life 2.0 (we humans) arrived about a hundred millennia ago, and many AI researchers think that Life 3.0 may arrive during the coming century.
  • There are three distinct schools of thought:
    • Digital Utopians: believe that digital life is the natural and desirable next step in cosmic evolution, and that if we let digital minds be free rather than try to stop or enslave them, the outcome is almost certain to be good.
    • Techno-skeptics: think that building superhuman AGI is so hard that it won’t happen for hundreds of years, and therefore view it as silly to worry about it now. They see worrying about AI risk as a potentially harmful distraction that could slow the progress of AI.
    • Beneficial-AI movement: human-level AGI this century is a real possibility, but a good outcome isn’t guaranteed. Technology is giving life the power either to flourish like never before or to self-destruct.
  • If you feel threatened by a machine whose goals are misaligned with yours, then it’s precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose.
  • The main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. Misaligned intelligence needs no robotic body, merely an internet connection.
  • Intelligence enables control: if we cede our position as the smartest on our planet, it’s possible that we might also cede control.
Matter Turns Intelligent
  • Intelligence = ability to accomplish complex goals. We can say that a program is more intelligent than others if it’s at least as good as them at accomplishing all goals, and strictly better at at least one.
  • It’s not very interesting to try to draw an artificial line between intelligence and non-intelligence. It’s more useful to simply quantify the degree of ability for accomplishing different goals.
  • Today’s artificial intelligence tends to be narrow, with each system able to accomplish only very specific goals. In contrast, human intelligence is remarkably broad: a healthy child can learn to get better at almost anything.
  • The holy grail of AI research is to build “general AI” (better known as artificial general intelligence, AGI) that is maximally broad: able to accomplish virtually any goal, including learning.
  • Computer pioneer Alan Turing famously proved that if a computer can perform a certain bare minimum set of operations, then, given enough time and memory, it can be programmed to do anything that any other computer can do. Machines exceeding this critical threshold are called universal computers.
  • Universal Intelligence: given enough time and resources, it can make itself able to accomplish any goal as well as any other intelligent entity.
  • The conventional wisdom among artificial intelligence researchers is that intelligence is ultimately all about information and computation. This means that there’s no fundamental reason why machines can’t one day be at least as intelligent as us.
  • The fundamental physical property that memory devices have for storing information is that they all can be in many different long-lived states—long-lived enough to encode the information until it’s needed.
  • The simplest possible memory device has only two stable states. Since two-state systems are easy to manufacture and work with, most modern computers store their information in such two-state systems, i.e., as bits.
  • The memory in your brain works very differently from computer memory. Whereas you retrieve memories from a computer or hard drive by specifying where they’re stored, you retrieve memories from your brain by specifying something about what is stored.
  • Memory systems that recall by association rather than by address are called auto-associative.
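A minimal sketch of the contrast; the stored patterns and the Hamming-distance matching rule below are illustrative assumptions, not something from the book:

```python
# Recall by address vs. recall by association (toy illustration).
address_memory = {0: "grandmother's face", 1: "phone number", 2: "birthday"}
print(address_memory[1])  # retrieve by saying WHERE it is stored

def hamming(a, b):
    """Number of positions where two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

stored_patterns = ["1100110011", "0011001100", "1111100000"]  # hypothetical memories

def associative_recall(cue):
    """Return the stored pattern closest to a partial or noisy cue (WHAT is stored)."""
    return min(stored_patterns, key=lambda p: hamming(p, cue))

print(associative_recall("1100110111"))  # noisy cue -> "1100110011"
```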
  • A computation is a transformation of one memory state into another. It takes information and transforms it, implementing what mathematicians call a function. This information processing is deterministic: the same input always produces the same output.
  • There’s a remarkable theorem in computer science that says that NAND gates are universal, meaning that you can implement any well-defined function simply by connecting together NAND gates.
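As a small illustration of that universality, the sketch below builds NOT, AND, OR and XOR out of nothing but a NAND function; the decompositions used are standard logic identities rather than anything quoted from the book:

```python
# Building other logic gates purely from NAND (universality illustration).
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    # standard 4-NAND construction of exclusive-or
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor(a, b) == (a ^ b)
```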
  • Computronium: any substance that can perform arbitrary computations. Computation is substrate-independent in the same way that information is: it can take on a life of its own, independent of its physical substrate.
  • You can’t have computation without matter, but any matter will do as long as it can be arranged into NAND gates, connected neurons or some other building block enabling universal computation. The substrate-independent phenomenon takes on a life of its own, independent of its substrate.
  • Computation is a pattern in the spacetime arrangement of particles, and it’s not the particles but the pattern that really matters.
  • Examples of persistent doubling in nature all have the same fundamental cause, and this technological one is no exception: each step creates the next. People mistakenly assume that Moore’s law is synonymous with the persistent doubling of our technological power. Rather, Ray Kurzweil points out that Moore’s law involves not the first but the fifth technological paradigm to bring exponential growth in computing: whenever one technology stopped improving, we replaced it with an even better one.
  • The ultimate parallel computer is a quantum computer.
  • Interconnected neurons can learn: if you repeatedly put a network into certain states, it will gradually learn these states and return to them from any nearby state. A neural network is simply a group of interconnected neurons that are able to influence each other’s behavior.
  • The brain contains about a hundred billion neurons. Each of these neurons is connected to about a thousand others via junctions called synapses, and it’s the strengths of these roughly hundred trillion synaptic connections that encode most of the information in your brain. Not only can neurons (artificial or biological) do math, but multiplication requires many fewer neurons than NAND gates.
  • The laws of physics are remarkably simple. Moreover, the tiny fraction of functions that neural networks can compute is very similar to the tiny fraction that physics makes us interested in.
  • If two nearby neurons are frequently active (“firing”) at the same time, their synaptic coupling strengthens so that they learn to help trigger each other. This simple learning rule (known as Hebbian learning) allows neural networks to learn interesting things.
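A minimal sketch of the Hebbian rule, using made-up ±1 activity patterns and a Hopfield-style recall loop (both are illustrative assumptions, not the book’s own example):

```python
# Hebbian learning: neurons that fire together wire together (toy network).
import numpy as np

# Two hypothetical activity patterns to memorize (+1 = firing, -1 = silent).
p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
p2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])

W = np.outer(p1, p1) + np.outer(p2, p2)   # strengthen couplings between co-active neurons
np.fill_diagonal(W, 0)                    # no self-connections

# Recall by association: start from a corrupted version of p1 and let it settle.
state = p1.copy()
state[0] = -1                             # flip one "neuron" to corrupt the memory
for _ in range(3):
    state = np.sign(W @ state)
print(np.array_equal(state, p1))          # True: the network falls back into p1
```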
  • Brains have parts that are what computer scientists call recurrent rather than feedforward neural networks, where information can flow in multiple directions rather than just one way, so that the current output can become input to what happens next.
The Near Future: Breakthroughs, Bugs, Laws, Weapons and Jobs
  • The more we come to rely on technology, the more important it becomes that it’s robust and trustworthy, doing what we want it to do. As technology grows more powerful, we should rely less on the trial-and-error approach to safety engineering. We should become more proactive than reactive, investing in safety research aimed at preventing accidents from happening even once.
  • Technical AI-safety research has four parts:
    • Verification: ensuring that software fully satisfies all the expected requirements.
    • Validation: whereas verification asks “Did I build the system right?”, validation asks “Did I build the right system?” Does the system rely on assumptions that might not always be valid?
    • Control: as we put AI in charge of ever more physical systems, we need to put serious research effort not only into making machines work well on their own, but also into making them collaborate effectively with their human controllers: identifying situations where control should be transferred, and applying human judgment efficiently to the highest-value decisions rather than distracting human controllers with a flood of unimportant information.
    • Security: Spectacular successes in connecting the world have brought computer scientists a fourth challenge: they need to improve not only verification, validation and control, but also security against malicious software (“malware”) and hacks. Security is directed at deliberate malfeasance. In the ongoing computer-security arms race between offense and defense, there’s so far little indication that defense is winning.
  • Once a computer starts paying humans to work for it, it can accomplish anything that humans can do.
  • Digital technology drives inequality in three different ways. First, by replacing old jobs with ones requiring more skills. Second, an ever-larger share of corporate income goes to those who own the companies and machines rather than to those who work there, so owners take a growing fraction of the pie. Third, the digital economy often benefits superstars over everyone else.
  • When there’s a large well-educated middle class, the electorate is harder to manipulate, and it’s tougher for small numbers of people or companies to buy undue influence over the government. A better democracy can in turn enable a better-managed economy that’s less corrupt, more efficient and faster growing, ultimately benefiting essentially everyone.
  • Positive psychology has identified a number of factors that boost people’s sense of well-being and purpose:
    • A social network of friends and colleagues
    • A healthy and virtuous lifestyle
    • Respect, self-esteem, self-efficacy
    • A pleasurable sense of “flow” stemming from doing something one is good at
    • A sense of being needed and making a difference
    • A sense of meaning from being part of and serving something larger than oneself
  • To create a low-employment society that flourishes rather than degenerates into self-destructive behavior, we therefore need to understand how to help such well-being-inducing activities thrive.
Intelligence Explosion?
  • Game Theory elegantly explains that entities have an incentive to cooperate where cooperation is a so-called Nash equilibrium: a situation where any party would be worse off if they altered their strategy. To prevent cheaters from ruining the successful collaboration of a large group, it may be in everyone’s interest to relinquish some power to a higher level in the hierarchy that can punish cheaters.
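A hedged illustration with a made-up payoff table (not from the book) in which cheating (“defect”) is the only Nash equilibrium, which is exactly the situation where it pays to empower a higher level that can punish cheaters:

```python
# Brute-force Nash equilibrium check for a 2-player game (illustrative payoffs).
# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("cooperate", "cooperate"): (2, 2),
    ("cooperate", "defect"):    (0, 3),
    ("defect",    "cooperate"): (3, 0),
    ("defect",    "defect"):    (1, 1),   # classic prisoner's-dilemma structure
}
strategies = ["cooperate", "defect"]

def is_nash(r, c):
    """True if neither player gains by unilaterally changing strategy."""
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in strategies)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in strategies)
    return row_ok and col_ok

print([(r, c) for r in strategies for c in strategies if is_nash(r, c)])
# -> [('defect', 'defect')]
```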
  • In a complex world, there is a diverse abundance of possible Nash equilibria. Technology is changing the hierarchical nature of our world. An overall trend toward ever more coordination over ever-larger distances, which is easy to understand: new transportation technology makes coordination more valuable (by enabling mutual benefit from moving materials and life forms over larger distances) and new communication technology makes coordination easier.
  • Globalization is merely the latest example of this multi-billion-year trend of hierarchical growth.
  • For superintelligent AI, the laws of physics will place firm upper limits on transportation and communication technology, making it unlikely that the highest levels of the hierarchy would be able to micromanage everything that happens on planetary and local scales.
  • The question of how a superintelligent future will be controlled is fascinatingly complex.
  • The history of life shows it self-organizing into an ever more complex hierarchy shaped by collaboration, competition and control. Superintelligence is likely to enable coordination on ever-larger cosmic scales, but it’s unclear whether it will ultimately lead to more totalitarian top-down control or more individual empowerment.
Aftermath: The Next 10,000 Years
  • It is absolutely crucial that human AI controllers develop good governance, as many things can go wrong, including:
    • Centralization: There’s a trade-off between efficiency and stability: a single leader can be very efficient, but power corrupts and succession is risky.
    • Inner threats: One must guard both against growing power centralization (group collusion, perhaps even a single leader taking over) and against growing decentralization (into excessive bureaucracy and fragmentation).
    • Outer threats: If the leadership structure is too open, this enables outside forces (including the AI) to change its values, but if it’s too impervious, it will fail to learn and adapt to change.
    • Goal stability: Too much goal drift can transform utopia into dystopia, but too little goal drift can cause failure to adapt to the evolving technological environment.
Our Cosmic Endowment: The Next Billion Years and Beyond
  • Future life that’s reached the technological limit needs mainly one fundamental resource: so-called baryonic matter, meaning anything made up of atoms or their constituents (quarks and electrons). Whatever form this matter is in, advanced technology can rearrange it into any desired substances or objects, including power plants, computers and advanced life forms.
  • Scientific analysis of our far future performed by Freeman Dyson finds that unless intelligence intervenes, solar systems and galaxies gradually get destroyed, eventually followed by everything else, leaving nothing but cold, dead, empty space with an eternally fading glow of radiation. But life and intelligence can succeed in molding this universe of ours. As long as superintelligent life hasn’t run out of matter/energy, it can keep maintaining its habitat in the state it desires.
  • Powering your thirteen-watt brain for a hundred years requires the energy in about half a milligram of matter—less than in a typical grain of sugar.
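A quick sanity check of that figure using E = mc² (the 13 watts and 100 years are the book’s numbers; the rest is straightforward arithmetic):

```python
# Mass equivalent of powering a 13 W brain for 100 years, via E = m * c^2.
power_watts = 13
seconds = 100 * 365.25 * 24 * 3600           # one hundred years in seconds
energy_joules = power_watts * seconds        # ~4.1e10 J
c = 2.998e8                                  # speed of light, m/s
mass_kg = energy_joules / c**2
print(f"{mass_kg * 1e6:.2f} milligrams")     # ~0.46 mg, i.e. about half a milligram
```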
  • The speed of light limits not only the spread of life, but also the nature of life, placing strong constraints on communication, consciousness and control.
  • For an intelligent information-processing system, going big is a mixed blessing. On one hand, going bigger lets it contain more particles, which enable more complex thoughts. On the other hand, this slows down the rate at which it can have truly global thoughts, since it now takes longer for the relevant information to propagate to all its parts.
  • In a cosmos teeming with superintelligence, almost the only commodity worth shipping long distances will be information.
  • There are strong incentives for future life to cooperate over cosmic distances, but it’s a wide-open question whether such cooperation will be based mainly on mutual benefits or on brutal threats. Dark energy ultimately limits how many galaxies a civilization can reach: if the distance between neighboring space-settling civilizations is much larger than the distance dark energy lets them expand, then they’ll never come into contact with each other or even find out about each other’s existence.
  • Compared to cosmic timescales of billions of years, an intelligence explosion is a sudden event where technology rapidly plateaus at a level limited only by the laws of physics.
  • If we don’t improve our technology, the question isn’t whether humanity will go extinct, but merely how. If we do keep improving our technology with enough care, foresight and planning to avoid pitfalls, life has the potential to flourish on Earth and far beyond for many billions of years, beyond the wildest dreams of our ancestors.
Goals
  • Out of all ways that nature could choose to do something, it prefers the optimal way, which typically boils down to minimizing or maximizing some quantity.
  • There are two mathematically equivalent ways of describing each physical law: either as the past causing the future, or as nature optimizing something.
  • One famous quantity that nature strives to maximize is entropy, which loosely speaking measures how messy things are. The second law of thermodynamics states that entropy tends to increase until it reaches its maximum possible value. This maximally messy end state is called heat death, and corresponds to everything being spread out in boring perfect uniformity, with no complexity, no life and no change.
  • Gravity behaves differently from all other forces and strives to make our Universe not more uniform and boring but more clumpy and interesting. Dissipation-driven adaptation basically means that random groups of particles strive to organize themselves so as to extract energy from their environment as efficiently as possible.
  • Nature appears to have a built-in goal of producing self-organizing systems that are increasingly complex and lifelike, and this goal is hardwired into the very laws of physics.
  • Life maintains or increases its complexity by making its environment messier.
  • The most efficient copiers outcompete and dominate the others, so before long any random life form you look at will be highly optimized for the goal of replication.
  • When Darwinian evolution is optimizing an organism to attain a goal, the best it can do is implement an approximate algorithm that works reasonably well in the restricted context where the agent typically finds itself.
  • A living organism is an agent of bounded rationality that doesn’t pursue a single goal, but instead follows rules of thumb for what to pursue and avoid. Human minds perceive these evolved rules of thumb as feelings, which usually guide our decision making toward the ultimate goal of replication.
  • All machines are agents with bounded rationality, and even today’s most sophisticated machines have a poorer understanding of the world than we do, so the rules they use to figure out what to do are often too simplistic.
  • The real risk with AGI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals. To figure out what people really want, you can’t merely go by what they say. The key idea underlying inverse reinforcement learning is that we make decisions all the time, and that every decision we make reveals something about our goals. The time window during which you can load your goals into an AI may be quite short: the brief period between when it’s too dumb to get you and too smart to let you.
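A heavily simplified sketch of that idea; the candidate goals, options and observed choices below are all invented for illustration, not taken from the book:

```python
# Toy inverse reinforcement learning: infer the goal from observed choices.
# Candidate goals the observer is considering (hypothetical utilities per item).
candidate_goals = {
    "maximize_health": {"salad": 3, "cake": 1, "run": 2, "tv": 0},
    "maximize_fun":    {"salad": 0, "cake": 3, "run": 1, "tv": 2},
}

# Observed decisions: (options offered, option actually chosen).
observations = [({"salad", "cake"}, "salad"),
                ({"run", "tv"}, "run"),
                ({"cake", "run"}, "run")]

def score(goal):
    """How many observed choices this goal explains (chosen option had top utility)."""
    utils = candidate_goals[goal]
    return sum(chosen == max(options, key=utils.get) for options, chosen in observations)

best = max(candidate_goals, key=score)
print(best, score(best))   # -> maximize_health 3: every decision reveals something about the goal
```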
  • The reason that value loading can be harder with machines than with people is that their intelligence growth can be much faster. If we succeed in getting a self-improving superintelligence to both learn and adopt our goals, will it then retain them?
  • Ethical views can be distilled into four principles:
    • Utilitarianism: Positive conscious experiences should be maximized and suffering should be minimized.
    • Diversity: A diverse set of positive experiences is better than many repetitions of the same experience, even if the latter has been identified as the most positive experience possible.
    • Autonomy: Conscious entities/societies should have the freedom to pursue their own goals unless this conflicts with an overriding principle.
    • Legacy: Compatibility with scenarios that most humans today would view as happy, incompatibility with scenarios that essentially all humans today would view as terrible.
  • The Three Laws of Robotics devised by sci-fi legend Isaac Asimov can lead to problematic contradictions in unexpected situations. The laws are:
    • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    • A robot must protect its own existence as long as such protection doesn’t conflict with the First or Second Laws.
  • Convergence: Might there be a goal system or ethical framework that almost all entities converge to as they get ever more intelligent?
  • Ethical principles such as cooperation, diversity and autonomy can be viewed as subgoals, in that they help societies function efficiently and thereby help them survive and accomplish any more fundamental goals that they may have.
  • It’s likely that any superintelligent AIs will have subgoals including efficient hardware, efficient software, truth-seeking and curiosity, simply because these subgoals help them accomplish whatever their ultimate goals are.
  • Orthogonality Thesis: that the ultimate goals of a system can be independent of its intelligence.
  • To program a friendly AI, we need to capture the meaning of life.
  • Aligning machine goals with our own involves three unsolved problems: making machines learn them, adopt them and retain them.
  • It’s unclear how to imbue a superintelligent AI with an ultimate goal that neither is undefined nor leads to the elimination of humanity, making it timely to rekindle research on some of the thorniest issues in philosophy!
Consciousness
  • For the long-term cosmic future of life, understanding what’s conscious and what’s not becomes pivotal: if technology enables intelligent life to flourish throughout our Universe for billions of years, how can we be sure that this life is conscious and able to appreciate what’s happening?
  • If it feels like something to be you right now, then you’re conscious.
  • There are really two separate mysteries of the mind. First, there’s the mystery of how a brain processes information: the easy problems. Then there’s the separate mystery of why you have a subjective experience: the hard problem.
  • Science is all about testing theories against observations: if a theory can’t be tested even in principle, then it’s logically impossible to ever falsify it, which by Popper’s definition means that it’s unscientific.
  • Behaviors that involve unfamiliar situations, self-control, complicated logical rules, abstract reasoning or manipulation of language tend to be conscious. These are known as behavioral correlates of consciousness.
  • The effortful, slow and controlled way of thinking is what psychologists call “System 2.”
  • You can convert many routines from conscious to unconscious through extensive practice. We are usually unconscious of the low-level details. Some researchers suggest that conscious information processing should be thought of as the CEO of our mind, dealing with only the most important decisions requiring complex analysis of data from all over the brain.
  • Neural Correlates of Consciousness (NCCs): researchers use continuous flash suppression, unstable visual/auditory illusions and other tricks to pinpoint which of your brain regions are responsible for each of your conscious experiences. The basic strategy is to compare what your neurons are doing in two situations where essentially everything (including your sensory input) is the same—except your conscious experience. The parts of your brain that are measured to behave differently are then identified as NCCs.
  • NCC research has proven that none of your consciousness resides in your gut, even though that’s the location of your enteric nervous system with its whopping half-billion neurons that compute how to optimally digest your food. None of your consciousness appears to reside in the brainstem, the bottom part of the brain that connects to the spinal cord and controls breathing, heart rate and blood pressure. Consciousness doesn’t appear to extend to your cerebellum, which contains about two-thirds of all your neurons.
  • Consciousness lives in the past, with Christof Koch estimating that it lags behind the outside world by about a quarter second. Intriguingly, you can often react to things faster than you can become conscious of them.
  • Is consciousness an emergent phenomenon, with properties above and beyond those of its particles?
  • The Italian neuroscientist Giulio Tononi has proposed a quantity he calls integrated information, which basically measures how much different parts of a system know about each other. Given a physical process that, with the passage of time, transforms the initial state of a system into a new state, its integrated information Φ measures the inability to split the process into independent parts. Consciousness is the way information feels when being processed in certain complex ways; the information processing needs to be integrated, that is, Φ needs to be large.
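The real Φ defined by IIT is far more involved; as a rough, hedged stand-in for “how much the parts know about each other,” the toy sketch below just computes the mutual information between two halves of a tiny system, using made-up sampled states:

```python
# Toy proxy for how much two halves of a system know about each other:
# mutual information between part A and part B (NOT Tononi's full phi).
import math
from collections import Counter

# Hypothetical joint states (a_bit, b_bit) sampled from the system over time.
samples = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 0), (1, 1), (0, 1), (1, 0)]

def prob(counter, total):
    return {k: v / total for k, v in counter.items()}

n = len(samples)
p_ab = prob(Counter(samples), n)
p_a = prob(Counter(a for a, _ in samples), n)
p_b = prob(Counter(b for _, b in samples), n)

mutual_info = sum(p * math.log2(p / (p_a[a] * p_b[b])) for (a, b), p in p_ab.items())
print(f"{mutual_info:.2f} bits")  # > 0: the parts are not independent of each other
```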
  • Integrated Information Theory (IIT) is defined only for discrete systems that can be in a finite number of states. Consciousness is a physical phenomenon that feels non-physical because it’s like waves and computations: it has properties independent of its specific physical substrate. If the information processing itself obeys certain principles, it can give rise to the higher-level emergent phenomenon that we call consciousness. This places your conscious experience not one but two levels up from the matter.
  • Conditions conjectured to be necessary for consciousness:
    • Information principle: A conscious system has substantial information-storage capacity.
    • Dynamics principle: A conscious system has substantial information-processing capacity.
    • Independence principle: A conscious system has substantial independence from the rest of the world.
    • Integration principle: A conscious system cannot consist of nearly independent parts.
  • For large future AIs: if they have a single consciousness, then it’s likely to be unaware of almost all the information processing taking place within it.
  • For almost all computations, there’s no faster way of determining their outcome than actually running them. This means that it’s typically impossible for you to figure out what you’ll decide to do in a second in less than a second, which helps reinforce your experience of having free will.
  • Both biological and artificial consciousnesses feel that it is really they who decide and they can’t predict with certainty what the decision will be until they’ve finished thinking it through.
  • The very first goal on our wish list for the future should be retaining (and hopefully expanding) biological and/or artificial consciousness in our cosmos, rather than driving it extinct.
  • Sapience: the ability to think intelligently.
  • Sentience: the ability to subjectively experience qualia.

These notes were taken from Max's book.
Find out more about Max at futureoflife.org/author/max/

