Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos
Seth Lloyd


Since the 1960s, advances in photolithography—the science of engineering ever more detailed circuits—have halved the size of the components of integrated circuits every eighteen months or so. Nowadays, the wires in the integrated circuits in a run-of-the-mill computer are only 1,000 atoms wide.

For future reference, let me define some of the types of computers to which I will refer. A digital computer is a computer that operates by applying logic gates to bits; a digital computer can be electronic or mechanical. A classical computer is a computer that computes using the laws of classical mechanics.

A classical digital computer is one that computes by performing classical logical operations on classical bits. An electronic computer is one that computes using electronic devices such as vacuum tubes or transistors. A digital electronic computer is a digital computer that operates electronically. An analog computer is one that processes information in continuous form, rather than as discrete bits; analog computers can be electronic or mechanical. A quantum computer is one that operates using the laws of quantum mechanics.

Quantum computers have both digital and analog aspects.

Logic Circuits

What are our ever more powerful computers doing? They are processing information by breaking it up into its component bits and operating on those bits a few at a time. As noted earlier, the information to be processed is presented to the computer in the form of a program, a set of instructions in a computer language.

The computer looks at the program a few bits at a time, interprets the bits as an instruction, and executes the instruction. Then it looks at the next few bits and executes that instruction. And so on. Physically, logic circuits consist of bits, wires, and gates. Bits, as we have seen, can register either 0 or 1; wires move the bits from one place to another; gates transform the bits one or two at a time. A COPY gate makes a copy of a bit: it transforms an input bit 0 into two output bits 00 and an input bit 1 into two output bits 11. An AND gate takes two input bits and produces a single output bit equal to 1 if, and only if, both input bits are equal to 1; otherwise it produces the output 0.

An OR gate takes two input bits and produces an output bit equal to 1 if one or both of the input bits is equal to 1; if both input bits are equal to 0, then it produces the output 0. A NOT gate takes one input bit and flips it: 0 becomes 1, and 1 becomes 0. Together, AND, OR, NOT, and COPY make up a universal set of logic gates: wired together in the right pattern, they can perform any desired transformation of a set of input bits.

Figure 3. Logic Gates. Logic gates are devices that take one or more input bits and transform them into one or more output bits.
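To make the gate definitions concrete, here is a minimal sketch in Python. The gates' truth tables come from the text above; the XOR and half-adder at the end are an illustrative composition added here to show how a universal gate set builds up a circuit:

```python
# Toy model of the logic gates described above: bits are the integers 0 and 1.

def AND(a, b):   # 1 only if both inputs are 1
    return a & b

def OR(a, b):    # 1 if either input is 1
    return a | b

def NOT(a):      # flips the bit
    return 1 - a

def COPY(a):     # one bit in, two identical bits out
    return a, a

def XOR(a, b):
    # Exclusive OR, built only from the universal set:
    # (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    # A tiny logic circuit: adds two bits, returning (sum, carry).
    (a1, a2), (b1, b2) = COPY(a), COPY(b)
    return XOR(a1, b1), AND(a2, b2)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))  # e.g. 1 + 1 = (0, 1)
```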

A digital computer is a computer that operates by implementing a large logic circuit consisting of millions of logic gates. Familiar computers such as Macintoshes and PCs are electronic realizations of digital computers.

Figure 4. Logic Circuits. A logic circuit can perform more complicated transformations of its input bits.

In an electronic computer, bits are registered by electronic devices such as capacitors.

A capacitor is like a bucket that holds electrons. To fill the bucket, a voltage is applied to the capacitor. A capacitor at zero voltage has no excess electrons and is said to be uncharged.

An uncharged capacitor in a computer registers a 0. A capacitor at non-zero voltage holds lots of excess electrons and registers a 1. Capacitors are not the only electronic devices that computers use to store information.

As always, any device that has two reliably distinguishable states can register a bit. In a conventional digital electronic computer, logic gates are implemented using transistors. A transistor can be thought of as a switch.

When the switch is closed, current flows through; when it is open, the current is blocked. A transistor has two inputs and one output. In a p-type transistor, when the first input is kept at low voltage, the switch is closed, so current can flow from the second input to the output; in an n-type transistor, the switch closes when the first input is kept at high voltage instead. n- and p-type transistors can be wired together to create AND, OR, NOT, and COPY gates.
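A rough sketch of that wiring, treating each transistor as an ideal voltage-controlled switch. The CMOS-style arrangement below is chosen here for illustration; the text describes the construction only in general terms:

```python
# Transistors modeled as ideal switches: a p-type conducts when its control
# input is low (0), an n-type conducts when its control input is high (1).

def p_type(gate):
    return gate == 0   # True means the switch is closed (conducting)

def n_type(gate):
    return gate == 1

def NOT_gate(a):
    # Inverter: a p-type pulls the output up to 1 when a is low;
    # an n-type pulls it down to 0 when a is high.
    return 1 if p_type(a) else 0

def NAND_gate(a, b):
    # Two p-types in parallel to the high rail, two n-types in series
    # to the low rail: output is 1 unless both inputs are 1.
    pull_up = p_type(a) or p_type(b)
    pull_down = n_type(a) and n_type(b)
    return 1 if pull_up and not pull_down else 0

for a in (0, 1):
    print("NOT", a, "->", NOT_gate(a))
for a in (0, 1):
    for b in (0, 1):
        print("NAND", a, b, "->", NAND_gate(a, b))
```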

When a computer computes, all it is doing is applying logic gates to bits. Computer games, word processing, number crunching, and spam all arise out of the electronic transformation of bits, one or two at a time.

Uncomputability

Up to this point, we have emphasized the underlying simplicity of information and computation. A bit is a simple thing; a computer is a simple machine. But the behavior of such simple systems can be impossible to predict in advance: the only way to find out what a computer will do once it has embarked upon a computation is to wait and see what happens.

All sufficiently powerful systems of logic contain unprovable statements. The computational analog of an unprovable statement is an uncomputable quantity. A well-known problem whose answer is uncomputable is the so-called halting problem: Program a computer. Set it running. Does the computer ever halt and give an output? Or does it run forever? There is no general procedure to compute the answer to this question. That is, no computer program can take as input another computer program and determine with 100 percent probability whether the first computer program halts or not.
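The standard argument for this is a short exercise in self-reference. A sketch in Python, assuming for contradiction a hypothetical function halts(program, input) that always answers correctly:

```python
# Suppose, for contradiction, that this oracle existed and always returned
# True if program(program_input) eventually halts, False otherwise.
def halts(program, program_input):
    ...  # hypothetical -- the argument below shows no such function can exist

def contrary(program):
    # Do the opposite of whatever the oracle predicts the program
    # does when fed its own text.
    if halts(program, program):
        while True:       # predicted to halt? then loop forever
            pass
    return "halted"       # predicted to loop? then halt at once

# Feeding contrary to itself yields a contradiction: if halts says it halts,
# it loops; if halts says it loops, it halts. So halts cannot exist.
```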

Of course, for many programs, you can tell whether or not the computer will halt. For example, the program PRINT 1,000,000,000 clearly halts: a computer given this program as input prints 1,000,000,000, then halts. But as a rule, no matter how long a computer has gone on computing without halting, you cannot conclude that it will never halt.

Although it may sound abstract, the halting problem has many practical consequences. Take, for example, the debugging of computer programs.

It would be enormously useful to have a universal debugger, a program that could certify that any other program works as intended. Such a debugger would take as input the computer program, together with a description of what the program is supposed to accomplish, and then check to see that the program does what it is supposed to do. Such a debugger cannot exist. The universal debugger is supposed to verify that its input program gives the right output. So the first thing a universal debugger should check is whether the input program has any output at all. But to verify that the program gives an output, the universal debugger needs to solve the halting problem.

That it cannot do. The only way to determine if the program will halt is to run it and see, and at that point, we no longer need the universal debugger. So the next time a bug freezes your computer, you can take solace in the deep mathematical truth that there is no systematic way to eliminate all bugs.

Or you can just curse and reboot. It is tempting to identify similar paradoxes in how human beings function. After all, human beings are masters of self-reference (some humans seem capable of no other form of reference) and are certainly subject to paradox.

Humans are notoriously unable to predict their own future actions. This is an important feature of what we call free will. That is, our own future choices are inscrutable to ourselves. They may not, of course, be inscrutable to others.

I, after spending a long time scrutinizing the menu, would always order the half plate of chiles rellenos, with red and green chile, and posole instead of rice. I felt strongly that I was exercising free will: until I chose the rellenos half plate, I felt anything was possible. My wife, however, knew exactly what I was going to order all the time. The inscrutable nature of our choices when we exercise free will is a close analog of the halting problem: once we set a train of thought in motion, we do not know whether it will lead anywhere at all. Ironically, it is customary to assign our own unpredictable behavior and that of other humans to irrationality: were we to behave rationally, we reason, the world would be more predictable. In fact, it is just when we behave rationally, moving logically, like a computer, from step to step, that our behavior becomes provably unpredictable.

Rationality combines with self-reference to make our actions intrinsically paradoxical and uncertain. This lovely inscrutability of pure reason harks back to an earlier account of the role of logic in the universe. Reason is immortal exactly because it is not specific to any individual; instead, it is the common property of all reasoning beings. Computers certainly possess the ability to reason and the capacity for self-reference.

And just because they do, their actions are intrinsically inscrutable. Consequently, as they become more powerful and perform a more varied set of tasks, computers exhibit an unpredictability approaching that of human beings.

Programming computers to perform simple human tasks is difficult: getting a computerized robot to vacuum a room or empty a dishwasher, even to minimal standards, is a problem that has outstripped the abilities of several generations of researchers in artificial intelligence. By contrast, no special effort is required to program a computer to behave in unpredictable and annoying ways. When it comes to their capacity to screw things up, computers are becoming more human every day.

The Story of the Universe: Part One

The universe is made of atoms and elementary particles, such as electrons, photons, quarks, and neutrinos. Although we will soon delve into a vision of the universe based on a computational model, we would be foolish not to first explore the stunning discoveries of cosmology and elementary-particle physics.

Science already affords us excellent ways of describing the universe in terms of physics, chemistry, and biology. The computational universe is not an alternative to the physical universe. The universe that evolves by processing information and the universe that evolves by the laws of physics are one and the same. The two descriptions, computational and physical, are complementary ways of capturing the same phenomena.

Of course, humans have been speculating about the origins of the universe far longer than they have been dabbling in modern science. Telling stories about the universe is as old as telling stories. In Norse mythology, the universe begins when a giant cow licks the gods out of the salty lip of a primordial pit.

In Japanese mythology, Japan arises from the incestuous embrace of the brother and sister gods Izanagi and Izanami. In one Hindu creation myth, all creatures rise out of the clarified butter obtained from the sacrifice of the thousand-headed Purusha.

More recently, though, over the last century or so, astrophysicists and cosmologists have constructed a detailed history of the universe supported by observational evidence. The universe began a little less than 14 billion years ago, in a huge explosion called the Big Bang. As it expanded and cooled down, various familiar forms of matter condensed out of the cosmic soup. Three minutes after the Big Bang, the building blocks for simple atoms such as hydrogen and helium had formed.

These building blocks clumped together under the influence of gravity to form the first stars and galaxies million years after the Big Bang. Heavier elements, such as iron, were produced when these early stars exploded in supernovae. Our own sun and solar system formed 5 billion years ago, and life on Earth was up and running a little over a billion years later.


This conventional history of the universe is not as sexy as some versions, and dairy products enter into only its later stages, but unlike older creation myths, the scientific one has the virtue of being consistent with known scientific laws and observations.

And even though it is phrased in terms of physics, the conventional history of the universe still manages to make a pretty good story. It has drama and uncertainty, and many questions remain: How did life arise? Why is the universe so complex?

What is the future of the universe and of life in particular? When we look into the Milky Way, our own galaxy, we see many stars much like our own. When we look beyond, we see many galaxies apparently much like the Milky Way. There is a scripted quality to what we see, in which the same astral dramas are played out again and again by different stellar actors in different places. If the universe is in fact infinite, then every possible variation of these dramas is being staged somewhere. The story of the universe is a kind of cosmic soap opera whose actors play out all possible permutations of the drama.

In conventional cosmology, the primary actor is energy—the radiant energy in light and the mass energy in protons, neutrons, and electrons. What is energy? As you may have learned in middle school, energy is the ability to do work.

Energy makes physical systems do things. Famously, energy has the feature of being conserved: it can take different forms—heat, work, electrical energy, mechanical energy—but it is never lost. This is known as the first law of thermodynamics. But if energy is conserved, and if the universe started from nothing, then where did all of the energy come from?

Physics provides an explanation. Quantum mechanics describes energy in terms of quantum fields, a kind of underlying fabric of the universe, whose weave makes up the elementary particles—photons, electrons, quarks.

The energy we see around us, then—in the form of Earth, stars, light, heat—was drawn out of the underlying quantum fields by the expansion of our universe. Gravity is an attractive force that pulls things together. The energy in the quantum fields is almost always positive, and this positive energy is exactly balanced by the negative energy of gravitational attraction. As the expansion proceeds, more and more positive energy becomes available, in the form of matter and light—compensated for by the negative energy in the attractive force of the gravitational field.

The conventional history of the universe pays great attention to energy: How much is there? Where is it? What is it doing? By contrast, in the story of the universe told in this book, the primary actor in the physical history of the universe is information. Ultimately, information and energy play complementary roles in the universe: energy makes physical systems do things; information tells them what to do.

Entropy: The Second Law of Thermodynamics

If we could look at matter at the atomic scale, we would see atoms dancing and jiggling every which way at random. The energy that drives this random atomic dance is called heat, and the information that determines the steps of this dance is called entropy.

More simply, entropy is the information required to specify the random motions of atoms and molecules—motions too small for us to see. Entropy is the information contained in a physical system that is invisible to us; it is a measure of the degree of molecular disorder existing in the system. The second law of thermodynamics states that the entropy of the universe as a whole never decreases.
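This identification of entropy with invisible information can be made quantitative. A standard result of statistical mechanics (not specific to this book): a system that could be in any of \(W\) equally likely microscopic configurations has entropy

$$ S = k_B \ln W = (k_B \ln 2) \cdot \log_2 W, $$

where \(\log_2 W\) is the number of bits needed to single out one configuration. Each bit of invisible information thus contributes \(k_B \ln 2 \approx 9.6 \times 10^{-24}\) joules per kelvin of entropy.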

Manifestations of the second law are all around us. Hot steam can run a turbine and do useful work. As steam cools, its jiggling molecules transfer some of their disorder into increased disorder in the surrounding air, heating it up. As the molecules of steam jiggle slower and slower, the air molecules jiggle faster and faster, until steam and air are at the same temperature.

When the difference in temperatures is minimized, the entropy of the system is maximized. But room-temperature steam will do no work. Here is yet another way to conceive of entropy. Most information is invisible. The number of bits of information required to characterize the dance of atoms vastly outweighs the number of bits we can see or know. Consider a photograph: it has an intrinsic graininess determined by the size of the grains of silver halide that make up the photographic film—or, if it is a digital photograph, by the number of pixels that make up the digital image on a screen.

A high-quality digital image can register almost a billion bits of visible information. How did I come up with that number? One thousand pixels per inch is a high resolution, comparable to the best resolution that can be distinguished with the naked eye.

At this resolution, each square inch of the photograph contains a million pixels. An 8- by 6-inch color photograph with 1,000 pixels per inch has 48 million pixels. Each pixel has a color.

Digital cameras typically use 24 bits to produce 16 million colors, comparable to the number that the human eye can distinguish. So an 8- by 6-inch color digital photograph with 1,000 pixels per inch and 24 bits of color resolution has 1,152,000,000 bits of information. An easier way to see how many bits are required to register a photograph is to look at how rapidly the memory space in your digital camera disappears when you take a picture. A typical digital camera takes high-resolution pictures with 3 million bytes—3 megabytes—of information.

A byte is 8 bits, so each picture on the digital camera registers approximately 24 million bits. Now consider the atoms in a gram of matter. To describe their jiggling would require more than a million billion billion bits, or a 1 followed by 24 zeros. The invisible jiggling atoms register vastly more information than the visible photograph they make up.

A photograph that registered the same amount of visible information as the invisible information in a gram of atoms would have to be as big as the state of Maine. The number of bits registered by the jiggling atoms that make up the photographic image on film can be estimated as follows. A grain of silver halide is about a millionth of a meter across and contains about a trillion atoms.

There are tens of billions of grains of silver halide in the photographic film. Describing where an individual atom at room temperature is in its infinitesimal dance requires 10 to 20 bits per atom. The total amount of information registered by the atoms in the photograph is thus roughly 10^23 bits, a 1 followed by 23 zeros. The billion bits of information visible in the photograph, as represented by the digital image, represent only a tiny fraction of this total.
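The arithmetic in the last few paragraphs is easy to check. A quick sketch; the grain counts and bits-per-atom figures are the rough estimates quoted above, not measured values:

```python
# Visible information in the photograph
pixels = 8 * 6 * 1000**2          # 8" x 6" at 1,000 pixels per inch
visible_bits = pixels * 24        # 24 bits of color per pixel
print(f"{visible_bits:,} visible bits")            # 1,152,000,000

# Camera file size comparison
camera_bits = 3 * 10**6 * 8       # 3 megabytes at 8 bits per byte
print(f"{camera_bits:,} bits per camera picture")  # 24,000,000

# Invisible information in the film's atoms (order-of-magnitude estimate)
grains = 10**10                   # tens of billions of silver halide grains
atoms_per_grain = 10**12          # about a trillion atoms per grain
bits_per_atom = 15                # 10 to 20 bits to describe each atom's jiggle
atom_bits = grains * atoms_per_grain * bits_per_atom
print(f"about 10^{len(str(atom_bits)) - 1} invisible bits")  # about 10^23
```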

The remainder of the information contained in the matter of the photograph is invisible. This invisible information is the entropy of the atoms.

Free Energy

To experience another example of the first and second laws, take a bite of an apple. The sugars in the apple contain what is called free energy.

Free energy is energy in a highly ordered form associated with a relatively low amount of entropy. In the case of the apple, the energy in sugar is stored not in the random jiggling of atoms but in the ordered chemical bonds that hold sugar together. It takes much less information to describe the form energy takes in a billion ordered chemical bonds than it does to describe that same energy spread among a billion jiggling atoms.

The relatively small amount of information required to describe this energy makes it available for use: Pick the apple, take a bite. Every gram of glucose contains a few kilocalories of free energy.

A calorie is the amount of energy required to raise one gram of water one degree Celsius. A kilocalorie, 1,000 calories, is what someone on a diet would normally call a Calorie. One hundred kilocalories is enough energy to lift a VW Bug one hundred feet in the air! Suppose, instead, you eat the apple and go for a run. While you run, the free energy in the sugar is converted into motion by your muscles. In obedience to the first law of thermodynamics, the total amount of energy remains the same. Unfortunately, to reverse this process is not so easy.

If you wanted to convert the energy in heat, which has lots of invisible information or entropy , back into energy in chemical bonds, which has much less entropy, you would have to do something with that extra information. As we will discuss, the problem of finding a place for the extra bits in heat puts fundamental limits on how well engines, humans, brains, DNA, and computers can function.

The universe we see around us arises from the interplay between these two quantities—energy and information—an interplay governed by the first and second laws of thermodynamics. Energy is conserved. Information never decreases. It takes energy for a physical system to evolve from one state to another. That is, it takes energy to process information. The more energy that can be applied, the faster the physical transformation takes place and the faster the information is processed.

The maximum rate at which a physical system can process information is proportional to its energy. The more energy, the faster the bits flip.
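This proportionality has a precise form. The Margolus-Levitin theorem (a known result in quantum information, stated here for context) bounds the number of elementary operations per second available to a system with average energy \(E\) above its ground state:

$$ \text{ops per second} \leq \frac{2E}{\pi \hbar}. $$

A kilogram of matter, with total energy \(E = mc^2 \approx 9 \times 10^{16}\) joules, could thus in principle perform on the order of \(10^{50}\) operations per second.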

Earth, air, fire, and water in the end are all made of energy, but the different forms they take are determined by information. To do anything requires energy. To specify what is done requires information. Energy and information are by nature (no pun intended) intertwined.

The Story of the Universe: Part Two

It is this interplay—this back-and-forth between information and energy—that makes the universe compute. Over the last century, advances in the construction of telescopes have led to ever more precise observations of the universe beyond our solar system.

The past decade has been a particularly remarkable one for observations of the heavens. Ground-based telescopes and satellite observatories have generated rich data describing what the universe looks like now, as well as what it looked like in the past.

The historical nature of cosmic observation proves useful as we attempt to untangle the early history of the universe. The universe began just under 14 billion years ago in a massive explosion. What happened before the Big Bang? There was no time and no space.

Not just empty space, but the absence of space itself. Time itself had a beginning. There is nothing wrong with beginning from nothing. Before zero, there are no positive numbers. Before the Big Bang, there was nothing—no energy, no bits. Then, all at once, the universe sprang into existence. Time began, and with it, space. The newborn universe was simple; the newly woven fabric of quantum fields contained only small amounts of information and energy.

At most, it required a few bits of information to describe. In fact, if—as some physical theories speculate—there is only one possible initial state of the universe and only one self-consistent set of physical laws, then the initial state required no bits of information to describe. Recall that to generate information, there must be alternatives—e.g., the alternative between a bit registering 0 and the same bit registering 1.

If there were no alternatives to the initial state of the universe, then exactly zero bits of information were required to describe it; it registered zero bits.

This initial paucity of information is consistent with the notion that the universe sprang from nothing. As soon as it began, though, the universe began to expand. As it expanded, it pulled more and more energy out of the underlying quantum fabric of space and time. The early universe remained simple and orderly: it still required only a few bits to describe, and the energy that was created was free energy. This paucity of information did not last for long, however. As the expansion continued, the free energy in the quantum fields was converted into heat, increasing entropy, and all sorts of elementary particles were created.

These particles were hot: they jiggled around rapidly, each in its own way. To describe this jiggling would take a lot of information. After a billionth of a second—the amount of time it takes light to travel about a foot—had passed, the amount of information contained within the universe was on the order of a million billion billion billion billion billion bits (a 1 followed by 51 zeros).

To store that much information visually would require a photograph the size of the Milky Way. The Big Bang was also a Bit Bang. A lot had happened. But what was the universe computing during this initial billionth of a second? Science fiction writers have speculated that entire civilizations could have arisen and declined during this time—a time very much shorter than the blink of an eye. We have no evidence of these fast-living folk. More likely, these early ops consisted of elementary particles bouncing off one another in random fashion.

After this first billionth of a second, the universe was very hot. Almost all of the energy that had been drawn into it was now in the form of heat. Lots of information would have been required to describe the infinitesimal jigglings of the elementary particles in this state. In fact, when all matter is at the same temperature, entropy is maximized. There was very little free energy—that is, order—at this stage, making the moments after the Big Bang a hostile time for processes like life.

Life requires free energy. Even if there were some form of life that could have withstood the high temperatures of the Big Bang, that life-form would have had nothing to eat. As the universe expanded, it cooled down. The elementary particles jiggled around more slowly.


The amount of information required to describe their jiggles stayed almost the same, though, increasing gradually over time. But, at the same time, the amount of space in which they were jiggling was increasing, requiring more bits to describe their positions. Thus, the total amount of information remained constant or increased in accordance with the second law of thermodynamics. As the jiggles got slower and slower, bits and pieces of the cosmic soup began to condense out.

This condensation produced some of the familiar forms of matter we see today. When the amount of energy in a typical jiggle became less than the amount of energy required to hold together some form of composite particle—a proton, for example—those particles formed.

When the jiggles of the constituent parts—quarks, in the case of a proton—were no longer sufficiently energetic to maintain them as distinct particles, they stuck together as a composite particle that condensed out of the cosmic soup. Every time a new ingredient of the soup condensed out, there was a burst of entropy—new information was written in the cosmic cookbook.

Particles condensed out of the jiggling soup in order of the energy required to bind them together. Protons and neutrons—the particles that make up the nuclei of atoms—condensed out a little more than a millionth of a second after the Big Bang, when the temperature was around 10 million million degrees Celsius.

Atomic nuclei began to form at about one second after the Big Bang, when the temperature had dropped to about a billion degrees. After three minutes, the nuclei of the lightweight atoms—hydrogen, helium, deuterium, lithium, beryllium, and boron—had condensed. Electrons were still whizzing around too fast, though, for these nuclei to capture them and form complete atoms.

Three hundred eighty thousand years after the Big Bang, when the temperature of the universe had dropped to a few thousand degrees, nuclei finally captured electrons, and the first complete atoms formed.

Order from Chaos: The Butterfly Effect

Until the formation of atoms, almost all the information in the universe lay at the level of the elementary particle. Nearly all bits were registered by the positions and velocities of protons, electrons, and so forth. On any larger scale, the universe still contained very little information: it was almost perfectly uniform. How uniform was it?

Imagine the surface of a lake on a windless morning so calm that the reflections of the trees are indistinguishable from the trees themselves. Imagine the earth with no mountain larger than a molehill. The early universe was more uniform still. Nowadays, by contrast, telescopes reveal huge variations and nonuniformities in the universe.

Matter clusters together to form planets such as Earth and stars such as the sun. Planets and suns cluster together to form solar systems. Our solar system clusters together with billions of others to form a galaxy, the Milky Way. The Milky Way, in turn, is only one of tens of galaxies in a cluster of galaxies—and our cluster of galaxies is only one cluster in a supercluster. This hierarchy of clusters of matter, separated by cosmic voids, makes up the present, large-scale structure of the universe.

How did this large-scale structure come about? Where did the bits of information come from? Their origins can be explained by the laws of quantum mechanics, coupled to the laws of gravity. Quantum mechanics is the theory that describes how matter and energy behave at their most fundamental levels. At the small scale, quantum mechanics describes the behavior of molecules, atoms, and elementary particles.

At larger scales, it describes the behavior of you and me. Larger still, it describes the behavior of the universe as a whole. The laws of quantum mechanics are responsible for the emergence of detail and structure in the universe. The theory of quantum mechanics gives rise to large-scale structure because of its intrinsically probabilistic nature.

Counterintuitive as it may seem, quantum mechanics produces detail and structure because it is inherently uncertain. The early universe was uniform: the density of energy was very nearly the same everywhere. But it was not exactly the same. In quantum mechanics, quantities such as position, velocity, and energy density do not have exact values. Instead, their values fluctuate. We can describe their probable values—the most likely location of a particle, for example—but we cannot claim perfect certainty.

Because of these quantum fluctuations, some regions of the early universe were ever so slightly more dense than other regions.

As time passed, the attractive force of gravity caused more matter to move toward these denser regions, further increasing their energy density, and decreasing density in the surrounding volume. Gravity thus amplified the effect of an initially tiny discrepancy, causing it to increase. Just such a tiny quantum fluctuation seeded the cluster of galaxies to which the Milky Way belongs. Slightly later on, further fluctuations formed the seeds for the positions of individual galaxies within the cluster, and still later fluctuations seeded the positions of planets and stars.

In the process of creating this large-scale structure, gravity also created the free energy that living things require to survive. As the matter clumped together, it moved faster and faster, gaining energy from the gravitational field; that is, the matter heated up. The larger the clump grew, the hotter the matter became. If enough matter clumped together, the temperature in the center of the clump rose to the point at which thermonuclear reactions ignited: a star was born. The light from the sun has lots of free energy—energy plants would use for photosynthesis, for example.

As soon as plants came into existence, that is. Gravity's amplification of tiny initial differences is an instance of chaos, in which a small change gets blown up into a big effect. Perhaps the most famous example of chaos is the so-called butterfly effect: the flap of a butterfly's wings on one side of the globe can, in principle, grow into a storm on the other. The minuscule quantum fluctuations of energy density at the time of the Big Bang are the butterfly effects that would come to yield the large-scale structure of the universe. Every galaxy, star, and planet owes its mass and position to quantum accidents of the early universe. Chance is a crucial element of the language of nature.

Every roll of the quantum dice injects a few more bits of detail into the world. As these details accumulate, they form the seeds for all the variety of the universe. Every tree, branch, leaf, cell, and strand of DNA owes its particular form to some past toss of the quantum dice. Without the laws of quantum mechanics, the universe would still be featureless and bare. Gambling for money may be infernal, but betting on throws of the quantum dice is divine.

In computer science, a universal computer is a device that can be programmed to process bits of information in any desired way. Conventional digital computers of the sort on which this book is being written are universal computers, and their languages are universal languages. Human beings are capable of universal computation, and human languages are universal.

Most systems that can be programmed to perform arbitrarily long sequences of simple transformations of information are universal. Universal computers can do pretty much anything with information. Two of the inventors of universal computers and universal languages, Alonzo Church and Alan Turing, hypothesized that any possible mathematical manipulation can be performed on a universal computer; that is, universal computers can generate mathematical patterns of any level of complexity.

A universal computer itself, though, need not be complex. Any desired transformation of however large a set of bits can be enacted by repeatedly performing operations on just one or two bits at a time. And any machine that can enact this sequence of simple logical operations is a universal computer.

Significantly, universal computers can be programmed to transform information in any way desired, and any universal computer can be programmed to transform information in the same way as any other universal computer. That is, any universal computer can simulate another, and vice versa. This intersimulatability means that all universal computers can perform the same set of tasks.
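One way to see intersimulatability concretely: a simulator is just a program that takes another machine's rule table as data and steps it forward. A minimal sketch, with a rule-table encoding invented here for illustration:

```python
# A tiny Turing-machine simulator: one program that can run any machine
# given to it as data, illustrating how one computer simulates another.
def run(rules, tape, state="start", max_steps=1000):
    # rules maps (state, symbol) -> (new_state, new_symbol, move)
    cells, head = dict(enumerate(tape)), 0
    while state != "halt" and max_steps > 0:
        state, symbol, move = rules[(state, cells.get(head, "_"))]
        cells[head] = symbol
        head += 1 if move == "R" else -1
        max_steps -= 1
    return [cells[i] for i in sorted(cells)]

# Example machine: flip every bit on the tape, halting at the blank "_".
flipper = {
    ("start", 0):   ("start", 1, "R"),
    ("start", 1):   ("start", 0, "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run(flipper, [1, 0, 1, 1]))  # -> [0, 1, 0, 0, '_']
```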

This feature of computational universality is a familiar one: a Mac can be programmed to imitate a PC, and vice versa. What about an analog computer? Analog computers store information on continuous voltage signals rather than on bits. And what was the first computer? Like the first tools, the first computers were rocks.

Stonehenge may well have been a big rock computer for calculating the relations between the calendar and the arrangement of the planets. The technology used in computing puts intrinsic limits on the computations that can be performed (think rocks versus an Intel Pentium IV). And to deal with large numbers, you need a lot of rocks.

And to deal with large numbers, you need a lot of rocks. Then it was discovered that if you used beads instead of rocks and mounted them on wooden rods, the beads were not only easy to move back and forth but also hard to lose. The wooden computer, or abacus, is a powerful calculating tool. Before the invention of electronic computers, a trained abacus operator could out-calculate a trained adding-machine operator. But the abacus is not merely a convenient machine for manipulating pebbles.

It embodies a powerful mathematical abstraction: zero. The concept of zero is the crucial piece of the Arabic number system—a system allowing arbitrarily large numbers to be represented and manipulated with ease—and the abacus is its mechanical incorporation. But which came first? Given the origin of the word for zero and the antiquity of the first abacus, it seems likely that the machine did. Sometimes, machines make ideas. Ideas also make machines.
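The abstraction in question is positional notation: each rod of the abacus carries one digit, and an empty rod, a zero, holds the other digits in their places. A small illustration (added here, not from the original text):

```python
# Positional notation, abacus-style: each rod carries one decimal digit,
# and an empty rod (zero) is what keeps the other digits in their places.
def rods_to_number(rods):
    # rods listed from most to least significant, e.g. [4, 0, 7]
    value = 0
    for beads in rods:
        value = value * 10 + beads
    return value

print(rods_to_number([4, 0, 7]))  # 407: without the zero rod,
                                  # 4 and 7 would collapse into 47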

First rock, then wood: what material would supply the next advance in information processing?

In the early seventeenth century, the Scottish mathematician John Napier discovered a way of changing the process of multiplication into addition. He carved ivory into bars, ruled marks corresponding to numbers on the bars, and then performed multiplication by sliding the bars alongside each other until the marks corresponding to the two numbers lined up. The total length of the two bars together then gave the product of the two numbers. The slide rule was born.
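The mathematical trick behind the bars is the logarithm: since log(ab) = log a + log b, multiplication reduces to adding lengths on logarithmically ruled scales. A sketch of the principle (not of Napier's actual instrument):

```python
import math

# A slide rule multiplies by adding lengths on logarithmic scales:
# log(a * b) = log(a) + log(b).
def slide_rule_multiply(a, b):
    length = math.log10(a) + math.log10(b)   # slide the bars together
    return 10 ** length                      # read the product off the scale

print(slide_rule_multiply(23, 17))  # ~391.0, up to floating-point error
```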

In the beginning of the nineteenth century, an eccentric Englishman named Charles Babbage proposed making computers out of metal. Each gear would register information by its position, and then process that information by meshing with other gears and turning. His proposed Analytical Engine was designed with a central processing unit and a memory bank that could hold both program and data.

Although effective mechanical calculators were available by the end of the nineteenth century, large-scale working computers had to await the development of electronic circuit technology in the beginning of the twentieth.

By the late 1930s, an international competition had arisen between various groups to construct computers using electronic switches such as vacuum tubes or electromechanical relays.

The first simple electronic computer was built by Konrad Zuse in Germany in 1941, followed by large-scale computers in the United States and Great Britain later in the 1940s. In the 1950s, vacuum tubes and electromechanical relays were replaced by transistors, semiconductor switches that were smaller and more reliable and required less energy. A semiconductor is a material such as silicon that conducts electricity better than insulators such as glass or rubber, but less well than conductors such as copper.

Starting in the late 1950s, the transistors were made still smaller by etching them on silicon-based integrated circuits, which collected all the components needed to process information on one semiconductor chip.

The anthropic principle (AP) by itself does not have any additional predictive power. For example, it does not predict that tomorrow the sun will shine in the Sahara, or that gravity will work in quite the same way: neither rain in the Sahara nor certain changes of gravity would destroy us, and thus would be allowed by the AP.

To make nontrivial predictions about the future we need more than the AP; see below.

Predictions for universes sampled from any computable probability distribution

To make better predictions, can we postulate any reasonable nontrivial constraints on the prior probability distribution on our possible futures?

The distribution should at least be computable in the limit. That is, there should exist a program that takes as an input any beginning of the universe history as well as a next possible event, and produces an output converging on the conditional probability of the event.
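Formally (paraphrasing the requirement just stated), the measure \(\mu\) over histories is computable in the limit if there exists a program \(P\) such that, for every history prefix \(x\) and possible next event \(e\),

$$ \lim_{t \to \infty} P(x, e, t) = \mu(e \mid x), $$

where \(P(x, e, t)\) denotes the program's output after \(t\) steps of computation. Convergence is required, but no bound on its rate need be computable.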

If there were no such program, we could not even formally specify our universe, let alone write reasonable scientific papers about it.
