Nailing Down Conclusions

In everything I have been describing, I have stuck closely to the demonstrated facts of chemistry and molecular biology. Still, people regularly raise certain questions rooted in physics and biology. These deserve more direct answers.

Will the uncertainty principle of quantum physics make molecular machines unworkable?

This principle states (among other things) that particles can't be pinned down in an exact location for any length of time. It limits what molecular machines can do, just as it limits what anything else can do. Nonetheless, calculations show that the uncertainty principle places few important limits on how well atoms can be held in place, at least for the purposes outlined here. The uncertainty principle makes electron positions quite fuzzy, and in fact this fuzziness determines the very size and structure of atoms. An atom as a whole, however, has a comparatively definite position set by its comparatively massive nucleus. If atoms didn't stay put fairly well, molecules would not exist. One needn't study quantum mechanics to trust these conclusions, because molecular machines in the cell demonstrate that molecular machines work.

Will the molecular vibrations of heat make molecular machines unworkable or too unreliable for use?

Thermal vibrations will cause greater problems than will the uncertainty principle, yet here again existing molecular machines directly demonstrate that molecular machines can work at ordinary temperatures. Despite thermal vibrations, the DNA-copying machinery in some cells makes fewer than one error in 100,000,000,000 operations. To achieve this accuracy, however, cells use machines (such as the enzyme DNA polymerase I) that proofread the copy and correct errors. Assemblers may well need similar error-checking and error-correcting abilities, if they are to produce reliable results.

Will radiation disrupt molecular machines and render them unusable?
High-energy radiation can break chemical bonds and disrupt molecular machines. Living cells once again show that solutions exist: they operate for years by repairing and replacing radiation-damaged parts. Because individual machines are so tiny, however, they present small targets for radiation and are seldom hit. Still, if a system of nanomachines must be reliable, then it will have to tolerate a certain amount of damage, and damaged parts must regularly be repaired or replaced. This approach to reliability is well known to designers of aircraft and spacecraft.

Since evolution has failed to produce assemblers, does this show that they are either impossible or useless?

The earlier questions were answered in part by pointing to the working molecular machinery of cells. This makes a simple and powerful case that natural law permits small clusters of atoms to behave as controlled machines, able to build other nanomachines. Yet despite their basic resemblance to ribosomes, assemblers will differ from anything found in cells; the things they do - while consisting of ordinary molecular motions and reactions - will have novel results. No cell, for example, makes diamond fiber.

The idea that new kinds of nanomachinery will bring new, useful abilities may seem startling: in all its billions of years of evolution, life has never abandoned its basic reliance on protein machines. Does this suggest that improvements are impossible, though? Evolution progresses through small changes, and the evolution of DNA cannot easily replace DNA. Since the DNA/RNA/ribosome system is specialized to make proteins, life has had no real opportunity to evolve an alternative. Any production manager can well appreciate the reasons; even more than a factory, life cannot afford to shut down to replace its old systems. Improved molecular machinery should no more surprise us than alloy steel being ten times stronger than bone, or copper wires transmitting signals a million times faster than nerves.
Cars outspeed cheetahs, jets outfly falcons, and computers already outcalculate head-scratching humans. The future will bring further examples of improvements on biological evolution, of which second-generation nanomachines will be but one.

In physical terms, it is clear enough why advanced assemblers will be able to do more than existing protein machines. They will be programmable like ribosomes, but they will be able to use a wider range of tools than all the enzymes in a cell put together. Because they will be made of materials far stronger, stiffer, and more stable than proteins, they will be able to exert greater forces, move with greater precision, and endure harsher conditions. Like an industrial robot arm - but unlike anything in a living cell - they will be able to rotate and move molecules in three dimensions under programmed control, making possible the precise assembly of complex objects. These advantages will enable them to assemble a far wider range of molecular structures than living cells have done.

Is there some special magic about life, essential to making molecular machinery work?

One might doubt that artificial nanomachines could even equal the abilities of nanomachines in the cell, if there were reason to think that cells contained some special magic that makes them work. This idea is called "vitalism." Biologists have abandoned it because they have found chemical and physical explanations for every aspect of living cells yet studied, including their motion, growth, and reproduction. Indeed, this knowledge is the very foundation of biotechnology.

Nanomachines floating in sterile test tubes, free of cells, have been made to perform all the basic sorts of activities that they perform inside living cells. Starting with chemicals that can be made from smoggy air, biochemists have built working protein machines without help from cells. R. B. Merrifield, for example, used chemical techniques to assemble simple amino acids to make bovine pancreatic ribonuclease, an enzymatic device that disassembles RNA molecules. Life is special in structure, in behavior, and in what it feels like from the inside to be alive, yet the laws of nature that govern the machinery of life also govern the rest of the universe.

The case for the feasibility of assemblers and other nanomachines may sound firm, but why not just wait and see whether they can be developed?

Sheer curiosity seems reason enough to examine the possibilities opened by nanotechnology, but there are stronger reasons. These developments will sweep the world within ten to fifty years - that is, within the expected lifetimes of ourselves or our families. What is more, the conclusions of the following chapters suggest that a wait-and-see policy would be very expensive - that it would cost many millions of lives, and perhaps end life on Earth.

Is the case for the feasibility of nanotechnology and assemblers firm enough that they should be taken seriously?

It seems so, because the heart of the case rests on two well-established facts of science and engineering: (1) that existing molecular machines serve a range of basic functions, and (2) that parts serving these basic functions can be combined to build complex machines. Since chemical reactions can bond atoms together in diverse ways, and since molecular machines can direct chemical reactions according to programmed instructions, assemblers definitely are feasible.
Nanocomputers

Assemblers will bring one breakthrough of obvious and basic importance: engineers will use them to shrink the size and cost of computer circuits and speed their operation by enormous factors.

With today's bulk technology, engineers make patterns on silicon chips by throwing atoms and photons at them, but the patterns remain flat and molecular-scale flaws are unavoidable. With assemblers, however, engineers will build circuits in three dimensions, and build to atomic precision. The exact limits of electronic technology today remain uncertain because the quantum behavior of electrons in complex networks of tiny structures presents complex problems, some of them resulting directly from the uncertainty principle. Whatever the limits are, though, they will be reached with the help of assemblers.

The fastest computers will use electronic effects, but the smallest may not. This may seem odd, yet the essence of computation has nothing to do with electronics. A digital computer is a collection of switches able to turn one another on and off. Its switches start in one pattern (perhaps representing 2 + 2), then switch one another into a new pattern (representing 4), and so on. Such patterns can represent almost anything. Engineers build computers from tiny electrical switches connected by wires simply because mechanical switches connected by rods or strings would be big, slow, unreliable, and expensive today.

The idea of a purely mechanical computer is scarcely new. In England during the mid-1800s, Charles Babbage invented a mechanical computer built of brass gears; his co-worker Augusta Ada, the Countess of Lovelace, invented computer programming. Babbage's endless redesigning of the machine, problems with accurate manufacturing, and opposition from budget-watching critics (some doubting the usefulness of computers!) combined to prevent its completion.
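The "patterns of switches" picture of computation described above can be made concrete in a few lines of code. The sketch below (a purely illustrative model of my own, not anything from the text) builds every logic gate out of a single switch-like primitive - NAND, a switch that turns off only when both inputs are on - and wires those gates into a ripple-carry adder, so that the switch pattern representing 2 + 2 settles into the pattern representing 4:

```python
# A computer as nothing but switches: every gate here is built from one
# switch-like primitive (NAND), and the gates switch one another from the
# input pattern (e.g. 2 + 2) into the output pattern (4).

def nand(a: bool, b: bool) -> bool:
    """The single primitive switch: off only when both inputs are on."""
    return not (a and b)

def xor(a: bool, b: bool) -> bool:
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def and_(a: bool, b: bool) -> bool:
    t = nand(a, b)
    return nand(t, t)

def or_(a: bool, b: bool) -> bool:
    return nand(nand(a, a), nand(b, b))

def full_adder(a: bool, b: bool, cin: bool):
    """One column of binary addition: returns (sum bit, carry out)."""
    s1 = xor(a, b)
    return xor(s1, cin), or_(and_(a, b), and_(s1, cin))

def add(x: int, y: int, bits: int = 4) -> int:
    """Ripple-carry addition: each column's carry switches the next."""
    carry, total = False, 0
    for i in range(bits):
        a, b = bool(x >> i & 1), bool(y >> i & 1)
        s, carry = full_adder(a, b, carry)
        total |= int(s) << i
    return total

print(add(2, 2))  # 4
```

Whether the NAND switches are transistors, brass gears, or molecular rods makes no difference to the result - which is the point of the passage above.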
In this tradition, Danny Hillis and Brian Silverman of the MIT Artificial Intelligence Laboratory built a special-purpose mechanical computer able to play tic-tac-toe. Yards on a side, full of rotating shafts and movable frames that represent the state of the board and the strategy of the game, it now stands in the Computer Museum in Boston. It looks much like a large ball-and-stick molecular model, for it is built of Tinkertoys.

Brass gears and Tinkertoys make for big, slow computers. With components a few atoms wide, though, a simple mechanical computer would fit within 1/100 of a cubic micron, many billions of times more compact than today's so-called microelectronics. Even with a billion bytes of storage, a nanomechanical computer could fit in a box a micron wide, about the size of a bacterium. And it would be fast. Although mechanical signals move about 100,000 times slower than the electrical signals in today's machines, they will need to travel only 1/1,000,000 as far, and thus will face less delay. So a mere mechanical computer will work faster than the electronic whirlwinds of today.

Electronic nanocomputers will likely be thousands of times faster than electronic microcomputers - perhaps hundreds of thousands of times faster, if a scheme proposed by Nobel Prize-winning physicist Richard Feynman works out. Increased speed through decreased size is an old story in electronics.
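The delay comparison above is simple arithmetic, worked through below. The 100,000-fold speed penalty and 1,000,000-fold shorter path come from the text; the absolute electronic figures (a signal path of about a centimeter, traveling at about two-thirds the speed of light in wires) are illustrative assumptions of mine, chosen only to show how the ratio falls out:

```python
# Back-of-envelope check: slower signals over much shorter paths mean
# LESS total delay. Absolute electronic figures below are assumed for
# illustration; the two scaling factors come from the text.

electronic_speed = 2e8    # assumed signal speed in wires, m/s (~2/3 c)
electronic_path = 1e-2    # assumed signal path in a microcomputer, m (~1 cm)

mechanical_speed = electronic_speed / 1e5   # "about 100,000 times slower"
mechanical_path = electronic_path / 1e6     # "only 1/1,000,000 as far"

delay_electronic = electronic_path / electronic_speed
delay_mechanical = mechanical_path / mechanical_speed

# Ratio ~10: the mechanical signal faces about ten times less delay.
print(delay_electronic / delay_mechanical)
```

Note that the particular electronic figures cancel out of the ratio: dividing distance by 1,000,000 while dividing speed by only 100,000 leaves a net factor of ten in the mechanical computer's favor, whatever the starting numbers.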