
ARTIFICIAL INTELLIGENCE AS A POSITIVE AND NEGATIVE FACTOR IN GLOBAL RISK
Eliezer S. Yudkowsky

In the field of AI it is daring enough to openly discuss human-level AI, after the field's past experiences with such discussion. There is the temptation to congratulate yourself on daring so much, and then stop. Discussing transhuman AI would seem ridiculous and unnecessary, after daring so much already. (But there is no privileged reason why AIs would slowly climb all the way up the scale of intelligence, and then halt forever exactly on the human dot.) Daring to speak of Friendly AI, as a precaution against the global catastrophic risk of transhuman AI, would be two levels up from the level of daring that is just daring enough to be seen as transgressive and courageous.

There is also a pragmatic objection which concedes that Friendly AI is an important problem, but worries that, given our present state of understanding, we simply are not in a position to tackle Friendly AI: If we try to solve the problem right now, we'll just fail, or produce anti-science instead of science.

And this objection is worth worrying about. It appears to me that the knowledge is out there — that it is possible to study a sufficiently large body of existing knowledge, and then tackle Friendly AI without smashing face-first into a brick wall — but the knowledge is scattered across multiple disciplines: Decision theory and evolutionary psychology and probability theory and evolutionary biology and cognitive psychology and information theory and the field traditionally known as "Artificial Intelligence"… There is no curriculum that has already prepared a large pool of existing researchers to make progress on Friendly AI.

The "ten-year rule" for genius, validated across fields ranging from math to music to competitive tennis, states that no one achieves outstanding performance in any field without at least ten years of effort (Hayes 1981). Mozart began composing symphonies at age 4, but they weren't Mozart symphonies — it took another 13 years for Mozart to start composing outstanding symphonies (Weisberg 1986). My own experience with the learning curve reinforces this worry. If we want people who can make progress on Friendly AI, then they have to start training themselves, full-time, years before they are urgently needed.

If tomorrow the Bill and Melinda Gates Foundation allocated a hundred million dollars of grant money for the study of Friendly AI, then a thousand scientists would at once begin to rewrite their grant proposals to make them appear relevant to Friendly AI. But they would not be genuinely interested in the problem — witness that they did not show curiosity before someone offered to pay them. While Artificial General Intelligence is unfashionable and Friendly AI is entirely off the radar, we can at least assume that anyone speaking about the problem is genuinely interested in it. If you throw too much money at a problem that a field is not prepared to solve, the excess money is more likely to produce anti-science than science — a mess of false solutions.

I cannot regard this verdict as good news. We would all be much safer if Friendly AI could be solved by piling on warm bodies and silver. But as of 2006 I strongly doubt that this is the case — the field of Friendly AI, and Artificial Intelligence itself, is too much in a state of chaos. Yet if the one argues that we cannot yet make progress on Friendly AI, that we know too little, we should ask how long the one has studied before coming to this conclusion. Who can say what science does not know? There is far too much science for any one human being to learn. Who can say that we are not ready for a scientific revolution, in advance of the surprise? And if we cannot make progress on Friendly AI because we are not prepared, this does not mean we do not need Friendly AI. Those two statements are not at all equivalent!

So if we find that we cannot make progress on Friendly AI, then we need to figure out how to exit that regime as fast as possible! There is no guarantee whatsoever that, just because we can’t manage a risk, the risk will obligingly go away.

If unproven brilliant young scientists become interested in Friendly AI of their own accord, then I think it would be very much to the benefit of the human species if they could apply for a multi-year grant to study the problem full-time. Some funding for Friendly AI is needed to this effect — considerably more funding than presently exists. But I fear that in these beginning stages, a Manhattan Project would only increase the ratio of noise to signal.

Conclusion

It once occurred to me that modern civilization occupies an unstable state. I.J. Good’s hypothesized intelligence explosion describes a dynamically unstable system, like a pen precariously balanced on its tip. If the pen is exactly vertical, it may remain upright; but if the pen tilts even a little from the vertical, gravity pulls it farther in that direction, and the process accelerates. So too would smarter systems have an easier time making themselves smarter.
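
To make the pen metaphor concrete, here is a minimal numerical sketch (mine, not Good's; the growth rate k and step count are arbitrary illustrative parameters): any deviation from the balance point grows geometrically, in whichever direction it started.

    # Toy model of an unstable equilibrium (illustrative only).
    # dev0 = how far the system sits from exact balance; each round the
    # deviation grows by a factor (1 + k), whichever way it points --
    # like the pen tipping further once it starts to lean.
    def tilt(dev0, k=0.1, steps=50):
        return dev0 * (1.0 + k) ** steps

    print(tilt(+0.01))  # slight surplus of self-improvement: runaway growth
    print(tilt(-0.01))  # slight deficit: slides away in the other direction
    print(tilt(0.0))    # exactly balanced: stays put, but only exactly there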

A dead planet, lifelessly orbiting its star, is also stable. Unlike an intelligence explosion, extinction is not a dynamic attractor — there is a large gap between almost extinct, and extinct. Even so, total extinction is stable.

Must not our civilization eventually wander into one mode or the other?

As logic, the above argument contains holes. The Giant Cheesecake Fallacy, for example: minds do not blindly wander into attractors; they have motives. Even so, I suspect that, pragmatically speaking, our alternatives boil down to becoming smarter or becoming extinct.

Nature is, not cruel, but indifferent; a neutrality which often seems indistinguishable from outright hostility. Reality throws at you one challenge after another, and when you run into a challenge you can’t handle, you suffer the consequences. Often Nature poses requirements that are grossly unfair, even on tests where the penalty for failure is death. How is a 10th-century medieval peasant supposed to invent a cure for tuberculosis? Nature does not match her challenges to your skill, or your resources, or how much free time you have to think about the problem. And when you run into a lethal challenge too difficult for you, you die. It may be unpleasant to think about, but that has been the reality for humans, for thousands upon thousands of years. The same thing could as easily happen to the whole human species, if the human species runs into an unfair challenge.

If human beings did not age, so that 100-year-olds had the same death rate as 15-year-olds, we would not be immortal. We would last only until the probabilities caught up with us. To live even a million years, as an unaging human in a world as risky as our own, you must somehow drive your annual probability of accident down to nearly zero. You may not drive; you may not fly; you may not walk across the street even after looking both ways, for it is still too great a risk. Even if you abandoned all thoughts of fun, gave up living to preserve your life, you couldn't navigate a million-year obstacle course. It would be, not physically impossible, but cognitively impossible.
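
A back-of-the-envelope check of that claim (my arithmetic, not a quotation from any source): if each year carries an independent probability p of fatal accident, the chance of lasting a million years is (1 - p)^1,000,000, and even a coin-flip chance of making it requires p below roughly seven in ten million.

    YEARS = 1_000_000

    def survival(p_annual, years=YEARS):
        # Independent annual risks compound multiplicatively.
        return (1.0 - p_annual) ** years

    def max_annual_risk(target, years=YEARS):
        # Solve (1 - p) ** years = target for p.
        return 1.0 - target ** (1.0 / years)

    print(survival(1e-4))        # "one in ten thousand per year" -> ~3.7e-44
    print(max_annual_risk(0.5))  # ~6.9e-7 per year for even a 50% chance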

The human species, Homo sapiens, is unaging but not immortal. Hominids have survived this long only because, for the last million years, there were no arsenals of hydrogen bombs, no spaceships to steer asteroids toward Earth, no biological weapons labs to produce superviruses, no recurring annual prospect of nuclear war or nanotechnological war or rogue Artificial Intelligence. To survive any appreciable time, we need to drive down each risk to nearly zero. "Fairly good" is not good enough to last another million years.
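
The same arithmetic extended to several hazards at once (the risk categories and numbers below are invented for illustration) shows why "fairly good" management of each risk still fails on these timescales:

    # Hypothetical hazards, each managed "fairly well" at 1-in-1,000 per year.
    annual_risks = {"nuclear": 1e-3, "bio": 1e-3, "rogue AI": 1e-3}

    p_year = 1.0
    for p in annual_risks.values():
        p_year *= 1.0 - p        # survive every hazard in a given year

    print(p_year ** 1_000)       # odds of lasting 1,000 years: ~0.05
    print(p_year ** 1_000_000)   # a million years: underflows to 0.0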

It seems like an unfair challenge. Such competence is not historically typical of human institutions, no matter how hard they try. For decades the U.S. and the U.S.S.R. avoided nuclear war, but not perfectly; there were close calls, such as the Cuban Missile Crisis in 1962. If we postulate that future minds exhibit the same mixture of foolishness and wisdom, the same mixture of heroism and selfishness, as the minds we read about in history books — then the game of existential risk is already over; it was lost from the beginning. We might survive for another decade, even another century, but not another million years.

But the human mind is not the limit of the possible. Homo sapiens represents the first general intelligence. We were born into the uttermost beginning of things, the dawn of mind. With luck, future historians will look back and describe the present world as an awkward in-between stage of adolescence, when humankind was smart enough to create tremendous problems for itself, but not quite smart enough to solve them.

Yet before we can pass out of that stage of adolescence, we must, as adolescents, confront an adult problem: the challenge of smarter-than-human intelligence. This is the way out of the high-mortality phase of the life cycle, the way to close the window of vulnerability; it is also probably the single most dangerous risk we face. Artificial Intelligence is one road into that challenge; and I think it is the road we will end up taking. I think that, in the end, it will prove easier to build a 747 from scratch, than to scale up an existing bird or graft on jet engines.

I do not want to play down the colossal audacity of trying to build, to a precise purpose and design, something smarter than ourselves. But let us pause and recall that intelligence is not the first thing human science has ever encountered which proved difficult to understand. Stars were once mysteries, and chemistry, and biology. Generations of investigators tried and failed to understand those mysteries, and they acquired the reputation of being impossible to mere science. Once upon a time, no one understood why some matter was inert and lifeless, while other matter pulsed with blood and vitality. No one knew how living matter reproduced itself, or why our hands obeyed our mental orders. Lord Kelvin wrote:

"The influence of animal or vegetable life on matter is infinitely beyond the range of any scientific inquiry hitherto entered on. Its power of directing the motions of moving particles, in the demonstrated daily miracle of our human free-will, and in the growth of generation after generation of plants from a single seed, are infinitely different from any possible result of the fortuitous concurrence of atoms." (Quoted in MacFie 1912.)

All scientific ignorance is hallowed by ancientness. Each and every absence of knowledge dates back to the dawn of human curiosity; and the hole lasts through the ages, seemingly eternal, right up until someone fills it. I think it is possible for mere fallible humans to succeed on the challenge of building Friendly AI. But only if intelligence ceases to be a sacred mystery to us, as life was a sacred mystery to Lord Kelvin. Intelligence must cease to be any kind of mystery whatever, sacred or not. We must execute the creation of Artificial Intelligence as the exact application of an exact art. And maybe then we can win.

Bibliography

Asimov, I. 1942. Runaround. Astounding Science Fiction, March 1942.

Barrett, J. L. and Keil, F. 1996. Conceptualizing a non-natural entity: Anthropomorphism in God concepts. Cognitive Psychology, 31: 219-247.

Bostrom, N. 1998. How long before superintelligence? Int. Jour. of Future Studies.

Bostrom, N. 2001. Existential Risks: Analyzing Human Extinction Scenarios. Journal of Evolution and Technology.

Brown, D.E. 1991. Human universals. New York: McGraw-Hill.

Crochat, P. and Franklin, D. 2000. Back-Propagation Neural Network Tutorial. http://ieee.uow.edu.au/~daniel/software/libneural/

Deacon, T. 1997. The symbolic species: The co-evolution of language and the brain. New York: Norton.

Drexler, K. E. 1992. Nanosystems: Molecular Machinery, Manufacturing, and Computation. New York: Wiley-Interscience.

Ekman, P. and Keltner, D. 1997. Universal facial expressions of emotion: an old controversy and new findings. In Nonverbal communication: where nature meets culture, eds. U. Segerstrale and P. Molnar. Mahwah, NJ: Lawrence Erlbaum Associates.

Good, I. J. 1965. Speculations Concerning the First Ultraintelligent Machine. Pp. 31-88 in Advances in Computers, vol. 6, eds. F. L. Alt and M. Rubinoff. New York: Academic Press.

Hayes, J. R. 1981. The complete problem solver. Philadelphia: Franklin Institute Press.

Hibbard, B. 2001. Super-intelligent machines. ACM SIGGRAPH Computer Graphics 35 (1).

Hibbard, B. 2004. Reinforcement learning as a Context for Integrating AI Research. Presented at the 2004 AAAI Fall Symposium on Achieving Human-Level Intelligence through Integrated Systems and Research.

Hofstadter, D. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Random House.

Jaynes, E.T. and Bretthorst, G. L. 2003. Probability Theory: The Logic of Science. Cambridge: Cambridge University Press.

Jensen, A. R. 1999. The G Factor: the Science of Mental Ability. Psycoloquy 10 (23).

MacFie, R. C. 1912. Heredity, Evolution, and Vitalism: Some of the discoveries of modern research into these matters – their trend and significance. New York: William Wood and Company.

McCarthy, J., Minsky, M. L., Rochester, N. and Shannon, C. E. 1955. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.

Merkle, R. C. 1989. Large scale analysis of neural structure. Xerox PARC Technical Report CSL-89-10. November, 1989.

Merkle, R. C. and Drexler, K. E. 1996. Helical Logic. Nanotechnology: 325-339.

Minsky, M. L. 1986. The Society of Mind. New York: Simon and Schuster.

Monod, J. L. 1974. On the Molecular Theory of Evolution. New York: Oxford.

Moravec, H. 1988. Mind Children: The Future of Robot and Human Intelligence. Cambridge: Harvard University Press.

Moravec, H. 1999. Robot: Mere Machine to Transcendent Mind. New York: Oxford University Press.

Raymond, E. S. ed. 2003. DWIM. The on-line hacker Jargon File, version 4.4.7, 29 Dec 2003.

Rhodes, R. 1986. The Making of the Atomic Bomb. New York: Simon & Schuster.

Rice, H. G. 1953. Classes of Recursively Enumerable Sets and Their Decision Problems. Trans. Amer. Math. Soc., 74: 358-366.

Russell, S. J. and Norvig, P. Artificial Intelligence: A Modern Approach. Pp. 962-964. New Jersey: Prentice Hall.

Sandberg, A. 1999. The Physics of Information Processing Superobjects: Daily Life Among the Jupiter Brains. Journal of Evolution and Technology.

Schmidhuber, J. 2003. Goedel machines: self-referential universal problem solvers making provably optimal self-improvements. In Artificial General Intelligence, eds. B. Goertzel and C. Pennachin. Forthcoming. New York: Springer-Verlag.

Sober, E. 1984. The nature of selection. Cambridge, MA: MIT Press.

Tooby, J. and Cosmides, L. 1992. The psychological foundations of culture. In The adapted mind: Evolutionary psychology and the generation of culture, eds. J. H. Barkow, L. Cosmides and J. Tooby. New York: Oxford University Press.

Vinge, V. 1993. The Coming Technological Singularity. Presented at the VISION-21 Symposium, sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute. March, 1993.

Wachowski, A. and Wachowski, L. 1999. The Matrix, USA, Warner Bros, 135 min.

Weisberg, R. 1986. Creativity, genius and other myths. New York: W. H. Freeman.

Williams, G. C. 1966. Adaptation and Natural Selection: A critique of some current evolutionary thought. Princeton, NJ: Princeton University Press.