The Innovators: How a Group of Inventors, Hackers, Geniuses, and Geeks Created the Digital Revolution, by Walter Isaacson

Ada might also be justified in boasting that she was correct, at least thus far, in her more controversial contention: that no computer, no matter how powerful, would ever truly be a “thinking” machine. A century after she died, Alan Turing dubbed this “Lady Lovelace’s Objection” and tried to dismiss it by providing an operational definition of a thinking machine—that a person submitting questions could not distinguish the machine from a human—and predicting that a computer would pass this test within a few decades. But it’s now been more than sixty years, and the machines that attempt to fool people on the test are at best engaging in lame conversation tricks rather than actual thinking. Certainly none has cleared Ada’s higher bar of being able to “originate” any thoughts of its own.

Ever since Mary Shelley conceived her Frankenstein tale during a vacation with Ada’s father, Lord Byron, the prospect that a man-made contraption might originate its own thoughts has unnerved generations. The Frankenstein motif became a staple of science fiction. A vivid example was Stanley Kubrick’s 1968 movie, 2001: A Space Odyssey, featuring the frighteningly intelligent computer HAL. With its calm voice, HAL exhibits attributes of a human: the ability to speak, reason, recognize faces, appreciate beauty, show emotion, and (of course) play chess. When HAL appears to malfunction, the human astronauts decide to shut it down. HAL becomes aware of the plan and kills all but one of them. After a lot of heroic struggle, the remaining astronaut gains access to HAL’s cognitive circuits and disconnects them one by one. HAL regresses until, at the end, it intones “Daisy Bell”—an homage to the first computer-generated song, sung by an IBM 704 at Bell Labs in 1961.

Artificial intelligence enthusiasts have long been promising, or threatening, that machines like HAL would soon emerge and prove Ada wrong. Such was the premise of the 1956 conference at Dartmouth organized by John McCarthy and Marvin Minsky, where the field of artificial intelligence was launched. The conferees concluded that a breakthrough was about twenty years away. It wasn’t. Decade after decade, new waves of experts have claimed that artificial intelligence was on the visible horizon, perhaps only twenty years away. Yet it has remained a mirage, always about twenty years away.

John von Neumann was working on the challenge of artificial intelligence shortly before he died in 1957. Having helped devise the architecture of modern digital computers, he realized that the architecture of the human brain is fundamentally different. Digital computers deal in precise units, whereas the brain, to the extent we understand it, is also partly an analog system, which deals with a continuum of possibilities. In other words, a human’s mental process includes many signal pulses and analog waves from different nerves that flow together to produce not just binary yes-no data but also answers such as “maybe” and “probably” and infinite other nuances, including occasional bafflement. Von Neumann suggested that the future of intelligent computing might require abandoning the purely digital approach and creating “mixed procedures” that include a combination of digital and analog methods. “Logic will have to undergo a pseudomorphosis to neurology,” he declared, which, roughly translated, meant that computers were going to have to become more like the human brain.1
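To make the contrast concrete, here is a minimal Python sketch; it is my illustration of the distinction, not anything von Neumann wrote. A digital unit snaps every signal to a hard yes or no, while an analog-style unit returns a graded value over a continuum that can express "maybe" and "probably."

```python
import math

def digital_unit(signal, threshold=0.5):
    # Precise, all-or-nothing: the kind of unit a digital computer deals in.
    return 1 if signal > threshold else 0

def analog_unit(signal, threshold=0.5, steepness=8.0):
    # Graded response over a continuum: a logistic curve standing in for
    # the "maybe" and "probably" of overlapping nerve signals.
    return 1.0 / (1.0 + math.exp(-steepness * (signal - threshold)))

for s in (0.20, 0.45, 0.50, 0.55, 0.80):
    print(f"signal {s:.2f} -> digital {digital_unit(s)}, analog {analog_unit(s):.2f}")
```

A "mixed procedure" in von Neumann's sense would combine both kinds of unit rather than insisting that every intermediate result be a crisp 0 or 1.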

In 1958 a Cornell professor, Frank Rosenblatt, attempted to do this by devising a mathematical approach for creating an artificial neural network like that of the brain, which he called a Perceptron. Using weighted statistical inputs, it could, in theory, process visual data. When the Navy, which was funding the work, unveiled the system, it drew the type of press hype that has accompanied many subsequent artificial intelligence claims. “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence,” the New York Times reported. The New Yorker was equally enthusiastic: “The Perceptron, . . . as its name implies, is capable of what amounts to original thought. . . . It strikes us as the first serious rival to the human brain ever devised.”2
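The mechanism itself is simple enough to sketch. Below is a minimal perceptron in Python, assuming the classic single-layer formulation with Rosenblatt's error-correction rule; the toy OR task and the variable names are mine, chosen for brevity, not a reconstruction of the Navy-funded hardware.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    # Rosenblatt-style perceptron: weighted inputs, a bias, and a hard
    # threshold. Weights are nudged toward each misclassified example.
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:          # target is 0 or 1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            output = 1 if activation > 0 else 0
            error = target - output        # -1, 0, or +1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Toy demonstration: learn logical OR, a linearly separable function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
for x, _ in data:
    print(x, 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0)
```

The learning rule is guaranteed to converge only when the classes are linearly separable, a limitation Minsky and Seymour Papert later made famous and which helped deflate the early hype.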

That was almost sixty years ago. The Perceptron still does not exist.3 Nevertheless, almost every year since then there have been breathless reports about some marvel on the horizon that would replicate and surpass the human brain, many of them using almost the exact same phrases as the 1958 stories about the Perceptron.

Discussion about artificial intelligence flared up a bit, at least in the popular press, after IBM’s Deep Blue, a chess-playing machine, beat the world champion Garry Kasparov in 1997 and then Watson, its natural-language question-answering computer, won at Jeopardy! against champions Brad Rutter and Ken Jennings in 2011. “I think it awakened the entire artificial intelligence community,” said IBM CEO Ginni Rometty.4 But as she was the first to admit, these were not true breakthroughs of humanlike artificial intelligence. Deep Blue won its chess match by brute force; it could evaluate 200 million positions per second and match them against 700,000 past grandmaster games. Deep Blue’s calculations were fundamentally different, most of us would agree, from what we mean by real thinking. “Deep Blue was only intelligent the way your programmable alarm clock is intelligent,” Kasparov said. “Not that losing to a $10 million alarm clock made me feel any better.”5
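The "brute force" Kasparov is deriding is, at bottom, game-tree search. The sketch below shows minimax with alpha-beta pruning, the family of techniques Deep Blue ran in custom hardware; for brevity it plays single-pile Nim rather than chess, and none of it is IBM's actual code.

```python
def minimax(stones, alpha, beta, maximizing):
    # Exhaustive alpha-beta search of a toy game: single-pile Nim, where
    # players alternately take 1-3 stones and whoever cannot move loses.
    # Deep Blue applied the same idea to chess, cut off at a fixed depth
    # and scored by a handcrafted evaluation function, at 200 million
    # positions per second. Speed, not thought.
    legal = [t for t in (1, 2, 3) if t <= stones]
    if not legal:                       # current player cannot move
        return -1 if maximizing else 1  # +1 means the maximizer wins
    if maximizing:
        best = -1
        for take in legal:
            best = max(best, minimax(stones - take, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:           # prune: opponent never allows this line
                break
        return best
    else:
        best = 1
        for take in legal:
            best = min(best, minimax(stones - take, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Piles that are multiples of 4 are lost for the player to move.
for pile in range(1, 9):
    print(pile, "win" if minimax(pile, -1, 1, True) == 1 else "loss")
```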

Likewise, Watson won at Jeopardy! by using megadoses of computing power: it had 200 million pages of information in its four terabytes of storage, of which the entire Wikipedia accounted for merely 0.2 percent. It could search the equivalent of a million books per second. It was also rather good at processing colloquial English. Still, no one who watched would bet on its passing the Turing Test. In fact, the IBM team leaders were afraid that the show’s writers might try to turn the game into a Turing Test by composing questions designed to trick a machine, so they insisted that only old questions from unaired contests be used. Nevertheless, the machine tripped up in ways that showed it wasn’t human. For example, one question was about the “anatomical oddity” of the former Olympic gymnast George Eyser. Watson answered, “What is a leg?” The correct answer was that Eyser was missing a leg. The problem was understanding oddity, explained David Ferrucci, who ran the Watson project at IBM. “The computer wouldn’t know that a missing leg is odder than anything else.”6
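A crude sketch can show why such a system trips up. Below is a toy version of one statistical scorer, plain word overlap between question and candidate passage; Watson's actual DeepQA pipeline combined hundreds of far subtler scorers, so this is an illustration of the principle, not IBM's method. Nothing in the score captures which fact in a passage is "odd."

```python
import string

def tokens(text):
    # Lowercase and strip punctuation so "Eyser?" matches "Eyser."
    table = str.maketrans("", "", string.punctuation)
    return set(text.lower().translate(table).split())

def overlap_score(question, passage):
    # Statistical evidence scoring by shared vocabulary: it can rank
    # passages, but it has no notion that a missing leg is odder than
    # anything else mentioned in them.
    q, p = tokens(question), tokens(passage)
    return len(q & p) / len(q)

question = "What was the anatomical oddity of Olympic gymnast George Eyser?"
passages = [
    "George Eyser was an Olympic gymnast who competed on a wooden leg.",
    "The gymnast George Eyser won six medals at the 1904 Summer Olympics.",
]
for passage in passages:
    print(f"{overlap_score(question, passage):.2f}  {passage}")
```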

John Searle, the Berkeley philosophy professor who devised the “Chinese room” rebuttal to the Turing Test, scoffed at the notion that Watson represented even a glimmer of artificial intelligence. “Watson did not understand the questions, nor its answers, nor that some of its answers were right and some wrong, nor that it was playing a game, nor that it won—because it doesn’t understand anything,” Searle contended. “IBM’s computer was not and could not have been designed to understand. Rather, it was designed to simulate understanding, to act as if it understood.”7

Even the IBM folks agreed with that. They never held Watson out to be an “intelligent” machine. “Computers today are brilliant idiots,” said the company’s director of research, John E. Kelly III, after the Deep Blue and Watson victories. “They have tremendous capacities for storing information and performing numerical calculations—far superior to those of any human. Yet when it comes to another class of skills, the capacities for understanding, learning, adapting, and interacting, computers are woefully inferior to humans.”8
