Tuesday, January 7, 2014

Thinking in Silicon

Read the full article at: http://www.technologyreview.com/featuredstory/522476/thinking-in-silicon/

Picture a person reading these words on a laptop in a coffee shop. The machine made of metal, plastic, and silicon consumes about 50 watts of power as it translates bits of information—a long string of 1s and 0s—into a pattern of dots on a screen. Meanwhile, inside that person’s skull, a gooey clump of proteins, salt, and water uses a fraction of that power not only to recognize those patterns as letters, words, and sentences but to recognize the song playing on the radio.


This computer chip, made by IBM in 2011, features components that serve as 256 neurons and 262,144 synapses.
Computers are incredibly inefficient at lots of tasks that are easy for even the simplest brains, such as recognizing images and navigating in unfamiliar spaces. Machines found in research labs or vast data centers can perform such tasks, but they are huge and energy-hungry, and they need specialized programming. Google recently made headlines with software that can reliably recognize cats and human faces in video clips, but this achievement required no fewer than 16,000 powerful processors.
A new breed of computer chips that operate more like the brain may be about to narrow the gulf between artificial and natural computation—between circuits that crunch through logical operations at blistering speed and a mechanism honed by evolution to process and act on sensory input from the real world. Advances in neuroscience and chip technology have made it practical to build devices that, on a small scale at least, process data the way a mammalian brain does. These “neuromorphic” chips may be the missing piece of many promising but unfinished projects in artificial intelligence, such as cars that drive themselves reliably in all conditions, and smartphones that act as competent conversational assistants.
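The article does not spell out what the "neurons" and "synapses" on such a chip actually compute. As a rough illustration only, the sketch below simulates a leaky integrate-and-fire neuron, a model commonly used in neuromorphic designs; the weight, leak factor, and threshold are arbitrary values chosen for the example, not parameters of any chip described here.

    # A minimal sketch of a leaky integrate-and-fire neuron, a common building
    # block in neuromorphic designs. This is a generic illustration, not the
    # model used on any particular chip; all constants below are arbitrary.

    def simulate(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
        """Accumulate weighted input spikes, leak charge each step, and fire
        (emit a 1) whenever the membrane potential crosses the threshold."""
        potential = 0.0
        output = []
        for spike in input_spikes:
            potential = potential * leak + weight * spike  # integrate with leak
            if potential >= threshold:                     # fire and reset
                output.append(1)
                potential = 0.0
            else:
                output.append(0)
        return output

    # Example: a burst of input spikes makes the neuron fire intermittently.
    print(simulate([1, 1, 0, 1, 1, 1, 0, 0, 1, 1]))

Networks of such units, wired together by weighted "synapses," respond to patterns in streams of spikes rather than stepping through a stored list of instructions, which is what makes them a different kind of machine from the processors described below.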
“Modern computers are inherited from calculators, good for crunching numbers,” says Dharmendra Modha, a senior researcher at IBM Research in Almaden, California. “Brains evolved in the real world.” Modha leads one of two groups that have built computer chips with a basic architecture copied from the mammalian brain under a $100 million project called Synapse, funded by the Pentagon’s Defense Advanced Research Projects Agency.
The prototypes have already shown early sparks of intelligence, processing images very efficiently and gaining new skills in a way that resembles biological learning. IBM has created tools to let software engineers program these brain-inspired chips; the other prototype, at HRL Laboratories in Malibu, California, will soon be installed inside a tiny robotic aircraft, from which it will learn to recognize its surroundings.
The evolution of brain-inspired chips began in the early 1980s with Carver Mead, a professor at the California Institute of Technology and one of the fathers of modern computing. Mead had made his name by helping to develop a way of designing computer chips called very large scale integration, or VLSI, which enabled manufacturers to create much more complex microprocessors. This triggered explosive growth in computation power: computers looked set to become mainstream, even ubiquitous. But the industry seemed happy to build them around one blueprint, dating from 1945. The von Neumann architecture, named after the Hungarian-born mathematician John von Neumann, is designed to execute linear sequences of instructions. All today’s computers, from smartphones to supercomputers, have just two main components: a central processing unit, or CPU, to manipulate data, and a block of random access memory, or RAM, to store the data and the instructions on how to manipulate it. The CPU begins by fetching its first instruction from memory, followed by the data needed to execute it; after the instruction is performed, the result is sent back to memory and the cycle repeats. Even multicore chips that handle data in parallel are limited to just a few simultaneous linear processes.
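To make that fetch-execute cycle concrete, here is a minimal sketch of a von Neumann style machine in Python. The four-instruction set (LOAD, ADD, STORE, HALT) is invented purely for the example; real CPUs are far more elaborate, but the step-by-step shuttling of instructions and data between memory and the processor is the same.

    # A toy simulation of the von Neumann fetch-decode-execute cycle described
    # above. The instruction set (LOAD/ADD/STORE/HALT) is invented for
    # illustration only.

    # A single memory holds both the program and its data, addressed by index.
    memory = [
        ("LOAD", 5),    # 0: copy the value at address 5 into the accumulator
        ("ADD", 6),     # 1: add the value at address 6 to the accumulator
        ("STORE", 7),   # 2: write the accumulator back to address 7
        ("HALT", None), # 3: stop
        None,           # 4: (unused)
        2,              # 5: data
        3,              # 6: data
        0,              # 7: result goes here
    ]

    pc = 0           # program counter: address of the next instruction
    accumulator = 0  # the CPU's single working register

    while True:
        op, addr = memory[pc]        # fetch the next instruction from memory
        pc += 1
        if op == "LOAD":             # fetch the data it needs...
            accumulator = memory[addr]
        elif op == "ADD":            # ...perform the operation...
            accumulator += memory[addr]
        elif op == "STORE":          # ...and send the result back to memory
            memory[addr] = accumulator
        elif op == "HALT":
            break

    print(memory[7])  # prints 5

Every step touches memory once and then returns to fetch the next instruction, which is why even very fast chips of this design work through a problem as a few linear sequences rather than millions of parallel ones.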
That approach developed naturally from theoretical math and logic, where problems are solved with linear chains of reasoning. Yet it was unsuitable for processing and learning from large amounts of data, especially sensory input such as images or sound. It also came with built-in limitations: to make computers more powerful, the industry had tasked itself with building increasingly complex chips capable of carrying out sequential operations faster and faster, but this put engineers on course for major efficiency and cooling problems, because speedier chips produce more waste heat. Mead, now 79 and a professor emeritus, sensed even then that there could be a better way. “The more I thought about it, the more it felt awkward,” he says, sitting in the office he retains at Caltech. He began dreaming of chips that processed many instructions—perhaps millions—in parallel. Such a chip could accomplish new tasks, efficiently handling large quantities of unstructured information such as video or sound. It could be more compact and use power more efficiently, even if it were more specialized for particular kinds of tasks. Evidence that this was possible could be found flying, scampering, and walking all around. “The only examples we had of a massively parallel thing were in the brains of animals,” says Mead.
