What is neuromorphic computing? Everything you need to know about how it is changing the future of computing - ZDNet

What is neuromorphic computing?

As the name suggests, neuromorphic computing uses a model that's inspired by the workings of the brain.

The brain makes a really appealing model for computing: unlike most supercomputers, which fill rooms, the brain is compact, fitting neatly in something the size of, well... your head. 

Brains also need far less energy than most supercomputers: your brain uses about 20 watts, whereas the Fugaku supercomputer needs 28 megawatts -- or to put it another way, a brain needs about 0.00007% of Fugaku's power supply. While supercomputers need elaborate cooling systems, the brain sits in a bony housing that keeps it neatly at 37°C. 

SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)

True, supercomputers make specific calculations at great speed, but the brain wins on adaptability. It can write poetry, pick a familiar face out of a crowd in a flash, drive a car, learn a new language, make decisions both good and bad, and so much more. And with traditional models of computing struggling, harnessing techniques used by our brains could be the key to vastly more powerful computers in the future.

Why do we need neuromorphic systems?

Most hardware today is based on the von Neumann architecture, which separates out memory and computing. Because von Neumann chips have to shuttle information back and forth between the memory and CPU, they waste time (computations are held back by the speed of the bus between the compute and memory) and energy -- a problem known as the von Neumann bottleneck.
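
To make that bottleneck concrete, here's a rough sketch in Python (an illustration, not a rigorous benchmark, and the array size and operation count are made-up assumptions): for a simple element-wise operation over a large array, the arithmetic per element is trivial, so the run time is dominated by shuttling data between memory and the processor.

```python
# A rough, illustrative sketch (not a rigorous benchmark) of the von Neumann
# bottleneck: for a simple element-wise operation, the arithmetic per element is
# trivial, so run time is dominated by moving data between memory and the CPU.
import time
import numpy as np

n = 20_000_000                       # ~160 MB of float64 input data (illustrative size)
a = np.random.rand(n)

start = time.perf_counter()
b = a * 2.0 + 1.0                    # just two floating-point ops per element
elapsed = time.perf_counter() - start

bytes_moved = a.nbytes + b.nbytes    # read the input, write the output (a lower bound)
print(f"time: {elapsed:.3f} s")
print(f"effective bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")
print(f"arithmetic intensity: {2 * n / bytes_moved:.3f} FLOPs per byte moved")
```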

By cramming more transistors onto these von Neumann processors, chipmakers have for a long time been able to keep adding to the amount of computing power on a chip, following Moore's Law. But problems with shrinking transistors any further, their energy requirements, and the heat they throw out mean that, without a change in chip fundamentals, this won't go on for much longer.

As time goes on, von Neumann architectures will make it harder and harder to deliver the increases in compute power that we need.

To keep up, a new type of non-von Neumann architecture will be needed: a neuromorphic architecture. Quantum computing and neuromorphic systems have both been touted as the solution, and it's neuromorphic, brain-inspired computing that's likely to be commercialised sooner.

As well as potentially overcoming the von Neumann bottleneck, a neuromorphic computer could channel the brain's workings to address other problems. While von Neumann systems are largely serial, brains use massively parallel computing. Brains are also more fault-tolerant than computers -- both advantages researchers are hoping to model within neuromorphic systems.

So how can you make a computer that works like the human brain?

First, to understand neuromorphic technology it makes sense to take a quick look at how the brain works.

Messages are carried to and from the brain via neurons, a type of nerve cell. If you step on a pin, pain receptors in the skin of your foot pick up the damage and trigger something known as an action potential -- basically, a signal to activate -- in the neuron connected to the foot. The action potential causes the neuron to release chemicals across a gap called a synapse to the next neuron, and the process repeats from neuron to neuron until the message reaches the brain. Your brain then registers the pain, at which point messages are sent from neuron to neuron until the signal reaches your leg muscles -- and you move your foot.

An action potential can be triggered either by lots of inputs arriving at once (spatial summation) or by input that builds up over time (temporal summation). These mechanisms, plus the brain's huge interconnectivity -- a single neuron can connect to around 10,000 others through synapses -- mean the brain can transfer information quickly and efficiently.
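
To see how those two triggering modes work, here's a minimal sketch of a 'leaky integrate-and-fire' neuron, the kind of simplified neuron model many neuromorphic systems use. The leak, threshold and input values are illustrative numbers, not biologically calibrated ones.

```python
# A minimal leaky integrate-and-fire neuron sketch (illustrative constants only,
# not biologically calibrated). It shows the two ways a spike can be triggered:
# many inputs at once (spatial summation) or input building up over time
# (temporal summation).

def run_lif(inputs, leak=0.9, threshold=1.0):
    """Simulate one neuron; inputs[t] is the total incoming drive at time step t."""
    potential = 0.0
    spikes = []
    for drive in inputs:
        potential = potential * leak + drive   # membrane potential leaks, then charges
        if potential >= threshold:             # enough charge accumulated: fire
            spikes.append(1)
            potential = 0.0                    # reset after the spike
        else:
            spikes.append(0)
    return spikes

# Spatial summation: 20 weak synapses all active in the same time step.
print(run_lif([20 * 0.06, 0.0, 0.0, 0.0, 0.0]))   # -> [1, 0, 0, 0, 0]

# Temporal summation: a single weak input repeated until the charge builds up.
print(run_lif([0.3] * 10))                         # -> fires on the fourth step
```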

SEE: Neuromorphic computing finds new life in machine learning

Neuromorphic computing models the way the brain works through spiking neural networks. Conventional computing is based on transistors that are either on or off, one or zero. Spiking neural networks can convey information in the same temporal and spatial ways the brain does, and so can produce far richer outputs than a simple one or zero. Neuromorphic systems can be either digital or analogue, with the role of synapses played by either software or memristors.
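
One simple illustration of how a spike train can carry more than a bare one-or-zero is rate coding, where the proportion of time steps containing a spike encodes a graded value. The sketch below assumes plain rate coding purely for illustration; real spiking systems also exploit the precise timing of spikes.

```python
# A small, purely illustrative rate-coding sketch: the fraction of time steps
# containing a spike (the firing rate) encodes a graded value between 0 and 1,
# rather than a single binary output.
import random

def encode_rate(value, steps=1000):
    """Emit a spike train whose firing rate is proportional to `value` in [0, 1]."""
    return [1 if random.random() < value else 0 for _ in range(steps)]

def decode_rate(spikes):
    """Recover the value as the fraction of time steps that contained a spike."""
    return sum(spikes) / len(spikes)

train = encode_rate(0.37)
print(decode_rate(train))   # roughly 0.37: the spike train carries a graded value
```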

Memristors could also come in handy in modelling another useful element of the brain: synapses' ability to store information as well as transmit it. Memristors can store a range of values, rather than just the traditional one and zero, allowing them to mimic the way the strength of the connection between two neurons can vary. Changing those weights in artificial synapses is one way of allowing brain-based systems to learn.
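
As a toy illustration of that idea, the sketch below models an artificial synapse whose weight can sit anywhere in a continuous range and is nudged up or down by activity. The Hebbian-style update rule, the class and the numbers are assumptions chosen for clarity; real memristive devices follow their own device physics.

```python
# A toy memristor-like artificial synapse: its weight sits anywhere in a
# continuous range (not just 0 or 1) and is nudged up or down by activity.
# The Hebbian-style update rule and the numbers are illustrative assumptions.

class Synapse:
    def __init__(self, weight=0.5, rate=0.05):
        self.weight = weight            # analogue "conductance", kept in [0, 1]
        self.rate = rate                # how strongly activity changes the weight

    def transmit(self, pre_spike):
        """Pass an incoming spike on, scaled by the stored weight."""
        return pre_spike * self.weight

    def update(self, pre_spike, post_spike):
        """Strengthen when both neurons fire together, weaken when only the input fires."""
        if pre_spike and post_spike:
            self.weight = min(1.0, self.weight + self.rate)
        elif pre_spike and not post_spike:
            self.weight = max(0.0, self.weight - self.rate)

s = Synapse()
for _ in range(5):
    s.update(pre_spike=1, post_spike=1)     # repeated co-activation
print(round(s.weight, 2))                   # -> 0.75: the connection has strengthened
```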

Along with memristive technologies -- including phase change memory, resistive RAM, spin-transfer torque magnetic RAM, and conductive bridge RAM -- researchers are exploring other new ways to model the brain's synapses, such as quantum dots and graphene.

What uses could neuromorphic systems be put to?

For compute-heavy tasks, edge devices like smartphones currently have to hand off processing to a cloud-based system, which processes the query and feeds the answer back to the device. With neuromorphic systems, that query wouldn't have to be shunted back and forth; it could be processed within the device itself.

But perhaps the biggest driving force for investments in neuromorphic computing is the promise it holds for AI.

Current-generation AI tends to be heavily rules-based, trained on datasets until it learns to generate a particular outcome. But that's not how the human brain works: our grey matter is much more comfortable with ambiguity and flexibility.

SEE: Neuromorphic computing could solve the tech industry's looming crisis

It's hoped that the next generation of artificial intelligence could deal with a few more brain-like problems, including constraint satisfaction, where a system has to find the optimum solution to a problem with a lot of restrictions. 
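
To make 'constraint satisfaction' concrete, here's a tiny map-colouring example solved by brute force. It's only meant to illustrate the problem class; a neuromorphic chip would attack such a problem very differently, for instance with a network of spiking neurons settling towards a valid state.

```python
# A tiny brute-force map-colouring example, only to make "constraint satisfaction"
# concrete: find an assignment that satisfies every restriction at once.
from itertools import product

regions = ["A", "B", "C", "D"]
borders = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]   # neighbours must differ
colours = ["red", "green", "blue"]

for assignment in product(colours, repeat=len(regions)):
    colouring = dict(zip(regions, assignment))
    if all(colouring[x] != colouring[y] for x, y in borders):
        print(colouring)    # e.g. {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}
        break
```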

Neuromorphic systems are also likely to help develop better AIs because they're better suited to other types of problem, such as probabilistic computing, where systems have to cope with noisy and uncertain data. Other capabilities, such as causality and non-linear thinking, are relatively immature in neuromorphic computing systems, but once they're more established, they could vastly expand the uses AIs can be put to.

Are there neuromorphic computer systems available today?

Yep, academics, startups and some of tech's big names are already making and using neuromorphic systems.

Intel has a neuromorphic chip, called Loihi, and has used 64 of them to make a system called Pohoiki Beach, comprising 8 million neurons (it's expecting that to reach 100 million neurons in the near future). At the moment, Loihi chips are being used by researchers, including at the Telluride Neuromorphic Cognition Engineering Workshop, where they're helping in the creation of artificial skin and the development of powered prosthetic limbs.

IBM also has its own neuromorphic system, TrueNorth, launched in 2014 and last seen with 64 million neurons and 16 billion synapses. While IBM has been comparatively quiet on how TrueNorth is developing, it did recently announce a partnership with the US Air Force Research Laboratory to create a 'neuromorphic supercomputer' known as Blue Raven. While the lab is still exploring uses for the technology, one option could be creating smarter, lighter, less energy-demanding drones.

Neuromorphic computing started off in a research lab (Carver Mead's at Caltech) and some of the best-known systems are still found in academic institutions. The EU-funded Human Brain Project (HBP), a 10-year project that's been running since 2013, was set up to advance understanding of the brain through six areas of research, including neuromorphic computing.

The HBP has led to two major neuromorphic initiatives, SpiNNaker and BrainScaleS. In 2018, a million-core SpiNNaker system went live at the University of Manchester, the largest neuromorphic supercomputer at the time, and the hope is eventually to scale it up to model one billion neurons. BrainScaleS has similar aims to SpiNNaker, and its architecture is now on its second generation, BrainScaleS-2.

What are the challenges to using neuromorphic systems?

Shifting from von Neumann to neuromorphic computing isn't going to come without substantial challenges.

Computing norms -- how data is encoded and processed, for example -- have all grown up around the von Neumann model, and so will need to be reworked for a world where neuromorphic computing is more common. One example is dealing with visual input: conventional systems understand it as a series of individual frames, while a neuromorphic processor would encode such information as changes in a visual field over time.
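
The sketch below illustrates that difference in representation: instead of storing whole frames, an event-based (neuromorphic-style) pipeline records only the pixels whose brightness changed, each tagged with a time step and a polarity. The threshold and the tiny random 'frames' are illustrative assumptions.

```python
# An illustrative sketch of event-based (neuromorphic-style) visual encoding:
# instead of storing whole frames, keep only the pixels whose brightness changed,
# each tagged with a time step and a polarity. Threshold and data are assumptions.
import numpy as np

def frames_to_events(frames, threshold=0.1):
    """Convert a sequence of frames into (t, x, y, polarity) change events."""
    events = []
    for t in range(1, len(frames)):
        diff = frames[t] - frames[t - 1]
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1   # got brighter or darker
            events.append((t, x, y, polarity))
    return events

frames = np.random.rand(3, 4, 4)          # three tiny 4x4 "frames" of fake data
events = frames_to_events(frames)
print(len(events), "events instead of", frames.size, "stored pixel values")
```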

SEE: Building the bionic brain (free PDF) (TechRepublic)

Programming languages will also need to be rewritten from the ground up. There are challenges on the hardware side too: new generations of memory, storage and sensor tech will need to be created to take full advantage of neuromorphic devices.

Neuromorphic technology could even require a fundamental change in how hardware and software are developed, because neuromorphic hardware integrates elements -- such as memory and processing -- that are traditionally kept separate.

Do we know enough about the brain to start making brain-like computers?

One side effect of the increasing momentum behind neuromorphic computing is likely to be improvements in neuroscience: as researchers start to try to recreate our grey matter in electronics, they may uncover more about the brain's inner workings -- insights that could in turn help neuroscientists better understand the brain.

And similarly, the more we learn about the human brain, the more avenues are likely to open up for neuromorphic computing researchers. For example, glial cells -- the brain's support cells -- don't feature prominently in most neuromorphic designs, but as more information comes to light about how these cells are involved in information processing, computer scientists are starting to examine whether they should figure in neuromorphic designs too.

And of course, one of the more interesting questions about the increasingly sophisticated work to model the human brain in silicon is whether researchers may eventually end up recreating -- or creating -- consciousness in machines.
