Advances in machine learning have moved at a gallop in recent years, but the computer processors these programs run on have barely changed. To remedy this, companies have been re-tuning existing chip architecture to fit the demands of AI, but on the cutting edge of research, an entirely new approach is taking shape: remaking processors so they work more like our brains.
This approach is called “neuromorphic computing,” and scientists from MIT said this week that they’ve made significant progress in getting this new breed of chips up and running. Their research, published in the journal Nature Materials, could eventually lead to processors that run machine learning tasks with energy demands up to 1,000 times lower than today’s chips. This would let us give more devices AI abilities like voice and image recognition.
To understand what these researchers have done, you need to know a little about neuromorphic chips. The key difference between these processors and the ones used in your computer is that they process data in an analog, rather than a digital, fashion. This means that instead of sending information in a series of on/off electrical bursts, they vary the intensity of these signals, just like our brain’s synapses do.
This means that more information can be packed into each jolt, drastically reducing the amount of power needed. It’s like the difference between Morse code and speech. The former encodes data using just two outputs, dots and dashes, making meanings easy to understand but lengthy to communicate. Speech, by comparison, can be difficult to interpret (think fuzzy phone lines and noisy cafes), but each individual utterance holds much more data.
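To put rough numbers on that analogy: a pulse that is only ever on or off carries one bit, while a pulse that can be set to N distinguishable intensity levels carries log2(N) bits. The short Python sketch below is purely illustrative (it isn’t from the MIT paper, and the level counts are arbitrary assumptions), but it shows how quickly per-pulse capacity grows.

    import math

    def bits_per_pulse(levels):
        # Information carried by one pulse that can take `levels`
        # distinguishable intensities: log2(levels) bits.
        return math.log2(levels)

    print(bits_per_pulse(2))   # digital on/off pulse  -> 1.0 bit
    print(bits_per_pulse(64))  # analog pulse, 64 levels -> 6.0 bits

Six times the information per pulse, from the same physical event: that’s the efficiency argument for analog signaling in a nutshell.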
A big difficulty with building neuromorphic chips, though, is being able to precisely control these analog signals. Their intensity needs to vary, yes, but in a controlled and consistent fashion.
Attempts to find a suitable medium for these varying electrical signals to travel through have previously been unsuccessful, because the current ends up spreading out all over the place. To fix this, researchers led by MIT’s Jeehwan Kim used crystalline forms of silicon and germanium that resemble lattices at the microscopic level. Together, these create clear pathways for the electrical signals, leading to much less variance in their strength.
“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim told MIT News.
To test this premise, Kim and his team created a simulation of their new chip design, with the same degree of variance in signals. Using it, they were able to train a neural network that could recognize handwriting (a standard training task for new forms of AI) with 95 percent accuracy. That’s less than the 97 percent baseline using existing algorithms and chips, but it’s promising for new technology.
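For readers who want a feel for that kind of experiment, here is a minimal sketch of the idea, not the team’s actual simulation: train a small network on handwritten digits, then inject random variance into its learned weights to mimic non-uniform analog devices. It uses scikit-learn’s bundled digits dataset, and the noise levels are arbitrary assumptions.

    # Minimal sketch (not the authors' code): how weight variance
    # degrades a handwriting classifier.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    print("clean accuracy:", clf.score(X_test, y_test))

    # Simulate device non-uniformity: perturb each trained weight with
    # multiplicative Gaussian noise, then restore the originals.
    rng = np.random.default_rng(0)
    original = clf.coefs_
    for sigma in (0.05, 0.2):
        clf.coefs_ = [W * rng.normal(1.0, sigma, W.shape) for W in original]
        print(f"accuracy with {sigma:.0%} weight variance:", clf.score(X_test, y_test))
    clf.coefs_ = original

In toy runs like this, accuracy typically slips as the variance grows, which is exactly why the uniformity of Kim’s lattice devices matters so much.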
There’s a long way to go before we’ll know whether neuromorphic chips are suitable for mass production and real-world usage. But when you’re trying to redesign how computers think from the ground up, you have to put in a lot of work. Making sure neuromorphic chips are firing their electric synapses in order is just the start.