How Will Neuromorphic Chips Become the Future of AI?

In recent years, a slew of popular films dealing with artificial intelligence (AI) has appeared, including Ex Machina, Her, Chappie, and The Imitation Game. Although researchers have been working on artificial intelligence for over five decades, computers still cannot perform many tasks that are simple for human brains. Given this sluggish progress, the prospect of computers with human-level intelligence seems further away than it did when Isaac Asimov published his classic I, Robot in 1950. However, recent developments in neuromorphic chips may help researchers realise human-level artificial intelligence in the near future.

Simulation of Brain Activity for the Development of AI Computers

In recent years, there has been an increasing focus on neuroscience and on the prospect of understanding brain functionality, utilising the principles of neural computation to address today's technological limitations. Recognising this potential, the research community has launched several remarkable projects supporting computational neuroscience and the study of the information-processing properties of the nervous system. One such example is the Blue Brain Project at the École Polytechnique Fédérale de Lausanne, Switzerland. Drawing on detailed analyses of the nervous system, this project focuses on simulating 10,000 neurons from a rat's brain.

Another example is the project at the IBM Almaden Research Center in San Jose, California, which focuses on understanding the brain's outer processing layer, the cortex. In this project, researchers use neuron models to simulate 900 million neurons linked through 9 trillion synapses. Both projects have produced impressive results, but they depend on large-scale simulations of the studied models running on high-performance computing clusters, or supercomputers as they are known, such as the IBM Blue Gene/P. These simulations are very slow: the computers need several minutes to model just a few seconds of brain activity, owing to the bottlenecks of the von Neumann computing architecture and the large number of parameters involved. Advancements in neuromorphic chips are expected to overcome these drawbacks, paving the way for systems that can model brain function efficiently.
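To see why such simulations are slow, consider the toy sketch below: a leaky integrate-and-fire network stepped in software. It is purely illustrative (my own example, not code from either project, and the sizes and constants are arbitrary assumptions), but it shows the core issue: on von Neumann hardware, every neuron and synapse must be updated at every timestep, whereas a neuromorphic chip dedicates physical circuits to neurons and lets them operate in parallel.

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) network, stepped in software.
# Illustrative only: the sizes and constants are arbitrary assumptions,
# not parameters from the Blue Brain or IBM Almaden projects.

rng = np.random.default_rng(0)

N = 1_000                      # neurons (the real projects use millions)
DT = 1e-3                      # timestep: 1 ms
TAU = 20e-3                    # membrane time constant: 20 ms
V_THRESH, V_RESET = 1.0, 0.0   # spike threshold and reset potential

weights = rng.normal(0.0, 0.02, size=(N, N))  # dense synapse matrix
v = np.zeros(N)                               # membrane potentials

def step(v, external_input):
    """Advance the whole network by one timestep of DT seconds."""
    spikes = v >= V_THRESH                     # which neurons fire now
    v[spikes] = V_RESET                        # reset the neurons that fired
    synaptic = weights @ spikes.astype(float)  # propagate spikes: O(N^2) work
    dv = (-v + synaptic + external_input) * (DT / TAU)
    return v + dv, spikes

# One simulated second of activity = 1,000 sequential steps, each touching
# every neuron and every synapse -- the von Neumann bottleneck in miniature.
for _ in range(1_000):
    v, spikes = step(v, external_input=rng.random(N) * 0.1)
```

Scaled to the 900 million neurons and 9 trillion synapses mentioned above, each simulated millisecond implies trillions of memory accesses, which is why minutes of supercomputer time buy only seconds of simulated brain activity.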

The Concept of Deep Learning & the Development of Neuromorphic Chips

In the early 2000s, researchers realised the potential of neural network models, loosely based on the workings of the human brain, to solve a variety of tasks. The buzzphrase "deep learning" has since become the prevalent term for these neural network models and their associated techniques. Most deep learning practitioners acknowledge that its popularity has been driven by hardware, in particular graphics processing units (GPUs). The core algorithms of neural networks, such as the backpropagation algorithm for calculating gradients, were developed in the 1970s and '80s, while convolutional neural networks followed in the late '90s. The next logical step is the shift from GPUs to neuromorphic chips. Whereas GPU architecture was originally designed for computer graphics, neuromorphic chips implement neural networks directly in hardware. Currently, various public and private entities, including the EU, DARPA, Qualcomm, and IBM, are developing neuromorphic chips.
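For readers unfamiliar with it, the sketch below shows backpropagation at its smallest: a two-layer network trained on XOR, with the error gradient propagated backwards layer by layer. It is a minimal plain-NumPy illustration (the network size, learning rate, and task are my own choices), not any production implementation.

```python
import numpy as np

# Minimal backpropagation sketch: a tiny two-layer network learning XOR.
# Toy illustration only; network size, learning rate, and task are
# arbitrary choices, not any vendor's implementation.

rng = np.random.default_rng(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)  # input -> hidden
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)  # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: propagate the error gradient layer by layer.
    d_out = (out - y) * out * (1 - out)   # output-layer delta (MSE loss)
    d_h = (d_out @ W2.T) * h * (1 - h)    # hidden-layer delta

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

GPUs accelerate the matrix products in this loop; neuromorphic chips go a step further by realising the network's units and connections directly in circuitry.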

Neuromorphic chips are expected to make deep learning orders of magnitude faster and more cost-effective. This is likely to be a key driver in enhancing AI in areas such as character recognition, driverless car technology, big data mining, robotic control, and surveillance. Owing to the lower power consumption of neuromorphic chips, it is conceivable that future smartphones will include one, enabling them to perform tasks such as translating road signs from foreign languages or converting speech to text. Today, applications performing such deep learning tasks must connect to the cloud for the necessary computations. Low power consumption also makes neuromorphic chips attractive for military field robots, whose current processors consume so much power that their batteries drain quickly.
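Much of that power advantage is commonly attributed to event-driven operation: circuits do work only when a spike arrives, instead of recomputing every unit on every clock cycle. The sketch below is a software analogy of that style (the data structures and names are my own assumptions, not a real chip's API); the point to notice is that the work done scales with the number of events, not with the size of the network.

```python
from collections import defaultdict, deque

# Software analogy of event-driven (spike-based) processing.
# Hypothetical toy structures, not a real neuromorphic chip's API.

synapses = defaultdict(list)          # neuron -> list of (target, weight)
synapses[0] = [(2, 0.6), (3, 0.9)]
synapses[1] = [(3, 0.4)]

potentials = defaultdict(float)       # membrane potential per neuron
THRESHOLD = 1.0

def on_spike(neuron, events):
    """Handle one spike event; cost scales with spikes, not network size."""
    for target, weight in synapses[neuron]:
        potentials[target] += weight
        if potentials[target] >= THRESHOLD:
            potentials[target] = 0.0  # reset and emit a new event
            events.append(target)

events = deque([0, 1])                # two input spikes arrive
while events:
    on_spike(events.popleft(), events)

print(dict(potentials))  # only neurons that actually received input appear
```

When activity is sparse, as it is in biological brains, most of the chip sits idle most of the time, which is where the energy savings come from.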

Developments in Neuromorphic Chips Likely to Lead Towards Artificial General Intelligence

Considering the amount of investment in neuromorphic chips, one can already discern a path leading to "artificial general intelligence" (AGI). However, the time required to develop a suitable cognitive architecture is unknown. The fundamental physics of neuromorphic hardware is solid: it is capable of mimicking the brain in power consumption and component density while running thousands of times faster. Even if some governments wish to ban AGI development, it will eventually be realised by someone, somewhere. What follows from there is a matter of speculation. If an AGI became capable of recursive self-improvement and were given access to the internet, the results could be hazardous for humanity. It is therefore better to start devising solutions to such problems beforehand, while the development of neuromorphic chips is still underway.

It is apparent that computational systems based on neuromorphic chips, scalable to the capabilities of the human brain, could be realised through an all-round research and development effort synergised across architecture, software, hardware, and the understanding and simulation of brain function. On the way to the ultimate aim of mimicking the human brain, developing artificial brains of smaller species, or of specific parts of the human brain, is likely to be attractive from a commercial point of view. The availability of neuromorphic systems for large-scale commercial use would, in turn, provide further impetus to their development.

Abhishek Budholiya

Abhishek Budholiya is a tech blogger, digital marketing pro, and has contributed to numerous tech magazines. Currently, as a technology and digital branding consultant, he offers his analysis on the tech market research landscape. His forte is analysing the commercial viability of a new breakthrough, a trait you can see in his writing. When he is not ruminating about the tech world, he can be found playing table tennis or hanging out with his friends.