CyNAPSE v1.0 : A Low-power Neuromorphic Acceleration Fabric for biologically plausible spiking competitive networks
Motivation

This project aims to build an ultra-low-power neural accelerator for running reconfigurable Spiking Neural Networks across diverse learning and inference tasks. Spiking neural networks are inherently low-power tools that closely resemble the basic computational units of the mammalian brain. Neural cognition is thrifty in terms of power as well as fast and efficient, something conventional von Neumann computers are not designed to do well. The motivation for such a computing unit is two-fold.

Neural simulations are exceptionally demanding: an enormous amount of compute is required to simulate neural models that are nowhere near the level of detail and number of units in a primate cortex. Even simulating a cat-scale brain model takes an extraordinary amount of power and resources, creating a frustrating bottleneck in theoretical neuroscience research. Massively parallel neural processors will help simulate large models of the brain with very efficient energy handling. After all, the human brain consumes only about 20 watts in what is probably the most elegant computing infrastructure ever encountered. This drives computational neuroscientists to look for mathematical models of neurons and synapses that are detailed enough yet computationally feasible. This is the prime trade-off of the field, and the ability to afford further detail that can still be simulated efficiently is a strong motivation.

On the other hand, applications that use large neural networks to learn and adapt to their environment have a natural predilection for low-power devices. Embedded learning capability is highly coveted in UAVs, drones, IoT devices, and other field instruments where the power supply is heavily constrained. Spiking neural networks make a strong case for embedded unsupervised learning on extremely energy-efficient hardware architectures, and can serve as viable alternatives to the highly popular artificial neural networks, trading some accuracy for energy savings.

Neuromorphic engineering amalgamates these two aspects into one monolithic network template that is both low-power and efficient and exhibits realistic neural behavior. There are a number of notable works in this field, and it is an active community that is growing strong.

Design

This project entails the design of a Neural Processing Engine in a co-processor configuration that simulates reconfigurable spiking competitive network topologies for diverse machine learning tasks. Figure 1 shows the high-level microarchitectural description of CyNAPSE v1.0. It is a simple accelerator infrastructure designed to be power-efficient and error-resilient while remaining fast and robust. For more details on the intricacies of the design, I encourage you to check out the technical report here.

Figure 1. High-level microarchitectural description of CyNAPSE v1.0
In summary, it boasts the following aspects:

1. Biological plausibility: Biological neurons in the mammalian prefrontal cortex do not use simple current-based synapses, where synaptic efficacies (weights) are integrated directly into the membrane potential. Rather, synapses alter ion-channel conductance according to their efficacy, which in turn increases or decreases the permeability of certain ions diffusing into the intracellular fluid and thereby changes the membrane potential. Custom neuromorphic hardware currently supports only simple current integration that depends on synaptic activity alone. This project aims to build a neuromorphic template that can simulate more biologically plausible current integration, depending not only on synaptic activity but also on the current membrane potential of the post-synaptic neuron.

2. Dynamic neurophysical phenomena: Research suggests that neural cognition is the result of many dynamic activities on different timescales. One experimentally established dynamic behavior is homeostasis, which has been shown to play an important role in competitive learning networks of real neurons. This project incorporates dynamic adaptive-threshold capabilities for spiking neurons in recurrent competitive networks.
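
As a concrete illustration of points 1 and 2, here is a minimal software sketch of a conductance-based leaky integrate-and-fire update with an adaptive (homeostatic) threshold. All constants and the function name are illustrative assumptions on my part, not the accelerator's actual parameters; the key detail is that the synaptic terms scale with (E - v), so the integrated current depends on the membrane potential itself, unlike simple current-based integration.

```python
import numpy as np

# Illustrative constants (assumed, not the report's actual values).
DT = 0.5e-3          # simulation timestep (s)
V_REST = -65e-3      # resting potential (V)
V_RESET = -60e-3     # reset potential after a spike (V)
E_EXC, E_INH = 0.0, -100e-3  # excitatory/inhibitory reversal potentials (V)
TAU_M = 100e-3       # membrane time constant (s)
TAU_G = 1e-3         # synaptic conductance decay time constant (s)
TAU_THETA = 1e4      # slow homeostatic threshold decay (s)
THETA_PLUS = 0.05e-3 # threshold increment per output spike (V)
V_THRESH0 = -52e-3   # baseline firing threshold (V)

def step(v, g_e, g_i, theta, exc_in, inh_in):
    """One timestep of a conductance-based LIF neuron with an adaptive
    threshold. exc_in/inh_in are the summed synaptic weights of
    presynaptic spikes arriving this step (dimensionless conductances)."""
    # Conductances jump on input spikes, then decay exponentially.
    g_e = g_e * np.exp(-DT / TAU_G) + exc_in
    g_i = g_i * np.exp(-DT / TAU_G) + inh_in
    # Membrane update: the synaptic drive depends on v via the (E - v)
    # terms, rather than adding weights directly into the potential.
    dv = (V_REST - v) + g_e * (E_EXC - v) + g_i * (E_INH - v)
    v = v + DT * dv / TAU_M
    # Homeostasis: the threshold offset rises on each spike and decays
    # very slowly, discouraging any one neuron from dominating.
    theta = theta * np.exp(-DT / TAU_THETA)
    spiked = v >= V_THRESH0 + theta
    if spiked:
        v = V_RESET
        theta += THETA_PLUS
    return v, g_e, g_i, theta, spiked
```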

3. Seamless spike-packet encoding: The accelerator uses an Address-Event Representation (AER) scheme to encode spike packets within the neural processing engine. The master processor can run a simple routine to decode these packets and read out what the network is predicting. Internally, the same scheme provides an easy way to route spikes within the neuron array and between the Input and Output FIFO queues.
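
To make point 3 concrete, the sketch below shows one plausible AER packing: a single word carrying the spiking neuron's address plus a coarse timestamp. The bit widths, field layout, and the `predict` helper are my own illustrative assumptions, not the accelerator's actual packet format.

```python
from collections import Counter

# Assumed layout: a 32-bit word with the neuron address in bits [15:0]
# and a coarse timestamp in bits [31:16].
ADDR_BITS = 16

def encode_event(neuron_addr, timestep):
    """Pack one spike event into a single AER word."""
    assert 0 <= neuron_addr < (1 << ADDR_BITS)
    return (timestep << ADDR_BITS) | neuron_addr

def decode_event(word):
    """Unpack an AER word back into (neuron_addr, timestep)."""
    return word & ((1 << ADDR_BITS) - 1), word >> ADDR_BITS

def predict(words, labels):
    """Hypothetical master-processor routine: map each decoded address
    to its class label and report the most frequently spiking class."""
    counts = Counter(labels[decode_event(w)[0]] for w in words)
    return counts.most_common(1)[0][0]
```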

4. Recurrent competitive networks: The accelerator provides the scope for modelling a winner-take-all topology, which is common in competitive learning and ubiquitous in the mammalian brain. This hardware network template can efficiently simulate pyramidal- and basket-cell dynamics found, for instance, in the hippocampal regions of the brain. It uses simpler mathematical abstractions, but ones capable of cutting the expense of artificial neural models while still giving some insight into the actual dynamics of the brain, with the promise of scalability.
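
A soft winner-take-all of the kind point 4 describes can be sketched with simple rate dynamics: each excitatory unit integrates its input drive while a basket-cell-like term inhibits it in proportion to the other units' activity. The equations and constants below are illustrative assumptions, not the hardware's actual neuron model.

```python
import numpy as np

def winner_take_all(drive, w_inh=0.2, tau=0.1, steps=200):
    """Iterate simple rate dynamics with lateral inhibition:
    dv_i = -v_i + drive_i - w_inh * sum_{j != i} v_j.
    The most strongly driven unit ends up with the highest rate."""
    drive = np.asarray(drive, dtype=float)
    v = np.zeros_like(drive)
    for _ in range(steps):
        inh = w_inh * (v.sum() - v)   # basket-cell-like lateral feedback
        v = v + tau * (-v + drive - inh)
        v = np.maximum(v, 0.0)        # firing rates are non-negative
    return v

drive = np.array([0.9, 1.0, 0.8])
rates = winner_take_all(drive)  # unit 1, with the largest drive, wins
```

With a larger `w_inh`, the losing units are driven to zero and the competition becomes a hard winner-take-all.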

5. Hardware specifications:

- Performance: Runs at 167 MHz and delivers a 5.83x faster-than-real-time resolution of neural inference
- Accuracy: MNIST handwritten-digit recognition accuracy of 90.64%; best-case accuracy-per-neuron value of 0.113% (state of the art for Spiking Neural Networks with STDP learning)
- Power: Per-neuron power consumption of 0.04 mW/MHz, statically estimated by the synthesis tool
- Tools: Designed in synthesizable Verilog HDL and synthesized with a 65nm TSMC library in Cadence Encounter

Check out the code repository here (this is the most up-to-date version of an ongoing initiative and may be ahead of the description on this page). This project is an ongoing, open-ended venture and will continue to be upgraded to new versions. Please keep checking this section for further updates.