Figure 3

Encoding mechanism of the virtual neuron. Numbers can be encoded by selecting the appropriate synaptic weights. Here, we use four neurons to encode (a) positive integers; (b) positive rationals; and (c) negative rationals. The four spiking neurons can encode 4 bits of information.

Figure 4

A 2-bit virtual neuron. It takes two 2-bit numbers, X and Y, as input on the left, represented as \(x_1\), \(x_0\) and \(y_1\), \(y_0\), respectively, adds the two numbers, and generates their sum on the right. The sum of two 2-bit numbers can be at most a 3-bit number.

Figure 5

Working mechanism of the virtual neuron. (a) Time Step 0: Input signals are received in the input neurons (blue and yellow). (b) Time Step 1: Signals from input neurons representing the least significant bits (bottom blue and bottom yellow) are received in the first set of bit neurons, which also represents the least significant bit. (c) Time Step 2: Signals from the input neurons and the first set of bit neurons are received in the second set of bit neurons. (d) Time Step 3: Signals from the second set of bit neurons are received in the third set of bit neurons. (e) Time Step 4: Signals from all bit neurons are received in the output neurons. (f) Time Step 5: Output neurons return the output of the circuit.

The virtual neuron is a new SNN primitive that is structurally composed of groups of LIF neurons and synapses connected in a particular way. Functionally, the virtual neuron mimics the behavior of a numerical artificial neuron with an identity activation function. In other words, the virtual neuron takes two input values and adds them together to produce its output. The encoding mechanism used by the virtual neuron allows groups of LIF neurons to be interpreted as positive or negative rational numbers, which are then added together in a manner similar to a ripple carry adder\(^{53}\). The rationale behind the encoding mechanism of the virtual neuron is rooted in the binary encoding of numbers. Figure 3 shows three ways of encoding 4-bit numbers on a neuromorphic computer. Notice that each neuron in the figure represents a bit. The synapse coming out of a neuron assigns a value to the binary spike of that neuron by multiplying it with its synaptic weight. By using powers of 2 as the synaptic weights, we can encode rational numbers with a group of neurons. For instance, the synapses coming out of the four neurons in Fig. 3a have weights of \(2^0\), \(2^1\), \(2^2\), and \(2^3\). When the second and fourth neurons (from the bottom) spike, their spikes are multiplied by 2 and 8 in the outgoing synapses, respectively. This is interpreted as the number 10 under this encoding mechanism. Similarly, we can set the synaptic weights to negative powers of 2, as shown in Fig. 3b. This enables us to encode positive fractions as well. When the first and third neurons (from the bottom) spike as shown in the figure, the result is interpreted as 0.625. Lastly, if the synaptic weights are set to negatives of positive and negative powers of 2, as shown in Fig. 3c, then we can encode negative rational numbers. When the three neurons spike in the figure, the output is interpreted as \(-3.5\).
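To make the encoding concrete, the following is a minimal sketch in plain Python of how a group of binary spikes is interpreted as a number under this mechanism. The function name is illustrative, and the weight assignment for Fig. 3c is our assumption, chosen to be consistent with the \(-3.5\) example.

```python
# Interpreting a group of binary spikes as a number: each neuron that
# spikes contributes the weight of its outgoing synapse to the total.

def decode(spikes, weights):
    """Value represented by a group of neurons (listed bottom to top)."""
    return sum(s * w for s, w in zip(spikes, weights))

# Fig. 3a: positive integers, weights 2^0 .. 2^3.
print(decode([0, 1, 0, 1], [2**0, 2**1, 2**2, 2**3]))       # 10

# Fig. 3b: positive fractions, weights 2^-1 .. 2^-4.
print(decode([1, 0, 1, 0], [2**-1, 2**-2, 2**-3, 2**-4]))   # 0.625

# Fig. 3c: negative rationals; one weight assignment consistent with the
# -3.5 example uses negated powers of 2 from -2^-2 up to -2^1.
print(decode([0, 1, 1, 1], [-2**-2, -2**-1, -2**0, -2**1])) # -3.5
```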

We now show how the virtual neuron can integrate the incoming signals and generate a rational number as output. For ease of explanation, we stick to the 2-bit virtual neuron as shown in Fig. 4. The 2-bit virtual neuron takes as input two 2-bit numbers, X and Y, shown in the figure as \([x_1, x_0]\) (blue neurons) and \([y_1, y_0]\) (yellow neurons), respectively. It then adds X and Y in the three groups of bit neurons, which are shown in red. We call them bit neurons because they are responsible for the bit-level operations in the circuit (e.g., bitwise addition, propagating the carry bit). Finally, it produces a 3-bit number, Z, as output, shown in the figure as \([z_2, z_1, z_0]\) (green neurons).

The default internal state of all neurons is set to \(-1\). Furthermore, all neurons have a leak of 0, which means they reset to their default internal state instantaneously if they do not spike. The reset state (or reset voltage) of all neurons is set to \(-1\) so that the internal state of each neuron returns to \(-1\) after it spikes. The numbers on the neurons indicate their thresholds. For example, the top bit neurons (red neurons) have thresholds of 0, 1, and 2, respectively. The synapse parameters are indicated in chevrons at the top or bottom of the synapses: the first parameter is the synaptic weight, and the second is the synaptic delay. If a group of synapses shares the same parameters, this is indicated with a dotted arc. The synaptic delays are adjusted such that the bit operations of the red neurons are synchronized and all bits of the output Z are produced at the same time.
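As a sanity check on these parameter semantics, here is a minimal sketch of one neuron's per-time-step behavior as we read it from the description above. The spiking condition (a neuron spikes when its internal state reaches its threshold) is an assumption on our part, inferred from the walkthrough that follows.

```python
# One time step of a neuron in the virtual neuron circuit. With a leak of
# 0 and a reset voltage of -1, the internal state is -1 at the start of
# every time step, regardless of whether the neuron spiked previously.

def neuron_step(threshold, incoming_sum, reset_state=-1):
    state = reset_state + incoming_sum     # integrate all arriving signals
    return 1 if state >= threshold else 0  # spike iff threshold is reached

# Bottom bit neurons of Fig. 4 at time step 1: each receives a total of 2.
print(neuron_step(0, 2), neuron_step(1, 2))  # both spike: 1 1
```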

Next, we describe the inner workings of the virtual neuron shown in Fig. 4 using the example \([x_1, x_0] = [1, 1]\) and \([y_1, y_0] = [0, 1]\). We start our analysis when the inputs X and Y have been received in the blue and yellow neurons; let us call this the 0-th time step, as shown in Fig. 5a. In the first time step (Fig. 5b), the bottom set of bit neurons (red) receives an input of 1 along each of its incoming synapses. Thus, the total incoming signal at both of these neurons is 2, which changes their internal states from \(-1\) to 1. As a result, both bottom red neurons spike. Their spikes are sent along their outgoing synapses, which delay the signals for 3 time steps.

In the second time step (Fig. 5c), the middle group of bit neurons receives all of its inputs: 1 from the blue input neuron representing \(x_1\), 0 from the yellow neuron representing \(y_1\), and 1 from the bit neuron with a threshold of 1 in the bottom group. Thus, the sum of their incoming signals is 2, and their internal states reach a value of 1. As a result, the neurons with thresholds 0 and 1 in the middle group of bit neurons spike, whereas the one with threshold 2 does not. The spikes from the middle red neurons with thresholds 0 and 1 are sent to the green output neuron representing \(z_1\) along their outgoing synapses, which delay them for 2 time steps.

In the third time step (Fig. 5d), the three bit neurons in the top group of red neurons receive an input of 1 along each of their incoming synapses. As a result, their internal states are incremented by 1 to a value of 0. The neuron with a threshold of 0 consequently spikes and sends its spike along its outgoing synapse to the green neuron representing \(z_2\).

In the fourth time step (Fig. 5e), the green neurons representing \(z_0\), \(z_1\), and \(z_2\) receive their inputs. \(z_0\) receives a 1 and a \(-1\) from the bit neurons with thresholds 0 and 1, respectively, in the bottom group of red neurons. Its total input is thus \(1 - 1 = 0\), which keeps its internal state at \(-1\), and it does not spike. Similar operations happen at the green neuron representing \(z_1\); it too does not spike. The green neuron representing \(z_2\) receives a signal of 1 from the bit neuron with the threshold of 0 in the top red set. As a result, its internal state is incremented by 1 to a value of 0, and it spikes.

At the fifth time step (Fig. 5f), the net output \([z_2, z_1, z_0]\) of the circuit is [1, 0, 0], which is interpreted as 4 in binary. Given that our inputs were \([x_1, x_0] = [1, 1]\) and \([y_1, y_0] = [0, 1]\) (i.e., \(X = 3\) and \(Y = 1\)), we have received the correct output of 4 from the virtual neuron circuit. Although we restricted ourselves to 2-bit positive integers in this example, we show in the subsequent subsections that similar circuits can be used to encode and add two rational numbers in the virtual neuron and generate a rational number as output. Finally, note that we did not use powers of 2 on the synapses inside the virtual neuron. However, the powers of 2 are used implicitly to interpret the values of the input and output groups, and care must be taken to connect virtual neurons together such that this representation is maintained. Depending on the application, powers of 2 may be used as synaptic weights on the incoming or outgoing synapses of a given virtual neuron. For example, they can be used on the outgoing synapses to accumulate the numerical value in a traditional LIF neuron.
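The walkthrough above can be condensed into a small functional sketch. The code below is our abstraction, not the paper's implementation: since the synaptic delays only synchronize spike arrivals, we drop them and reduce each bit set to its three threshold comparisons.

```python
# Functional sketch of the virtual neuron's bit-set logic. A neuron with
# threshold t and internal state -1 + total spikes iff total >= t + 1.

def bit_set(total):
    thr0 = int(total >= 1)
    thr1 = int(total >= 2)  # this neuron's spike is the carry bit
    thr2 = int(total >= 3)
    # Output synapse weights are +1, -1, +1, so the output bit is
    # thr0 - thr1 + thr2, which equals total mod 2.
    return thr0 - thr1 + thr2, thr1

def virtual_neuron_add(x_bits, y_bits):
    """Add two little-endian bit lists, e.g., X = 3 is [1, 1]."""
    z, carry = [], 0
    for xb, yb in zip(x_bits, y_bits):
        out, carry = bit_set(xb + yb + carry)
        z.append(out)
    out, _ = bit_set(carry)  # the top bit set receives only the carry
    z.append(out)
    return z

# The worked example: X = 3, Y = 1 yields [z0, z1, z2] = [0, 0, 1], i.e., 4.
print(virtual_neuron_add([1, 1], [1, 0]))
```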

In the following subsections, we present virtual neuron circuits with higher precision. We let \(P_+\) and \(P_-\) denote the number of bits used to represent positive and negative numbers, respectively, and call them the positive precision and negative precision. In general, the positive precision, \(P_+\), will be distributed among bits used to represent positive integers (\(2^0, 2^1, 2^2, \ldots\)) and positive fractionals (\(2^{-1}, 2^{-2}, 2^{-3}, \ldots\)). Similarly, the negative precision, \(P_-\), will be distributed among bits used to represent negative integers (\(-2^0, -2^1, -2^2, \ldots\)) and negative fractionals (\(-2^{-1}, -2^{-2}, -2^{-3}, \ldots\)). For example, a positive precision of \(P_+ = 4\) could be split into two integer bits (\(2^1, 2^0\)) and two fractional bits (\(2^{-1}, 2^{-2}\)).

We now describe the connections for a virtual neuron with arbitrary precision. Each input neuron has both its threshold and its leak set to 0. Each input \(x_i\) and \(y_i\) is connected to the set of bit neurons that corresponds to bit i. For bit 0, there are two such bit neurons; for every other bit, there are three neurons per bit, shown in red. The synaptic weights of all these connections are unity, and their delays are \(i+1\). Each set of bit neurons contains neurons with thresholds of 0 and 1, and every set except that of bit 0 also has a neuron with a threshold of 2. The neuron with a threshold of 1 in the set that represents bit i is connected to all neurons in the \((i+1)\)-th set. This neuron is responsible for propagating the carry bit to the next set of bit neurons: it spikes only when there is a carry operation to be performed at the i-th bit. The carry synapses have both weights and delays of unity. The bit neurons of the i-th bit are connected to the i-th output neuron. The synaptic weights from the bit neurons with thresholds of 0 and 2 are 1, whereas those from the bit neurons with a threshold of 1 are \(-1\). The \(-1\) weight acts as an inhibitory connection that cancels the signal coming from the neuron with threshold 0 in the same bit set. The delays on the synapses going from the i-th bit set to the i-th output neuron are set to \(\max \{P_+, P_-\} - i + 1\). This delay ensures that all output neurons spike at the same time.
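As an illustration of these rules, the sketch below enumerates the neurons and synapses of the positive half of a virtual neuron with precision P. It is plain Python with illustrative record formats, and it reproduces the \(6P_+ + 3\) neuron and \(12P_+\) synapse counts reported later.

```python
# Enumerate neurons and synapses of a P-bit virtual neuron (positive half)
# from the connection rules above. A synapse is (pre, post, weight, delay).

def build_virtual_neuron(P):
    neurons, synapses = [], []
    neurons += [("x", i) for i in range(P)] + [("y", i) for i in range(P)]
    neurons += [("z", i) for i in range(P + 1)]

    for i in range(P + 1):
        thresholds = [0, 1] if i == 0 else [0, 1, 2]  # bit set i
        for t in thresholds:
            neurons.append(("bit", i, t))
            if i < P:  # inputs x_i, y_i -> bit set i: weight 1, delay i + 1
                synapses.append((("x", i), ("bit", i, t), 1, i + 1))
                synapses.append((("y", i), ("bit", i, t), 1, i + 1))
            # Bit set i -> output z_i: weight -1 from the threshold-1
            # neuron, +1 otherwise; delay P - i + 1 synchronizes outputs.
            w = -1 if t == 1 else 1
            synapses.append((("bit", i, t), ("z", i), w, P - i + 1))
        if i < P:  # carry: threshold-1 neuron of set i -> all of set i + 1
            for t in [0, 1, 2]:
                synapses.append((("bit", i, 1), ("bit", i + 1, t), 1, 1))
    return neurons, synapses

neurons, synapses = build_virtual_neuron(4)
print(len(neurons), len(synapses))  # 27 and 48: 6P + 3 and 12P for P = 4
```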

Figure 6

Encoding (a) positive integers, (b) positive fractionals, (c) negative integers, and (d) negative fractionals using the virtual neuron. The key differences are in the synaptic weight values of the outgoing synapses (i.e., the synapses coming out of the green neurons located on the right in each figure). Positive integers (a) and positive fractionals (b) have \(P_+\) precision, whereas negative integers (c) and negative fractionals (d) have \(P_-\) precision.

Positive integers

Figure 6a shows the virtual neuron circuit that takes two \(P_+\) bit numbers X and Y as inputs, shown as blue and yellow neurons, respectively. The bit-level addition and carry operations are performed by the bit neurons shown in red. There are \(P_+ + 1\) groups of these bit neurons. Finally, the output of the virtual neuron Z has \(P_+ + 1\) bit precision and is shown by the green output neurons. In the figure, we omit synapse parameters for brevity. Note that the synaptic weights on the outgoing synapses are positive powers of 2.

Positive fractionals

Figure 6b shows the \(P_+\) bit virtual neuron for encoding positive fractionals. The circuit is almost identical to that in Fig. 6a; the only difference is in the synaptic weights of the outgoing synapses, which in this case are nonpositive powers of 2 (i.e., \(2^0, 2^{-1}, 2^{-2}, 2^{-3}, \ldots\)).

Negative integers

Figure 6c shows the virtual neuron circuit for encoding negative integers. It takes two \(P_-\) bit numbers X and Y as inputs. After the standard virtual neuron operations, a \(P_- + 1\) bit number Z is produced as output. The outgoing synaptic weights in this case are negatives of positive powers of 2 (i.e., \(-2^0, -2^1, -2^2, \ldots\)).

Negative fractionals

Figure 6d shows the \(P_-\) bit virtual neuron circuit for encoding negative fractionals. This circuit is identical to Fig. 6c, except the outgoing synapses have weights that are negatives of negative powers of 2 (i.e., \(-2^0, -2^{-1}, -2^{-2}, \ldots\)).

Positive and negative rational numbers

Figure 7

Encoding \(P_+\) bit positive rationals and \(P_-\) bit negative rationals using the virtual neuron.

In this case (Fig. 7), the virtual neuron operates on two \(P_+ + P_-\) bit rational numbers X and Y as inputs. These are shown as blue and yellow rectangles, each of which denotes an aggregated group of the respective neurons. The positive precision \(P_+\) is split between the positive integer and positive fractional bits. Similarly, the negative precision is split between the negative integer and negative fractional bits. Note that the positive part of the circuit (upper half) is completely independent of the negative part of the circuit (lower half).
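Because the two halves are independent, the value carried by the full circuit is simply the sum of the values of its halves. A small illustrative sketch follows; the weights and the precision split are our assumptions.

```python
# Value of a rational virtual neuron output as the sum of its two halves.
# Assumed split: P+ = 4 as two integer and two fractional bits; same for P-.

pos_weights = [2**1, 2**0, 2**-1, 2**-2]      # upper (positive) half
neg_weights = [-2**1, -2**0, -2**-1, -2**-2]  # lower (negative) half

def value(pos_spikes, neg_spikes):
    positive = sum(s * w for s, w in zip(pos_spikes, pos_weights))
    negative = sum(s * w for s, w in zip(neg_spikes, neg_weights))
    return positive + negative

print(value([1, 0, 1, 0], [0, 1, 0, 0]))  # 2 + 0.5 - 1 = 1.5
```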

Computational complexity

Table 1 Neurons, synapses, and time taken with increasing precision.
Figure 8

Scalability of the number of neurons, synapses, and time steps with respect to the number of precision bits (\(P_+\) or \(P_-\)).

The Appendix shows the Python code used to set up the virtual neuron using the NEST simulator. This code creates neurons with specific neuron parameters and then connects them using synapses, which have their own set of parameters. For \(P_+\) bit positive operations, we use \(\mathscr {O}(P_+)\) neurons and synapses and perform the virtual neuron operations in \(\mathscr {O}(P_+)\) time steps. Similarly, for \(P_-\) bit negative operations, we use \(\mathscr {O}(P_-)\) neurons and synapses and perform the virtual neuron operations in \(\mathscr {O}(P_-)\) time steps. All in all, we use \(\mathscr {O}(P_+ + P_-)\) neurons and synapses and consume \(\mathscr {O}(\max \{P_+, P_-\})\) time steps for the virtual neuron operations.
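For readers unfamiliar with NEST, the fragment below sketches the kind of setup the Appendix describes: creating LIF neurons with explicit parameters and connecting them with weighted, delayed synapses. This is not the paper's Appendix code; the neuron model and all parameter values are illustrative choices under the NEST 3 API.

```python
import nest

nest.ResetKernel()

# Two LIF neurons with delta-shaped synapses. V_th and V_reset play the
# roles of the neuron threshold and reset state discussed earlier
# (values here are illustrative millivolt settings, not the paper's).
neurons = nest.Create("iaf_psc_delta", 2,
                      params={"V_th": -69.0, "V_reset": -70.0, "E_L": -70.0})

# A synapse with an explicit weight and delay, as in the bit circuits;
# the weight is chosen large enough to make the downstream neuron fire.
nest.Connect(neurons[0], neurons[1], syn_spec={"weight": 5.0, "delay": 2.0})

# Drive the first neuron with a single input spike and record the second.
generator = nest.Create("spike_generator", params={"spike_times": [1.0]})
recorder = nest.Create("spike_recorder")
nest.Connect(generator, neurons[0], syn_spec={"weight": 100.0})
nest.Connect(neurons[1], recorder)

nest.Simulate(20.0)
print(recorder.get("events", "times"))
```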

Table 2 Comparing the virtual neuron to other neuromorphic encoding mechanisms for representing two N-bit numbers exactly.

We validate these space and time complexities empirically for positive operations by increasing \(P_+\). The results of this analysis apply to negative operations as well. We increase the positive precision in powers of 2 from 1 to 128 (i.e., \(1, 2, 4, \ldots, 128\)) and count the number of neurons, synapses, and time steps in each case. The numerical results are presented in Table 1 and are also plotted in Fig. 8 on logarithmic X and Y axes. From the table, we can conclude that we use \(6P_+ + 3\) neurons, \(12P_+\) synapses, and \(P_+ + 2\) time steps for the virtual neuron operations.
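These closed-form counts are easy to tabulate; the following snippet reproduces the scaling behind Table 1 and Fig. 8 from the expressions above.

```python
# Resource counts for virtual neuron operations as functions of P+.
print(f"{'P+':>4} {'neurons':>8} {'synapses':>9} {'time steps':>11}")
for P in [1, 2, 4, 8, 16, 32, 64, 128]:
    print(f"{P:>4} {6 * P + 3:>8} {12 * P:>9} {P + 2:>11}")
```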

Table 3 Comparing the virtual neuron to other neuromorphic encoding mechanisms for adding two N-bit numbers.

We can extend these complexities to negative operations to conclude that they would require \(6P_- + 3\) neurons, \(12P_-\) synapses, and \(P_- + 2\) time steps. This validates the space complexity of \(\mathscr {O}(P_+ + P_-)\) neurons and synapses. Because the positive and negative operations happen in parallel, the overall time complexity of the circuit stems from the larger of \(P_+\) and \(P_-\). Thus, the overall time complexity is validated as \(\mathscr {O}(\max \{P_+, P_-\})\).

Lastly, in computing the above space and time complexities, our assumption is that the positive and negative precisions are variable. However, we envision using the virtual neuron in settings where a neuromorphic computer has a fixed, predetermined positive and negative precision. This is similar to how the precision on our laptops and desktops is fixed at 32, 64, or 128 bits. In such a scenario, \(P_+\) and \(P_-\) can be treated as constants. Thus, the resulting space and time complexities for the virtual neuron would all be \(\mathscr {O}(1)\).

Table 2 compares different neuromorphic encoding approaches in the literature with our approach using the virtual neuron. Because a neuromorphic computer consumes energy in proportion to the number of spikes, we use the number of spikes in the worst and average cases as an estimate of the energy usage of the different neuromorphic approaches. Across the different comparison metrics (e.g., network size, number of spikes), the virtual neuron scales linearly with the bit precision N while giving an exact representation of the input number. The other approaches take either exponential space (Binning) or exponential time (Rate Encoding), or they are unable to represent rational numbers exactly (IEEE 754).

Table 3 compares the computational complexity of performing addition of two N-bit numbers under different neuromorphic encoding schemes. Here, we do not include a temporal encoding scheme because, under such a simple approach, binary spikes occurring at different time instances cannot be added exactly by spiking neurons. Whereas the virtual neuron performs the addition operation in a linear number of time steps using a linear number of neurons, synapses, and energy (as estimated by the spiking efficiency), the other approaches use either exponential time or exponential space or consume an exponential amount of energy for their operations.
