clean up notebook 11
dmeliza committed Apr 13, 2023
1 parent 53de760 commit 322b3cf
Showing 1 changed file with 82 additions and 143 deletions.
225 changes: 82 additions & 143 deletions 11-Spiking-Neural-Networks.ipynb
@@ -10,8 +10,7 @@
"\n",
"+ Learn additional features of *Brian 2* that are useful in making SNNs.\n",
"+ How to create populations of neurons with heterogeneous dynamics.\n",
"+ Create and manipulate synapses between neurons.\n",
"+ Become familiar with liquid state machines."
"+ Create and manipulate synapses between neurons."
]
},
{
@@ -44,7 +43,7 @@
"source": [
"## Simulating Many Neurons\n",
"\n",
"It is easy to simulate large groups of neurons using *Brian 2*. We can add randomness to parameters of the neuron models to produce heterogeneous networks. Let's simulate 20 LIF neurons that have a random $\\tau$ parameter between $0$ and $10ms$ and constant input $I(t) = 2$. The **rand()** function used in a string in *Brian 2* will produce a random number between $0$ and $1$ for each neuron in the network. We can also set up monitors for the **v** state variables, the spike times of each neuron, and the overall population firing rate."
"It is easy to simulate large groups of neurons using *Brian 2*. We can add randomness to parameters of the neuron models to produce heterogeneous networks. Let's simulate 20 LIF neurons that have a random $\\tau$ parameter between 0-10 ms and constant input $I(t) = 2$. The **rand()** function used in a string in *Brian 2* will produce a random number between 0-1 for each neuron in the network. We can also set up monitors for the **v** state variables, the spike times of each neuron, and the overall population firing rate."
]
},
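The simulation cell for this example is collapsed in the hunk below. As a rough guide, a heterogeneous LIF population with per-neuron random `tau` and the three monitors might be set up in Brian 2 along the lines of the following sketch (the exact equations, threshold, reset, and duration used in the notebook are assumptions here, not taken from the diff):

```python
from brian2 import *

n_neurons = 20
sim_length = 100*ms  # assumed duration; the notebook's value is not shown in this hunk

# Normalized LIF model; the threshold and reset values are assumptions
eqs = '''
dv/dt = (I - v) / tau : 1
I : 1
tau : second
'''
G = NeuronGroup(n_neurons, eqs, threshold='v > 1', reset='v = 0', method='euler')
G.tau = 'rand()*10*ms'   # rand() is evaluated once per neuron, giving tau between 0 and 10 ms
G.I = 2                  # constant input

# Monitors for membrane voltages, spike times, and the population firing rate
M = StateMonitor(G, 'v', record=True)
spkM = SpikeMonitor(G)
Rates = PopulationRateMonitor(G)

run(sim_length)
```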
{
@@ -95,15 +94,20 @@
"metadata": {},
"outputs": [],
"source": [
"brian_plot(Rates)\n",
"plt.title(\"Population PSTH\")\n",
"plt.show()\n",
"\n",
"plt.plot(spkM.t/ms,spkM.i, '|',color = 'k')\n",
"plt.title(\"Population Raster Plot\")\n",
"plt.yticks(np.arange(20))\n",
"plt.xlabel(\"Time (ms)\")\n",
"plt.ylabel(\"Neuron Index\")"
"fig, ax = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(9, 4))\n",
"ax[0].plot(spkM.t/ms,spkM.i, '|',color = 'k')\n",
"ax[0].set_title(\"Population Raster Plot\")\n",
"ax[0].set_ylabel(\"Neuron Index\")\n",
"\n",
"brian_plot(Rates, axes=ax[1])\n",
"ax[1].set_title(\"Population PSTH\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also look at the internal state variable for any of the neurons in the network. Here are the first 3 neurons:"
]
},
{
@@ -112,13 +116,12 @@
"metadata": {},
"outputs": [],
"source": [
"n_plot = 5\n",
"fig, ax = plt.subplots(nrows = 1, ncols = n_plot, figsize=(20,5))\n",
"for i in range(n_plot):\n",
" ax[i].plot(M.t/ms,M.v[i])\n",
" ax[i].set_xlabel(\"Time (ms)\")\n",
" ax[i].set_title(f\"Neuron {i}\")\n",
"ax[0].set_ylabel(\"V\")"
"n_plot = 3\n",
"fig, axes = plt.subplots(nrows = n_plot, ncols = 1, sharex=True, figsize=(9,6))\n",
"for idx, ax in enumerate(axes):\n",
" ax.plot(M.t/ms,M.v[idx])\n",
" ax.set_ylabel(f\"V_{idx}\")\n",
"ax.set_xlabel(\"Time (ms)\");"
]
},
{
@@ -129,7 +132,7 @@
"\n",
"**A**. Create a network with the following properties:\n",
"\n",
"+ 30 QIF neurons with a threshold of 40 and reset of 0.5\n",
"+ 30 QIF neurons with a threshold of 40 and reset of 0.5 (look in the previous notebook for this model)\n",
"+ random $\\tau$ between 5 and 10ms\n",
"+ random initial **v** condition between 0 and 1\n",
"+ constant input of 1\n",
@@ -143,7 +146,7 @@
"source": [
"## Synapses\n",
"\n",
"Creating synapses between the neurons in our network will allow the firing behavior of one neuron to be modulated by the firing of other neurons in the network (we can add an additonal monitor for the synapses). We will start out with a very basic synapse. When the pre-synaptic neuron fires, the voltage of the post-synaptic neuron will jump by the synaptic weight value between the two neurons. Let's simulate a network identical to the one in the first example, but now with a synapse connectivity matrix with the following properties:\n",
"Right now our network is totally unconnected: each neuron spikes without any dependence on the others. To make things more interesting, we need to create synapses between the neurons in our network. We will start out with a basic synapse. When the pre-synaptic neuron fires, the voltage of the post-synaptic neuron will jump by the synaptic weight value between the two neurons. Let's simulate a network identical to the one in the first example, but now with a synapse connectivity matrix with the following properties:\n",
"\n",
"+ the weights of each synapse will follow a Gaussian distribution with a mean of 0 (i.e., equal excitatory and inhibitory connections) and standard deviation of 0.1\n",
"+ each neuron will have a probability of 0.5 to be synapsed to any other neuron\n",
@@ -178,15 +181,44 @@
"#Create Synapses\n",
"S = Synapses(G,G,\"w:1\",on_pre='v += w') #create synapses from neuron group G to neuron group G\n",
"S.connect(p = 0.5, condition = 'i!=j') #connects synapses with optional conditons\n",
"S.w = np.random.normal(0,.1,size = len(S)) #random matrix with size equal to number of synapses needed\n",
"\n",
"S.w = np.random.normal(0,.1,size = len(S)) #random matrix with size equal to number of synapses needed"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is the connectivity matrix of the network:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"brian_plot(S.w);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we'll run the simulation and plot the resulting activity:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Monitor Network State\n",
"M = StateMonitor(G,\"v\", record = True)\n",
"spkM = SpikeMonitor(G)\n",
"Rates = PopulationRateMonitor(G)\n",
"Syn_M = StateMonitor(S,'w',record = True) #monitor for synapses\n",
"\n",
"\n",
"run(sim_length)"
]
},
@@ -196,19 +228,13 @@
"metadata": {},
"outputs": [],
"source": [
"brian_plot(Rates)\n",
"plt.title(\"Population PSTH\")\n",
"plt.show()\n",
"\n",
"brian_plot(S.w)\n",
"plt.title(\"Connectivity Matrix\")\n",
"plt.show()\n",
"\n",
"plt.plot(spkM.t/ms,spkM.i, '|',color = 'k')\n",
"plt.title(\"Population Raster Plot\")\n",
"plt.yticks(np.arange(20))\n",
"plt.xlabel(\"Time (ms)\")\n",
"plt.ylabel(\"Neuron Index\")"
"fig, ax = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(9, 4))\n",
"ax[0].plot(spkM.t/ms,spkM.i, '|',color = 'k')\n",
"ax[0].set_title(\"Population Raster Plot\")\n",
"ax[0].set_ylabel(\"Neuron Index\")\n",
"\n",
"brian_plot(Rates, axes=ax[1])\n",
"ax[1].set_title(\"Population PSTH\")"
]
},
{
Expand All @@ -217,13 +243,12 @@
"metadata": {},
"outputs": [],
"source": [
"n_plot = 5\n",
"fig, ax = plt.subplots(nrows = 1, ncols = n_plot, figsize=(20,5))\n",
"for i in range(n_plot):\n",
" ax[i].plot(M.t/ms,M.v[i])\n",
" ax[i].set_xlabel(\"Time (ms)\")\n",
" ax[i].set_title(f\"Neuron {i}\")\n",
"ax[0].set_ylabel(\"V\")"
"n_plot = 3\n",
"fig, axes = plt.subplots(nrows = n_plot, ncols = 1, sharex=True, figsize=(9,6))\n",
"for idx, ax in enumerate(axes):\n",
" ax.plot(M.t/ms,M.v[idx])\n",
" ax.set_ylabel(f\"V_{idx}\")\n",
"ax.set_xlabel(\"Time (ms)\");"
]
},
{
@@ -258,7 +283,7 @@
"source": [
"## Spike-Timing-Dependent Plasticity\n",
"\n",
"Synapses in the nervous system are dynamic and we may want our SNNs to reflect this property. A common model of Hebbian learning is spike-timing-dependent plasticity (STDP). If the pre-synaptic neuron fires before the post-synaptic neuron, the strength of that synapses increases. If the opposite occurs, the synapse will decrease in strength. A simple way to model STDP is with the following equations:\n",
"One of the ways the brain implements learning is by modifying the strength of synapses. A common model of Hebbian learning is spike-timing-dependent plasticity (STDP). If the pre-synaptic neuron fires before the post-synaptic neuron, the strength of that synapses increases. If the opposite occurs, the synapse will decrease in strength. A simple way to model STDP is with the following equations:\n",
"\n",
"$\\Delta w = \\sum_{t_{pre}} \\sum_{t_{post}} W(t_{post} - t_{pre})$\n",
"\n",
@@ -363,26 +388,20 @@
"metadata": {},
"outputs": [],
"source": [
"brian_plot(Rates)\n",
"plt.title(\"Population PSTH\")\n",
"plt.show()\n",
"\n",
"brian_plot(S.w)\n",
"plt.title(\"Connectivity Matrix\")\n",
"plt.show()\n",
"\n",
"plt.plot(spkM.t/ms,spkM.i, '|',color = 'k')\n",
"plt.title(\"Population Raster Plot\")\n",
"plt.yticks(np.arange(20))\n",
"plt.xlabel(\"Time (ms)\")\n",
"plt.ylabel(\"Neuron Index\")"
"fig, ax = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(9, 4))\n",
"ax[0].plot(spkM.t/ms,spkM.i, '|',color = 'k')\n",
"ax[0].set_title(\"Population Raster Plot\")\n",
"ax[0].set_ylabel(\"Neuron Index\")\n",
"\n",
"brian_plot(Rates, axes=ax[1])\n",
"ax[1].set_title(\"Population PSTH\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The connectivity matrix plotted above is the final state of the network synapses. We can access the connectivity matrix at every time point by indexing the synapse monitor that we created. Below shows an alternative way to plot the connectivity matrix of the network, as well as plotting the matrix at different time points in the simulation."
"Let's take a look at how the connectivity matrix changes as a result of the plasticity we added to the network. We can access the connectivity matrix at every time point by indexing the synapse monitor that we created. "
]
},
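The indexing itself is in the collapsed portion of the cell below. One way to rebuild the full weight matrix at a given recorded time step, using the synapse index arrays `S.i`/`S.j` and the monitor `Syn_M`, might look like this sketch (the helper name is hypothetical):

```python
import numpy as np

def weights_at(step):
    """Rebuild the full n_neurons x n_neurons weight matrix at one recorded time step."""
    W = np.zeros((n_neurons, n_neurons))
    # Syn_M.w has shape (n_synapses, n_timesteps); S.i / S.j give each synapse's
    # pre- and post-synaptic neuron indices (self-synapses were never created).
    W[S.i[:], S.j[:]] = Syn_M.w[:, step]
    return W

W_int = weights_at(0)                         # initial connectivity matrix
W_middle = weights_at(Syn_M.w.shape[1] // 2)  # halfway through the simulation
W_final = weights_at(-1)                      # final connectivity matrix
```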
{
@@ -391,6 +410,7 @@
"metadata": {},
"outputs": [],
"source": [
"# We have to do some fancy indexing because Syn_M only contains the connected synapses (i.e. no self-synapses)\n",
"W_int = np.zeros((n_neurons,n_neurons)) #initial connectivity matrix\n",
"W_middle = np.zeros((n_neurons,n_neurons)) #halfway through the simulation\n",
"W_final = np.zeros((n_neurons,n_neurons)) #final connectivity matrix\n",
@@ -406,22 +426,19 @@
"metadata": {},
"outputs": [],
"source": [
"fig, ax = plt.subplots(nrows = 1, ncols = 3, figsize=(15,5))\n",
"fig, ax = plt.subplots(nrows = 1, ncols = 3, sharey=True, figsize=(15,5))\n",
"ax[0].imshow(W_int.T,origin='lower')\n",
"ax[0].set_title(\"Connectivity Matrix (0ms)\")\n",
"ax[0].set_xlabel(\"Source Neuron Index\")\n",
"ax[0].set_ylabel(\"Target Neuron Index\")\n",
"\n",
"\n",
"ax[1].imshow(W_middle.T,origin='lower')\n",
"ax[1].set_title(\"Connectivity Matrix (250ms)\")\n",
"ax[1].set_xlabel(\"Source Neuron Index\")\n",
"ax[1].set_ylabel(\"Target Neuron Index\")\n",
"\n",
"ax[2].imshow(W_final.T,origin='lower')\n",
"ax[2].set_title(\"Connectivity Matrix (500ms)\")\n",
"ax[2].set_xlabel(\"Source Neuron Index\")\n",
"ax[2].set_ylabel(\"Target Neuron Index\")"
"ax[2].set_xlabel(\"Source Neuron Index\")"
]
},
{
@@ -437,87 +454,9 @@
"\n",
"*Hint*: You may need to adjust the STDP parameters to prevent your network for blowing up or becoming silent.\n",
"\n",
"**B**. Plot the PSTH, rasters, and the synaptic connectivity matricies at times $0$, $100$, $300$, and $500ms$."
"**B**. Plot the PSTH, rasters, and the synaptic connectivity matrices at $t=$ 0, 100, 300, and 500 ms."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Liquid State Machines\n",
"\n",
"Training SNNs is often difficult since the outputs of most spiking models of neurons are not differentiable. One easy method is to use a liquid state machine (LSM) architecture. An LSM is a reservoir computing framework where a collection (i.e., reservoir) of neurons are randomly connected and a linear model is trained on the output of the neurons in response to some input. A simple example would be to reconstruct the input into the reservoir based on the responses of the neurons (i.e., a decoding model).\n",
"\n",
"For a univariate stimulus $s(t)$ and neuron responses $r(t)$, a linear decoding model can be specificed by the following equation:\n",
"\n",
"$s(t) = \\sum_{i = 0}^{N} \\sum_{k = 0}^{\\tau} r_i(t+k)w_i(k)$\n",
"\n",
"where, \n",
"\n",
"+ $N$ is the number of neurons in the reservoir\n",
"+ $\\tau$ is some time horizon for hypothesized stimulus saliency (e.g., 50ms)\n",
"+ $r_i(t)$ is the response of neuron $i$ at time $t$\n",
"+ $w_i(k)$ is the decoder weight for neuron $i$ at time lag $k$\n",
"\n",
"For example, suppose we had a stimulus with a duration of 3 time units and recorded from two neurons. The decoding model could then be written in the matrix form (with $\\tau = 2$):\n",
"\n",
"$\\begin{bmatrix}s(0)\\\\s(1)\\\\s(2)\\end{bmatrix} = $\n",
"$\\begin{bmatrix} r_1(0) & r_1(1) & r_1(2) & r_2(0) & r_2(1) & r_2(2)\\\\ r_1(2) & r_1(3) & r_1(4) & r_2(2) & r_2(3) & r_2(4)\\\\ r_1(3) & r_1(4) & r_1(5) & r_2(3) & r_2(4) & r_2(5)\\end{bmatrix}$\n",
"$\\begin{bmatrix}w_1(0)\\\\w_1(1)\\\\w_1(2)\\\\w_2(0)\\\\w_2(1)\\\\w_2(2)\\end{bmatrix}$\n",
"\n",
"I encourage you to justify to yourself why these two version of the decoding framework are equivalent."
]
},
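As a minimal sketch of how the lagged response matrix $R$ and a ridge-regression decoder could be assembled (the function and variable names here are hypothetical, not part of the notebook):

```python
import numpy as np
from sklearn.linear_model import Ridge

def build_lagged_matrix(rates, n_lags):
    """Stack each neuron's response at lags 0..n_lags-1 into the columns of R.

    rates : array of shape (n_neurons, n_timepoints)
    returns R of shape (n_timepoints - n_lags + 1, n_neurons * n_lags)
    """
    n_neurons, T = rates.shape
    n_rows = T - n_lags + 1
    R = np.zeros((n_rows, n_neurons * n_lags))
    for i in range(n_neurons):
        for k in range(n_lags):
            R[:, i * n_lags + k] = rates[i, k:k + n_rows]  # r_i(t + k) for each row t
    return R

# Hypothetical usage: reservoir_rates is (n_neurons, T_record), stimulus has length T_stim
# R = build_lagged_matrix(reservoir_rates, n_lags=50)
# decoder = Ridge(alpha=1.0).fit(R[:len(stimulus)], stimulus)
# s_hat = decoder.predict(R[:len(stimulus)])
```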
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 4\n",
"\n",
"In the above example, why did we need to record longer than the duration of the stimulus? What would happen if the two durations were equal? Is there a relationship between the duration of the stimulus and the time lag $\\tau$ on the length of time we need to record from the neurons?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 5\n",
"\n",
"**A**. Build a LSM to reconstruct the input signal given below (simulation dt = $0.1ms$). I will leave it to you to decide how to set up the reservoir (it's part of the fun). Some hints:\n",
"\n",
"+ A small reservoir will probably give poor performance. I would use around 40 neurons or more to start with.\n",
"+ Add variablitiy to the parameters of the neurons in the reservoir.\n",
"+ Don't use STDP.\n",
"+ After creating the stimulus column vector $S$ and response matrix $R$, use either the linear regression or ridge regression functions from *sklearn*.\n",
"\n",
"**B**. Create a new time-varying signal and input it into the reservoir. Does your trained decoder perform well on this novel input signal?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sim_length = 100*ms\n",
"\n",
"np.random.seed(138)\n",
"Input_Signal = np.sin(np.arange(1000)*0.01)\n",
"for i in range(5):\n",
" new_freq = np.sin(np.arange(1000)*np.random.normal(0,.1,1))\n",
" Input_Signal += new_freq\n",
"\n",
"plt.plot(Input_Signal)\n",
"#Use this signal for your LSM"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
