When I try a `LegendreDelay(theta=0.3, order=10)` with a 1 Hz sine input, I get the orange line below:
With a larger `theta` or a higher `order`, it goes horribly unstable and heads off to infinity.
However, if I try implementing the LegendreDelay myself, I get the blue line, which seems to work perfectly well. Here's my implementation:
```python
import nengo
import nengolib
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg
from scipy.special import legendre


class LegendreDelay(nengo.synapses.Synapse):
    def __init__(self, theta, q):
        self.q = q          # number of internal state dimensions per input
        self.theta = theta  # size of time window (in seconds)

        # Do Aaron's math to generate the matrices
        # https://github.com/arvoelke/nengolib/blob/master/nengolib/synapses/analog.py#L536
        A = np.zeros((q, q))
        B = np.zeros((q, 1))
        for i in range(q):
            B[i] = (-1.)**i * (2*i + 1)
            for j in range(q):
                A[i, j] = (2*i + 1) * (-1 if i < j else (-1.)**(i - j + 1))
        self.A = A / theta
        self.B = B / theta

        super().__init__(default_size_in=1, default_size_out=1)

    def make_step(self, shape_in, shape_out, dt, rng, state=None):
        state = np.zeros((self.q, shape_in[0]))
        w = self.get_weights_for_delays(1)

        # Handle the fact that we're discretizing the time step
        # https://en.wikipedia.org/wiki/Discretization#Discretization_of_linear_state_space_models
        Ad = scipy.linalg.expm(self.A * dt)
        Bd = np.dot(np.dot(np.linalg.inv(self.A), (Ad - np.eye(self.q))), self.B)

        # this code will be called every timestep
        def step_legendre(t, x, state=state):
            state[:] = np.dot(Ad, state) + np.dot(Bd, x[None, :])
            return w.dot(state).T.flatten()
        return step_legendre

    def get_weights_for_delays(self, r):
        # compute the weights needed to extract the value at time r
        # from the network (r=0 is right now, r=1 is theta seconds ago)
        r = np.asarray(r)
        m = np.asarray([legendre(i)(2*r - 1) for i in range(self.q)])
        return m.reshape(self.q, -1).T


model = nengo.Network()
with model:
    stim = nengo.Node(lambda t: np.sin(t*2*np.pi))
    delay = nengo.Node(None, size_in=2)

    theta = 0.3
    order = 10
    dt = 0.001
    nengo.Connection(stim, delay[0], synapse=LegendreDelay(theta, order))
    nengo.Connection(stim, delay[1], synapse=nengolib.synapses.LegendreDelay(theta, order))
    p = nengo.Probe(delay)

sim = nengo.Simulator(model)
with sim:
    sim.run(2.0)

plt.figure(figsize=(10, 4))
plt.title(f'LegendreDelay(theta={theta}, order={order})')
plt.plot(sim.trange(), sim.data[p]);
```
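One way to sanity-check the hand-rolled discretization is to look at the spectrum of the zero-order-hold matrix `Ad`: if the continuous-time `A` is Hurwitz (all eigenvalues in the left half-plane), then `expm(A*dt)` has spectral radius below 1, so the `step_legendre` recurrence cannot diverge. A minimal sketch, rebuilding only the matrices from the post (this check is not part of the original code):

```python
import numpy as np
import scipy.linalg

def legendre_ss(theta, q):
    # (A, B) matrices exactly as constructed in the post above
    A = np.zeros((q, q))
    B = np.zeros((q, 1))
    for i in range(q):
        B[i] = (-1.)**i * (2*i + 1)
        for j in range(q):
            A[i, j] = (2*i + 1) * (-1 if i < j else (-1.)**(i - j + 1))
    return A / theta, B / theta

theta, q, dt = 0.3, 10, 0.001
A, _ = legendre_ss(theta, q)

# Continuous-time poles sit in the left half-plane...
print(np.linalg.eigvals(A).real.max() < 0)   # True

# ...so the ZOH-discretized update matrix is a contraction in spectrum.
Ad = scipy.linalg.expm(A * dt)
rho = np.abs(np.linalg.eigvals(Ad)).max()
print(rho < 1.0)                             # True
```

Since the exact state-space discretization preserves stability, whatever diverges in the other implementation must be happening before or instead of this step.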
The implementations seem to behave identically for smaller values (the orange and blue lines are right on top of each other):
Any ideas what the difference could be between these implementations?
Is the first part of your post using the Nengo implementation or the Nengolib implementation? If it's the latter, I think it is converting back and forth unnecessarily between transfer-function and state-space forms. I had some issue or TODO for this. (Replying from phone.)
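If the conversion hypothesis is right, the trouble is easy to see in the polynomial coefficients themselves: at order 10 the denominator coefficients span many orders of magnitude, so a transfer-function round trip is numerically delicate. A hypothetical sketch (the matrices are rebuilt from the post above, with `C` reading out `r=1`, where `legendre(i)(1) == 1` for every `i`; this is not nengolib's actual code path):

```python
import numpy as np
from scipy.signal import ss2tf, tf2ss

# Legendre-delay (A, B) from the post; C = ones reads out r=1, D = 0.
theta, q = 0.3, 10
A = np.zeros((q, q))
B = np.zeros((q, 1))
for i in range(q):
    B[i] = (-1.)**i * (2*i + 1)
    for j in range(q):
        A[i, j] = (2*i + 1) * (-1 if i < j else (-1.)**(i - j + 1))
A, B = A / theta, B / theta
C = np.ones((1, q))
D = np.zeros((1, 1))

# Round-trip through transfer-function form.
num, den = ss2tf(A, B, C, D)
A2, B2, C2, D2 = tf2ss(num, den)

# The denominator coefficients span a huge dynamic range at order 10,
# so the round trip can perturb the poles of the recovered system.
print('coefficient spread: %.1e' % (np.abs(den).max() / np.abs(den).min()))
print('max |pole| drift:   %.1e' % np.max(np.abs(
    np.sort(np.linalg.eigvals(A)) - np.sort(np.linalg.eigvals(A2)))))
```

A pole pushed even slightly across the imaginary axis by this round trip would explain the divergence at larger `theta` or `order` while leaving low-order systems looking identical.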