I am now learning about LSMs (Liquid State Machines), and I am trying to understand how exactly they are used for learning.
I am pretty confused by what I've read on the web. I'll write what I've understood so far, but it might be incorrect, so I'd be glad if you could correct me and explain what is actually true:
The LSM itself is not trained at all: it is just initialized with many "temporal neurons" (e.g. Leaky Integrate-and-Fire (LIF) neurons) whose thresholds are drawn randomly, as are the connections between them (i.e. not every pair of neurons needs to share an edge).
If we want to "learn" that x time units after inputting I, the occurrence Y happens, then we "wait" x time units with the LIF "detectors" and see which neurons fired at that specific moment. We can then train a classifier (e.g. a feed-forward network) to map that specific subset of firing neurons to the occurrence Y.
Since we may use many "temporal neurons" in our "liquid", there are many possible subsets of firing neurons, so a specific subset of firing neurons becomes almost unique to the moment x time units after we input I.
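The pipeline described above (fixed random "liquid", then a trained readout on the liquid's state) can be sketched in NumPy. Everything here is an illustrative assumption: the reservoir size, weight distributions, sparsity, leak factor, and the use of a simple least-squares linear readout instead of a feed-forward network are arbitrary choices, not from any specific paper or library.

```python
# Minimal LSM-style sketch: a fixed random LIF-like reservoir plus a
# trained linear readout. All sizes and distributions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 100            # reservoir ("liquid") size
T = 50             # simulate T time steps, then read the state

# Random, fixed (never trained) weights and thresholds.
W_rec = rng.normal(0, 0.5, (N, N)) * (rng.random((N, N)) < 0.1)  # sparse recurrence
W_in = rng.normal(0, 1.0, N)
thresh = rng.uniform(0.5, 1.5, N)

def run_liquid(input_signal, leak=0.9):
    """Run simple LIF-like dynamics; return which neurons fired at the last step."""
    v = np.zeros(N)                # membrane potentials
    spikes = np.zeros(N)
    for t in range(T):
        v = leak * v + W_in * input_signal[t] + W_rec @ spikes
        spikes = (v >= thresh).astype(float)
        v[spikes == 1] = 0.0       # reset fired neurons
    return spikes                  # the "liquid state" at time T

# Two input patterns; only the readout is trained, here by least squares,
# to tell them apart from the liquid state alone.
X, y = [], []
for label in (0, 1):
    for _ in range(20):
        sig = rng.random(T) * (0.5 if label == 0 else 1.5)
        X.append(run_liquid(sig))
        y.append(label)
X, y = np.array(X), np.array(y)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
preds = (X @ w > 0.5).astype(int)
print("training accuracy:", (preds == y).mean())
```

Note that `W_rec`, `W_in`, and `thresh` are never updated; only `w`, the readout, is fit, which is exactly the division of labor the question describes.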
I don't know whether what I wrote above is true at all. I'd appreciate explanations about the topic.
A Liquid State Machine (LSM) is a neural model that performs real-time computation, transforming a time-varying input stream into a higher-dimensional space.
A liquid state machine (LSM) is a type of reservoir computer that uses a spiking neural network. An LSM consists of a large collection of units (called nodes, or neurons). Each node receives time-varying input from external sources (the inputs) as well as from other nodes. Nodes are randomly connected to each other.
From your questions, it seems that you are on the right track. Anyhow, the Liquid State Machine and the Echo State Machine are complex topics that sit at the intersection of computational neuroscience, physics, and machine learning, touching on chaos, dynamical systems, and feedback systems. So it's OK if you feel it's hard to wrap your head around.
To answer your questions:
Regarding LIF (Leaky Integrate-and-Fire) neurons: as I see it (I could be wrong), the big difference between the two approaches is the individual unit. The Liquid State Machine uses biologically inspired spiking neurons, while the Echo State approach uses more analog units. So, in terms of "very short-term memory", in the Liquid State approach each individual neuron remembers its own history, whereas in the Echo State approach each individual unit reacts based only on the current state, and therefore the memory is stored in the activity between the units.
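That unit-level difference can be made concrete with two toy update rules. These are simplified, assumed forms (leak factor, threshold, and weights are arbitrary), not equations from any specific paper: the LIF unit carries memory in its membrane potential, while the analog Echo State unit is a static nonlinearity of its current input plus recurrent feedback.

```python
# Toy contrast between a spiking LIF unit and an analog ESN-style unit.
# Parameters (leak, thresh, w_rec, w_in) are illustrative assumptions.
import numpy as np

def lif_step(v, x, leak=0.9, thresh=1.0):
    """LIF unit: the membrane potential v integrates input over time,
    so the unit's response depends on its own history."""
    v = leak * v + x
    spike = v >= thresh
    if spike:
        v = 0.0  # reset after firing
    return v, spike

def esn_step(h, x, w_rec=0.5, w_in=1.0):
    """Analog unit: a memoryless nonlinearity of the current input plus
    recurrent feedback; memory lives in the network state, not the unit."""
    return np.tanh(w_in * x + w_rec * h)

# A repeated sub-threshold input: the LIF unit accumulates it and fires.
v, fired = 0.0, False
for _ in range(5):
    v, spike = lif_step(v, 0.3)
    fired = fired or spike
print("LIF fired at some point:", fired)

# The analog unit just settles toward a fixed response to the constant input.
h = 0.0
for _ in range(5):
    h = esn_step(h, 0.3)
print("ESN-style state after constant input:", round(float(h), 3))
```

No single input step crosses the LIF threshold here; only the accumulated history does, which is the "each neuron remembers its own history" point above.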