 

Liquid State Machine: How it works and how to use it?


I am now learning about LSM (Liquid State Machines), and I am trying to understand how exactly they are used for learning.

I am pretty confused from what I've read over the web. I'll write what I've understood so far, but it might be incorrect, so I'd be glad if you could correct me and explain what's true:

  1. LSMs are not trained at all: They are just initialized with many "temporal neurons" (e.g. Leaky Integrate & Fire neurons), while their thresholds are drawn randomly, and so are the connections between them (i.e. a neuron doesn't have to have a common edge with each of the other neurons).

  2. If we want to "learn" that x time-units after inputting I, the occurrence Y occurs, then we need to "wait" x time-units with the LIF "detectors", and see which neurons fired at this specific moment. Then, we can train a classifier (e.g. FeedForward Network), that this specific subset of firing neurons means that the occurrence Y happened.

  3. We may use many "temporal neurons" in our "liquid", so there can be many different possible subsets of firing neurons; a specific subset of firing neurons thus becomes almost unique to the moment x time-units after inputting our input I.

I don't know whether what I wrote above is true at all. I'd appreciate explanations about the topic.
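For concreteness, here is roughly how I imagine the three points above in code — a toy, untrained random LIF reservoir whose final firing pattern serves as the "state" a separate classifier would be trained on. All parameter values and names here are illustrative assumptions, not taken from any particular LSM implementation:

```python
import random

random.seed(0)

N = 50               # number of LIF neurons in the liquid
P_CONNECT = 0.2      # sparse random connectivity
LEAK = 0.9           # membrane leak factor per time step

# Point 1: random, untrained reservoir -- thresholds and
# connections are drawn once and never adjusted.
thresholds = [random.uniform(0.5, 1.5) for _ in range(N)]
weights = {}
for i in range(N):
    for j in range(N):
        if i != j and random.random() < P_CONNECT:
            weights[(i, j)] = random.uniform(-0.5, 1.0)

def run_liquid(input_current, steps):
    """Drive the liquid with a constant per-neuron input and return
    which neurons fired at the final step -- the 'liquid state' that
    a separate readout classifier would be trained on (point 2)."""
    v = [0.0] * N            # membrane potentials
    fired = [False] * N
    for _ in range(steps):
        incoming = [0.0] * N
        for (i, j), w in weights.items():
            if fired[i]:
                incoming[j] += w
        new_fired = [False] * N
        for j in range(N):
            v[j] = LEAK * v[j] + incoming[j] + input_current[j]
            if v[j] >= thresholds[j]:
                new_fired[j] = True
                v[j] = 0.0   # reset after a spike
        fired = new_fired
    return tuple(fired)

# Point 3: different inputs should leave distinguishable
# high-dimensional firing patterns after x time steps.
input_a = [0.3 if k % 2 == 0 else 0.0 for k in range(N)]
input_b = [0.0 if k % 2 == 0 else 0.3 for k in range(N)]
state_a = run_liquid(input_a, steps=20)
state_b = run_liquid(input_b, steps=20)
print(state_a != state_b)
```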

SomethingSomething asked Feb 04 '15 16:02



1 Answer

From your questions, it seems that you are on the right track. That said, the Liquid State Machine and the Echo State Machine are complex topics that sit at the intersection of computational neuroscience, physics, and machine learning, touching on chaos, dynamical systems, and feedback systems. So it's OK if you feel it's hard to wrap your head around them.

To answer your questions:

  1. Most implementations of Liquid State Machines leave the reservoir of neurons untrained. There have been some attempts to train the reservoir, but they haven't had the dramatic success that would justify the computational power needed. (See "Reservoir Computing Approaches to Recurrent Neural Network Training" or "The p-Delta Learning Rule for Parallel Perceptrons".)

    My opinion is that if you want to use the liquid as a classifier, in terms of separability or generalization of patterns, you can gain much more from the way the neurons connect to each other (see Hazan, H. and Manevitz, L., "Topological constraints and robustness in liquid state machines", Expert Systems with Applications, Volume 39, Issue 2, pages 1597-1606, February 2012, or "Which Model to Use for the Liquid State Machine?"). The biological approach is, in my opinion, the most interesting one (see "What Can a Neuron Learn with Spike-Timing-Dependent Plasticity?").
  2. You are right: you need to wait at least until you finish giving the input, otherwise you risk detecting your input itself, rather than the activity that occurs as a result of your input, as it should be.
  3. Yes. You can think of the liquid's complexity as the kernel in an SVM, which tries to project the data points into some higher-dimensional space, and of the detector in the liquid as the part that tries to separate the classes in the dataset. As a rule of thumb, the number of neurons and the way they connect to each other determine the degree of complexity of the liquid.
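As a rough illustration of that kernel/readout analogy, here is a toy sketch in plain Python. The "liquid states" are faked as noisy, class-dependent firing patterns rather than produced by a real reservoir, and all names and parameters are made up for illustration — the point is only that the readout (here a simple perceptron) is the sole trained component:

```python
import random

random.seed(1)
DIM = 30  # dimensionality of the liquid state (one bit per neuron)

def liquid_state(label, noise=0.1):
    """Stand-in for the liquid: a class-dependent firing pattern with
    noise. In a real LSM this vector would be read off the reservoir."""
    base = [(i + label) % 2 for i in range(DIM)]
    return [b ^ (random.random() < noise) for b in base]

# Perceptron readout: the only part of the machine that is trained.
w = [0.0] * DIM
bias = 0.0

def predict(x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + bias
    return 1 if s > 0 else 0

for _ in range(200):  # online perceptron updates on random samples
    label = random.randint(0, 1)
    x = liquid_state(label)
    err = label - predict(x)
    if err:
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        bias += 0.1 * err

# The linear readout separates the two firing patterns.
correct = sum(predict(liquid_state(l)) == l for l in [0, 1] * 50)
print(correct / 100)
```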

Regarding LIF (Leaky Integrate & Fire) neurons: as I see it (I could be wrong), the big difference between the two approaches is the individual unit. The Liquid State Machine uses biologically inspired spiking neurons, while the Echo State approach uses more analog units. So, in terms of "very short-term memory", in the Liquid State approach each individual neuron remembers its own history, whereas in the Echo State approach each individual neuron reacts based only on the current state, and therefore the memory is stored in the activity between the units.
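To make that unit-level difference concrete, here is a minimal sketch (parameter values are arbitrary illustrations): the LIF neuron carries a membrane potential between calls — its own short-term memory — while an echo-state-style analog unit is a stateless function of its current drive:

```python
import math

def lif_step(v, inp, leak=0.9, threshold=1.0):
    """One LIF update: leak the potential, integrate the input,
    and fire-and-reset if the threshold is crossed."""
    v = leak * v + inp
    if v >= threshold:
        return 0.0, 1          # reset potential, emit a spike
    return v, 0

def esn_unit(inp):
    """Analog echo-state unit: a stateless nonlinearity of the drive."""
    return math.tanh(inp)

# The LIF neuron's output depends on its history of sub-threshold
# input: a constant 0.3 never crosses threshold in one step, but the
# accumulated potential eventually does.
v, spikes = 0.0, []
for _ in range(5):
    v, s = lif_step(v, 0.3)
    spikes.append(s)
print(spikes)             # prints [0, 0, 0, 1, 0]

# The analog unit gives the same answer for the same input, every time.
print(esn_unit(0.3) == esn_unit(0.3))  # prints True
```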

Hananel answered Sep 24 '22 15:09