Class ISFANode

Perform Independent Slow Feature Analysis on the input data.

Note: The covariance matrices are not cleared after convergence. If you need to free some memory, you can safely delete them with::

    >>> del self.covs

Note: If you intend to use this node for large datasets, please have a look at the ``stop_training`` method documentation for speeding things up.

Reference:
Blaschke, T., Zito, T., and Wiskott, L. (2007). Independent Slow Feature Analysis and Nonlinear Blind Source Separation. Neural Computation 19(4):994-1021. http://itb.biologie.hu-berlin.de/~wiskott/Publications/BlasZitoWisk2007-ISFA-NeurComp.pdf
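A minimal usage sketch (not part of the original documentation; the toy signals, lags, and dimensions are illustrative assumptions)::

    >>> import numpy as np
    >>> import mdp
    >>> # Toy input with temporal structure: two slow sources plus simple
    >>> # nonlinear mixtures (rows are observations, columns are variables).
    >>> t = np.linspace(0, 20 * np.pi, 5000)
    >>> src = np.c_[np.sin(t), np.cos(3 * t)]
    >>> x = np.c_[src, src ** 2]
    >>> # Extract 2 independent slow components using time-lags 1..10.
    >>> isfa = mdp.nodes.ISFANode(lags=10, whitened=False, output_dim=2)
    >>> isfa.train(x)
    >>> isfa.stop_training()
    >>> y = isfa.execute(x)
    >>> # The covariance matrices are kept after convergence; free them:
    >>> del isfa.covs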
Instance Variables

RP: The global rotation-permutation matrix. This is the filter applied to the input data to obtain the output data.

RPC: The complete global rotation-permutation matrix. This is a matrix of dimension input_dim x input_dim (i.e., the 'outer space' is retained).

covs: A mdp.utils.MultipleCovarianceMatrices instance holding the time-delayed covariance matrices of the input data. After convergence the uppermost output_dim x output_dim submatrices should be almost diagonal. self.covs[n-1] is the covariance matrix relative to the n-th time-lag.

initial_contrast: A dictionary with the starting contrast and the SFA and ICA parts of it.

final_contrast: Like initial_contrast, but after convergence.
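A short inspection sketch (assumes ``isfa`` is a trained ISFANode whose ``covs`` have not yet been deleted, and that mdp.utils.MultipleCovarianceMatrices exposes the number of stored matrices as ``ncovs``)::

    >>> import numpy as np
    >>> d = isfa.output_dim
    >>> isfa.RPC.shape               # (input_dim, input_dim)
    >>> # After convergence the uppermost d x d submatrix of each
    >>> # time-lagged covariance matrix should be almost diagonal.
    >>> for n in range(1, isfa.covs.ncovs + 1):
    ...     sub = isfa.covs[n - 1][:d, :d]
    ...     off = sub - np.diag(np.diag(sub))
    ...     print(n, abs(off).max())  # close to zero after convergence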
Inherited from Node:

_train_seq: List of tuples [(training-phase1, stop-training-phase1), (training-phase2, stop-training-phase2), ...]

dtype: dtype of the internal structures.

input_dim: Input dimensions.

output_dim: Output dimensions.

supported_dtypes: Supported dtypes.
Methods

__init__(...)
Initialize an object of type 'ISFANode' to perform Independent Slow Feature Analysis. The notation is the same as that used in the paper by Blaschke et al.; please refer to the paper for more information.
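A hedged construction sketch (the keyword names follow the ISFANode signature in MDP; the values are illustrative, not documented defaults)::

    >>> import mdp
    >>> isfa = mdp.nodes.ISFANode(
    ...     lags=10,                 # time-lags 1..10 (a list also works)
    ...     sfa_ica_coeff=[1., 1.],  # relative weights of the SFA and ICA terms
    ...     whitened=False,          # input is not white; the node whitens it
    ...     eps_contrast=1e-7,       # convergence threshold on the contrast
    ...     max_iter=10000,          # hard cap on the number of iterations
    ...     verbose=False,
    ...     output_dim=2)            # number of independent slow signals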
_get_supported_dtypes()
Return the list of dtypes supported by this node. Only floating point types with a size of at least 64 bits are supported.
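A small sketch of the consequence (assumes the usual mdp.NodeException is raised when an unsupported dtype is requested)::

    >>> import mdp
    >>> node = mdp.nodes.ISFANode(lags=5, dtype='float64')  # supported
    >>> node.supported_dtypes        # no 32-bit types in this list
    >>> try:
    ...     mdp.nodes.ISFANode(lags=5, dtype='float32')     # < 64 bits
    ... except mdp.NodeException as exc:
    ...     print('float32 rejected:', exc)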
stop_training(covs=None)
Stop the training phase.

Note: If the node is used on large datasets it may be wise to first learn the covariance matrices, and then tune the parameters until a suitable parameter set has been found (learning the covariance matrices is the slowest part in this case). This could be done for example in the following way (assuming the data is already white)::

    >>> covs = [mdp.utils.DelayCovarianceMatrix(dt, dtype=dtype)
    ...         for dt in lags]
    >>> for block in data:
    ...     [covs[i].update(block) for i in range(len(lags))]

You can then initialize the ISFANode with the desired parameters, do a fake training with some random data to set the internal node structure, and then call stop_training with the stored covariance matrices. For example::

    >>> isfa = ISFANode(lags, ...)
    >>> x = mdp.numx_rand.random((100, input_dim)).astype(dtype)
    >>> isfa.train(x)
    >>> isfa.stop_training(covs=covs)

This trick has been used in the paper to apply ISFA to surrogate matrices, i.e. covariance matrices that were not learnt on a real dataset.
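A self-contained version of the trick above (the block list, lags, and dimensions are toy assumptions; real blocks would be streamed from disk)::

    >>> import mdp
    >>> lags = list(range(1, 11))          # time-lags 1..10
    >>> input_dim, dtype = 4, 'float64'
    >>> # Stand-in for a stream of (already white) data blocks.
    >>> data_blocks = [mdp.numx_rand.random((1000, input_dim)).astype(dtype)
    ...                for _ in range(5)]
    >>> # 1. Learn the time-delayed covariance matrices (the slow part).
    >>> covs = [mdp.utils.DelayCovarianceMatrix(dt, dtype=dtype) for dt in lags]
    >>> for block in data_blocks:
    ...     for cov in covs:
    ...         cov.update(block)
    >>> # 2. Fake-train on random data to set the internal structure, then
    >>> #    hand the precomputed matrices to stop_training.
    >>> isfa = mdp.nodes.ISFANode(lags=lags, whitened=True,
    ...                           input_dim=input_dim, dtype=dtype)
    >>> x = mdp.numx_rand.random((100, input_dim)).astype(dtype)
    >>> isfa.train(x)
    >>> isfa.stop_training(covs=covs)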
execute(x, *args, **kwargs)
Process the data contained in x. If the object is still in the training phase, the function stop_training will be called.

By default, subclasses should overwrite _execute to implement their execution phase. The docstring of the _execute method overwrites this docstring.
inverse(y, *args, **kwargs)
Invert y. If the node is invertible, compute the input x such that y = execute(x).

By default, subclasses should overwrite _inverse to implement their inverse function. The docstring of the _inverse method overwrites this docstring.
train(x, *args, **kwargs)
Update the internal structures according to the input data x.

By default, subclasses should overwrite _train to implement their training phase. The docstring of the _train method overwrites this docstring.

Note: a subclass supporting multiple training phases should implement the same signature for all the training phases and document the meaning of the arguments in the _train method docstring. Having consistent signatures is a requirement to use the node in a flow.
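A minimal incremental-training sketch (chunk count, size, and dimensions are illustrative)::

    >>> import mdp
    >>> isfa = mdp.nodes.ISFANode(lags=5, output_dim=2)
    >>> # The covariance matrices accumulate over successive train calls,
    >>> # so large datasets can be fed in chunks.
    >>> for _ in range(10):
    ...     isfa.train(mdp.numx_rand.random((1000, 4)))
    >>> isfa.stop_training()   # the iterative ISFA optimization runs here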