Package mdp :: Package nodes :: Class ISFANode

Class ISFANode


Perform Independent Slow Feature Analysis on the input data.

Note: The covariance matrices (``self.covs``) are not cleared after convergence. If you need to free some memory, you can safely delete them with::

    >>> del self.covs

Note: If you intend to use this node for large datasets please have a look at the ``stop_training`` method documentation for speeding things up.

Reference

Blaschke, T., Zito, T., and Wiskott, L. (2007). Independent Slow Feature Analysis and Nonlinear Blind Source Separation. Neural Computation, 19(4):994-1021. http://itb.biologie.hu-berlin.de/~wiskott/Publications/BlasZitoWisk2007-ISFA-NeurComp.pdf

Instance Methods
 
__init__(self, lags=1, sfa_ica_coeff=(1.0, 1.0), icaweights=None, sfaweights=None, whitened=False, white_comp=None, white_parm=None, eps_contrast=1e-06, max_iter=10000, RP=None, verbose=False, input_dim=None, output_dim=None, dtype=None)
Initializes an object of type 'ISFANode' to perform Independent Slow Feature Analysis.
 
_adjust_ica_sfa_coeff(self)
Adjust SFA/ICA ratio. The ICA and SFA terms are scaled differently because SFA accounts for the diagonal terms whereas ICA accounts for the off-diagonal terms.
 
_do_sweep(self, covs, Q, prev_contrast)
Perform a single sweep.
 
_execute(self, x)
 
_fix_covs(self, covs=None)
 
_fmt_prog_info(self, sweep, pert, contrast, sfa=None, ica=None)
 
_get_contrast(self, covs, bica_bsfa=None)
 
_get_eye(self)
 
_get_rnd_permutation(self, dim)
 
_get_rnd_rotation(self, dim)
 
_get_supported_dtypes(self)
Return the list of dtypes supported by this node.
 
_givens_angle(self, i, j, covs, bica_bsfa=None, complete=0)
 
_givens_angle_case1(self, m, n, covs, bica_bsfa, complete=0)
 
_givens_angle_case2(self, m, n, covs, bica_bsfa, complete=0)
 
_inverse(self, y)
 
_optimize(self)
Optimize the contrast function and return the optimal rotation matrix.
 
_set_dtype(self, dtype)
 
_set_input_dim(self, n)
 
_stop_training(self, covs=None)
Stop the training phase.
 
_train(self, x)
 
execute(self, x)
Process the data contained in x.
 
inverse(self, y)
Invert y.
 
stop_training(self, covs=None)
Stop the training phase.
 
train(self, x)
Update the internal structures according to the input data x.

Inherited from unreachable.newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_check_train_args(self, x, *args, **kwargs)
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_output_dim(self, n)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of numpy.dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
    Inherited from Node
 
is_invertible()
Return True if the node can be inverted, False otherwise.
 
is_trainable()
Return True if the node can be trained, False otherwise.
Instance Variables
  RP
The global rotation-permutation matrix. This is the filter applied to the input data to obtain the output data.
  RPC
The complete global rotation-permutation matrix. This is a matrix of dimension input_dim x input_dim (the 'outer space' is retained).
  covs
An mdp.utils.MultipleCovarianceMatrices instance containing the time-delayed covariance matrices of the input data. After convergence the uppermost output_dim x output_dim submatrices should be almost diagonal. self.covs[n-1] is the covariance matrix relative to the n-th time-lag.
  final_contrast
Like initial_contrast, but computed after convergence.
  initial_contrast
A dictionary with the starting contrast and the SFA and ICA parts of it.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of tuples:
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, lags=1, sfa_ica_coeff=(1.0, 1.0), icaweights=None, sfaweights=None, whitened=False, white_comp=None, white_parm=None, eps_contrast=1e-06, max_iter=10000, RP=None, verbose=False, input_dim=None, output_dim=None, dtype=None)
(Constructor)

 

Initializes an object of type 'ISFANode' to perform Independent Slow Feature Analysis.

The notation is the same as in the paper by Blaschke et al. Please refer to the paper for more information.

Parameters:
  • lags (list or int) - A list of time-lags used to generate the time-delayed covariance matrices (in the paper this is the set of τ). If lags is an integer, time-lags 1, 2, ..., lags are used. Note that time-lag == 0 (instantaneous correlation) is always implicitly used.
  • sfa_ica_coeff (list) - A list of two floats defining the weights of the SFA and ICA parts of the objective function. They are called b_{SFA} and b_{ICA} in the paper.
  • icaweights (int, list or array) - Weighting factors for the covariance matrices relative to the ICA part of the objective function (called κ_{ICA}^{τ} in the paper). Default is 1. Possible values are:

    • An integer n: All matrices are weighted the same (note that it does not make sense to have n != 1).
    • A list or array of floats of len == len(lags): Each element of the list is used for weighting the corresponding matrix.
    • None: Use the default values.
  • sfaweights (int, list or array) - Weighting factors for the covariance matrices relative to the SFA part of the objective function (called κ_{SFA}^{τ} in the paper). Default is [1., 0., ..., 0.]. For possible values see the description of icaweights.
  • whitened (bool) - True if input data is already white, False otherwise (the data will be whitened internally).
  • white_comp (int) - If whitened is False, you can set white_comp to the number of whitened components to keep during the calculation (i.e., the input dimensions are reduced to white_comp by keeping the components of largest variance).
  • white_parm (dict) - A dictionary with additional parameters for whitening. It is passed directly to the WhiteningNode constructor. Ex: white_parm = { 'svd' : True }
  • eps_contrast (float) - Convergence is achieved when the relative improvement in the contrast is below this threshold. Values in the range [1E-4, 1E-10] are usually reasonable.
  • max_iter (int) - If the algorithm does not achieve convergence within max_iter iterations, an Exception is raised. Should be larger than 100.
  • RP - Starting rotation-permutation matrix. It is an input_dim x input_dim matrix used to initially rotate the input components. If not set, the identity matrix is used. In the paper this is used to start the algorithm at the SFA solution (which is often quite near to the optimum).
  • verbose (bool) - Print progress information during convergence. This can slow down the algorithm, but it's the only way to see the rate of improvement and immediately spot if something is going wrong.
  • input_dim (int) - The input dimensionality.
  • output_dim (int) - Sets the number of independent components that have to be extracted. Note that if this is not smaller than input_dim, the problem is solved linearly and SFA would give the same solution only much faster.
  • dtype (numpy.dtype or str) - Datatype to be used.
Overrides: object.__init__
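
A minimal usage sketch follows; the data shape, lag count, and coefficient values are illustrative choices, not values prescribed by the API::

    >>> x = mdp.numx_rand.random((500, 10))        # 500 observations, 10 variables
    >>> isfa = mdp.nodes.ISFANode(lags=5, sfa_ica_coeff=[1., 300.], output_dim=4)
    >>> isfa.train(x)
    >>> isfa.stop_training()
    >>> y = isfa.execute(x)                        # y has shape (500, 4)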

_adjust_ica_sfa_coeff(self)

 
Adjust SFA/ICA ratio. The ICA and SFA terms are scaled differently because SFA accounts for the diagonal terms whereas ICA accounts for the off-diagonal terms.
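
As an illustration of this point (and not of the node's internal code), the two kinds of contributions of a stack of time-delayed covariance matrices can be separated as in the following sketch; split_contrast_terms is a hypothetical helper::

    import numpy as np

    def split_contrast_terms(covs):
        """Illustrative only: split the diagonal (SFA-related) and
        off-diagonal (ICA-related) contributions of a stack of
        time-delayed covariance matrices of shape (n_lags, dim, dim)."""
        diag_mask = np.eye(covs.shape[1], dtype=bool)
        sfa_term = (covs[:, diag_mask] ** 2).sum()     # dim entries per matrix
        ica_term = (covs[:, ~diag_mask] ** 2).sum()    # dim*(dim-1) entries per matrix
        return sfa_term, ica_term

Since each matrix contributes only dim diagonal entries but dim*(dim-1) off-diagonal entries, the two sums live on different scales, which is why the b_{SFA} and b_{ICA} coefficients need to be rescaled.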

_do_sweep(self, covs, Q, prev_contrast)

 
Perform a single sweep.
Parameters:
  • covs - The covariance matrices.
  • Q - The rotation matrix.
  • prev_contrast - The previous contrast.
Returns:
The maximum improvement in contrast, the rotated covariance matrices, and the current contrast.
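
For intuition only: in this kind of joint-diagonalization scheme a sweep visits every pair of signal indices and applies a plane (Givens) rotation to all covariance matrices at once. The sketch below is a simplified stand-in for the real method, which derives the rotation angle from the covariances (see _givens_angle) instead of scanning a grid of angles; contrast_fn is a hypothetical callable mapping the stack of matrices to a scalar to be minimized::

    import numpy as np

    def toy_sweep(covs, contrast_fn):
        """One illustrative sweep over all index pairs (i, j)."""
        dim = covs.shape[1]
        Q = np.eye(dim)
        best = contrast_fn(covs)
        for i in range(dim - 1):
            for j in range(i + 1, dim):
                for phi in np.linspace(-np.pi / 4, np.pi / 4, 9):
                    R = np.eye(dim)
                    R[i, i] = R[j, j] = np.cos(phi)
                    R[i, j], R[j, i] = -np.sin(phi), np.sin(phi)
                    rotated = np.einsum('ab,lbc,cd->lad', R.T, covs, R)
                    value = contrast_fn(rotated)
                    if value < best:   # keep rotations that lower the contrast
                        best, covs, Q = value, rotated, Q @ R
        return Q, covs, best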

_execute(self, x)

 
Overrides: Node._execute

_fix_covs(self, covs=None)

 

_fmt_prog_info(self, sweep, pert, contrast, sfa=None, ica=None)

 

_get_contrast(self, covs, bica_bsfa=None)

 

_get_eye(self)

 

_get_rnd_permutation(self, dim)

 

_get_rnd_rotation(self, dim)

 

_get_supported_dtypes(self)

 

Return the list of dtypes supported by this node.

Support floating point types with size larger or equal than 64 bits.

Overrides: Node._get_supported_dtypes

_givens_angle(self, i, j, covs, bica_bsfa=None, complete=0)

 

_givens_angle_case1(self, m, n, covs, bica_bsfa, complete=0)

 

_givens_angle_case2(self, m, n, covs, bica_bsfa, complete=0)

 

_inverse(self, y)

 
Overrides: Node._inverse

_optimize(self)

 
Optimize the contrast function and return the optimal rotation matrix.
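
Conceptually, the optimization repeats sweeps until the relative gain in contrast drops below eps_contrast or max_iter is exceeded. The sketch below mirrors that documented behaviour only; it reuses the hypothetical sweep helper sketched under _do_sweep and is not the node's implementation::

    import numpy as np

    def toy_optimize(covs, sweep_fn, contrast_fn,
                     eps_contrast=1e-6, max_iter=10000):
        """Illustrative convergence loop, not ISFANode._optimize."""
        Q = np.eye(covs.shape[1])
        contrast = contrast_fn(covs)
        for _ in range(max_iter):
            R, covs, new_contrast = sweep_fn(covs, contrast_fn)
            Q = Q @ R
            if abs(contrast - new_contrast) <= eps_contrast * abs(contrast):
                return Q               # relative improvement below threshold
            contrast = new_contrast
        raise Exception("no convergence within max_iter sweeps")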

_set_dtype(self, dtype)

 
Overrides: Node._set_dtype

_set_input_dim(self, n)

 
Overrides: Node._set_input_dim

_stop_training(self, covs=None)

 

Stop the training phase.

Note: If the node is used on large datasets it may be wise to first learn the covariance matrices, and then tune the parameters until a suitable parameter set has been found (learning the covariance matrices is the slowest part in this case). This could be done for example in the following way (assuming the data is already white)::

    >>> covs = [mdp.utils.DelayCovarianceMatrix(dt, dtype=dtype)
    ...         for dt in lags]
    >>> for block in data:
    ...     [covs[i].update(block) for i in range(len(lags))]

You can then initialize the ISFANode with the desired parameters, do a fake training with some random data to set the internal node structure and then call stop_training with the stored covariance matrices. For example::

    >>> isfa = ISFANode(lags, .....)
    >>> x = mdp.numx_rand.random((100, input_dim)).astype(dtype)
    >>> isfa.train(x)
    >>> isfa.stop_training(covs=covs)

This trick has been used in the paper to apply ISFA to surrogate matrices, i.e. covariance matrices that were not learnt on a real dataset.
Parameters:
  • covs - The covariance matrices.
Overrides: Node._stop_training

_train(self, x)

 
Overrides: Node._train

execute(self, x)

 

Process the data contained in x.

If the object is still in the training phase, the function stop_training will be called. x is a matrix having different variables on different columns and observations on the rows.

By default, subclasses should overwrite _execute to implement their execution phase. The docstring of the _execute method overwrites this docstring.

Overrides: Node.execute

inverse(self, y)

 

Invert y.

If the node is invertible, compute the input x such that y = execute(x).

By default, subclasses should overwrite _inverse to implement their inverse function. The docstring of the inverse method overwrites this docstring.

Overrides: Node.inverse
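
For example, continuing the hedged usage sketch from the constructor section (variable names are illustrative)::

    >>> y = isfa.execute(x)
    >>> x_back = isfa.inverse(y)    # recover an input whose execute() output is y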

stop_training(self, covs=None)

 

Stop the training phase.

Note: If the node is used on large datasets it may be wise to first learn the covariance matrices, and then tune the parameters until a suitable parameter set has been found (learning the covariance matrices is the slowest part in this case). This could be done for example in the following way (assuming the data is already white)::

    >>> covs = [mdp.utils.DelayCovarianceMatrix(dt, dtype=dtype)
    ...         for dt in lags]
    >>> for block in data:
    ...     [covs[i].update(block) for i in range(len(lags))]

You can then initialize the ISFANode with the desired parameters, do a fake training with some random data to set the internal node structure and then call stop_training with the stored covariance matrices. For example::

    >>> isfa = ISFANode(lags, .....)
    >>> x = mdp.numx_rand.random((100, input_dim)).astype(dtype)
    >>> isfa.train(x)
    >>> isfa.stop_training(covs=covs)

This trick has been used in the paper to apply ISFA to surrogate matrices, i.e. covariance matrices that were not learnt on a real dataset.
Parameters:
  • covs - The covariance matrices.
Overrides: Node.stop_training

train(self, x)

 

Update the internal structures according to the input data x.

x is a matrix having different variables on different columns and observations on the rows.

By default, subclasses should overwrite _train to implement their training phase. The docstring of the _train method overwrites this docstring.

Note: a subclass supporting multiple training phases should implement the same signature for all the training phases and document the meaning of the arguments in the _train method doc-string. Having consistent signatures is a requirement to use the node in a flow.

Overrides: Node.train
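
For instance, a hedged sketch of chunk-wise training; the number of chunks and the data dimensions are illustrative::

    >>> isfa = mdp.nodes.ISFANode(lags=3, output_dim=2)
    >>> for _ in range(5):
    ...     chunk = mdp.numx_rand.random((200, 6))
    ...     isfa.train(chunk)
    >>> isfa.stop_training()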

Instance Variable Details

RP

The global rotation-permutation matrix. This is the filter applied to the input data to obtain the output data.

RPC

The complete global rotation-permutation matrix. This is a matrix of dimension input_dim x input_dim (the 'outer space' is retained).

covs

An mdp.utils.MultipleCovarianceMatrices instance containing the time-delayed covariance matrices of the input data. After convergence the uppermost output_dim x output_dim submatrices should be almost diagonal. self.covs[n-1] is the covariance matrix relative to the n-th time-lag.

final_contrast

Like initial_contrast, but computed after convergence.

initial_contrast

A dictionary with the starting contrast and the SFA and ICA parts of it.