
Class TDSEPNode


Perform Independent Component Analysis using the TDSEP algorithm.

Note: TDSEP, as implemented in this Node, is an online algorithm, i.e. it is suited to be trained on huge data sets, provided that the training is done by sending small chunks of data at a time.
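As a rough sketch of such chunk-wise training (the class and methods used below are documented on this page; data_chunks and test_data are assumed placeholders for your own 2D arrays, not part of the API):

    >>> import mdp
    >>> node = mdp.nodes.TDSEPNode(lags=5)
    >>> for chunk in data_chunks:          # each chunk: (observations, channels)
    ...     node.train(chunk)
    >>> node.stop_training()
    >>> sources = node.execute(test_data)  # estimated independent components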

Reference

Ziehe, Andreas and Müller, Klaus-Robert (1998). TDSEP - an efficient algorithm for blind separation using time structure. In Niklasson, L., Boden, M., and Ziemke, T. (Editors), Proc. 8th Int. Conf. Artificial Neural Networks (ICANN 1998).

Instance Methods
 
__init__(self, lags=1, limit=1e-05, max_iter=10000, verbose=False, whitened=False, white_comp=None, white_parm=None, input_dim=None, dtype=None)
Initializes an object of type 'TDSEPNode'.
 
_stop_training(self, covs=None)
Stop the training phase.
 
stop_training(self, covs=None)
Stop the training phase.

Inherited from ProjectMatrixMixin: get_projmatrix, get_recmatrix

Inherited from newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from ISFANode
 
_adjust_ica_sfa_coeff(self)
Adjust SFA/ICA ratio. The ICA and SFA terms are scaled differently because SFA accounts for the diagonal terms whereas ICA accounts for the off-diagonal terms.
 
_do_sweep(self, covs, Q, prev_contrast)
Perform a single sweep.
 
_execute(self, x)
 
_fix_covs(self, covs=None)
 
_fmt_prog_info(self, sweep, pert, contrast, sfa=None, ica=None)
 
_get_contrast(self, covs, bica_bsfa=None)
 
_get_eye(self)
 
_get_rnd_permutation(self, dim)
 
_get_rnd_rotation(self, dim)
 
_get_supported_dtypes(self)
Return the list of dtypes supported by this node.
 
_givens_angle(self, i, j, covs, bica_bsfa=None, complete=0)
 
_givens_angle_case1(self, m, n, covs, bica_bsfa, complete=0)
 
_givens_angle_case2(self, m, n, covs, bica_bsfa, complete=0)
 
_inverse(self, y)
 
_optimize(self)
Optimize the contrast function and return the optimal rotation matrix.
 
_set_dtype(self, dtype)
 
_set_input_dim(self, n)
 
_train(self, x)
 
execute(self, x)
Process the data contained in x.
 
inverse(self, y)
Invert y.
 
train(self, x)
Update the internal structures according to the input data x.
    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_check_train_args(self, x, *args, **kwargs)
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_output_dim(self, n)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of numpy.dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
    Inherited from Node
 
is_invertible()
Return True if the node can be inverted, False otherwise.
 
is_trainable()
Return True if the node can be trained, False otherwise.
Instance Variables
  convergence
The value of the convergence threshold.
  filters
The ICA filters matrix (this is the transpose of the projection matrix after whitening).
  white
The whitening node used for preprocessing.
    Inherited from ISFANode
  RP
The global rotation-permutation matrix. This is the filter applied to the input data to obtain the output data.
  RPC
The complete global rotation-permutation matrix. This is a matrix of dimension input_dim x input_dim (the 'outer space' is retained).
  covs
An mdp.utils.MultipleCovarianceMatrices instance containing the time-delayed covariance matrices of the input data. After convergence the uppermost output_dim x output_dim submatrices should be almost diagonal. self.covs[n-1] is the covariance matrix relative to the n-th time-lag.
  final_contrast
Like initial_contrast, but after convergence.
  initial_contrast
A dictionary with the starting contrast and the SFA and ICA parts of it.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of tuples:
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, lags=1, limit=1e-05, max_iter=10000, verbose=False, whitened=False, white_comp=None, white_parm=None, input_dim=None, dtype=None)
(Constructor)

 

Initializes an object of type 'TDSEPNode'.

Note: Time-lag == 0 (instantaneous correlation) is always implicitly used.
Parameters:
  • lags (list or int) - List of time-lags to generate the time-delayed covariance matrices. If lags is an integer, time-lags 1,2,...,'lags' are used.
  • limit (float) - Convergence threshold.
  • max_iter (int) - If the algorithm does not achieve convergence within max_iter iterations, an Exception is raised. Should be larger than 100.
  • verbose (bool) - Indicates whether information about the operation is to be reported.
  • whitened (bool) - Set whitened to True if the input data are already whitened. Otherwise the node will whiten the data itself.
  • white_comp (int) - If whitened is False, you can set 'white_comp' to the number of whitened components to keep during the calculation (i.e., the input dimensions are reduced to white_comp by keeping the components of largest variance).
  • white_parm (dict) - A dictionary with additional parameters for whitening. It is passed directly to the WhiteningNode constructor. For example:

    >>> white_parm = { 'svd' : True }
    
  • input_dim (int) - The input dimensionality.
  • dtype (numpy.dtype or str) - The datatype.
Overrides: object.__init__
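As a brief illustration (a sketch using only the parameters documented above; the chosen values are arbitrary):

    >>> node = mdp.nodes.TDSEPNode(lags=20, limit=1e-5,
    ...                            white_comp=10, white_parm={'svd': True})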

_stop_training(self, covs=None)

 

Stop the training phase.

Note: If the node is used on large datasets it may be wise to first learn the covariance matrices, and then tune the parameters until a suitable parameter set has been found (learning the covariance matrices is the slowest part in this case). This could be done for example in the following way (assuming the data is already white):

    >>> covs = [mdp.utils.DelayCovarianceMatrix(dt, dtype=dtype)
    ...         for dt in lags]
    >>> for block in data:
    ...     [covs[i].update(block) for i in range(len(lags))]

You can then initialize the ISFANode with the desired parameters, do a fake training with some random data to set the internal node structure and then call stop_training with the stored covariance matrices. For example:

    >>> isfa = ISFANode(lags, .....)
    >>> x = mdp.numx_rand.random((100, input_dim)).astype(dtype)
    >>> isfa.train(x)
    >>> isfa.stop_training(covs=covs)

This trick has been used in the paper to apply ISFA to surrogate matrices, i.e. covariance matrices that were not learnt on a real dataset.
Parameters:
  • covs - The covariance matrices.
Overrides: Node._stop_training

stop_training(self, covs=None)

 

Stop the training phase.

Note: If the node is used on large datasets it may be wise to first learn the covariance matrices, and then tune the parameters until a suitable parameter set has been found (learning the covariance matrices is the slowest part in this case). This could be done for example in the following way (assuming the data is already white):

    >>> covs = [mdp.utils.DelayCovarianceMatrix(dt, dtype=dtype)
    ...         for dt in lags]
    >>> for block in data:
    ...     [covs[i].update(block) for i in range(len(lags))]

You can then initialize the ISFANode with the desired parameters, do a fake training with some random data to set the internal node structure and then call stop_training with the stored covariance matrices. For example:

    >>> isfa = ISFANode(lags, .....)
    >>> x = mdp.numx_rand.random((100, input_dim)).astype(dtype)
    >>> isfa.train(x)
    >>> isfa.stop_training(covs=covs)

This trick has been used in the paper to apply ISFA to surrogate matrices, i.e. covariance matrices that were not learnt on a real dataset.
Parameters:
  • covs - The covariance matrices.
Overrides: Node.stop_training

Instance Variable Details

convergence

The value of the convergence threshold.

filters

The ICA filters matrix (this is the transpose of the projection matrix after whitening).

white

The whitening node used for preprocessing.
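After training has finished, these attributes can be inspected directly; a minimal sketch (assuming node is a trained TDSEPNode, as in the examples above):

    >>> node.convergence        # final value of the convergence criterion
    >>> node.filters            # unmixing filters estimated in the whitened space
    >>> node.white              # the WhiteningNode used (if whitening was performed)
    >>> node.get_projmatrix()   # overall projection matrix, from ProjectMatrixMixin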