Package mdp :: Package nodes :: Class PCAScikitsLearnNode

Class PCAScikitsLearnNode



Principal component analysis (PCA)
This node has been automatically generated by wrapping the ``scikits.learn.decomposition.pca.PCA`` class
from the ``sklearn`` library.  The wrapped instance can be accessed
through the ``scikits_alg`` attribute.
Linear dimensionality reduction using Singular Value Decomposition of the
data and keeping only the most significant singular vectors to project the
data to a lower dimensional space.

This implementation uses the scipy.linalg implementation of the singular
value decomposition. It only works for dense arrays and is not scalable to
large dimensional data.

The time complexity of this implementation is O(n ** 3) assuming
n ~ n_samples ~ n_features.

**Parameters**

n_components: int, None or string
    Number of components to keep.
    If n_components is not set, all components are kept:

        - n_components == min(n_samples, n_features)

    If n_components == 'mle', Minka's MLE is used to guess the dimension.

    If 0 < n_components < 1, select the number of components such that
    the explained variance ratio is greater than n_components.

copy: bool
    If False, data passed to fit are overwritten

whiten: bool, optional
    When True (False by default) the ``components_`` vectors are divided
    by n_samples times singular values to ensure uncorrelated outputs
    with unit component-wise variances.

    Whitening will remove some information from the transformed signal
    (the relative variance scales of the components) but can sometimes
    improve the predictive accuracy of the downstream estimators by
    making their data respect some hard-wired assumptions.
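
The constructor (see ``__init__`` below) accepts these keyword arguments and, as is usual for the auto-generated wrapper nodes, forwards them to the wrapped scikits.learn estimator. A minimal sketch, assuming this forwarding; the parameter values are illustrative only:

>>> import mdp
>>> # keep all components (the default)
>>> node_all = mdp.nodes.PCAScikitsLearnNode()
>>> # guess the dimensionality with Minka's MLE
>>> node_mle = mdp.nodes.PCAScikitsLearnNode(n_components='mle')
>>> # keep enough components to explain 95% of the variance, whitened outputs
>>> node_95 = mdp.nodes.PCAScikitsLearnNode(n_components=0.95, whiten=True)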

**Attributes**

components_: array, [n_components, n_features]
    Components with maximum variance.

explained_variance_ratio_: array, [n_components]
    Percentage of variance explained by each of the selected components.
    If n_components is not set then all components are stored and the
    sum of explained variances is equal to 1.0.

**Notes**

For n_components='mle', this class uses the method of Thomas P. Minka:

Automatic Choice of Dimensionality for PCA. NIPS 2000: 598-604

Due to implementation subtleties of the Singular Value Decomposition (SVD),
which is used in this implementation, running fit twice on the same matrix
can lead to principal components with signs flipped (change in direction).
For this reason, it is important to always use the same estimator object to
transform data in a consistent fashion.

**Examples**

>>> import numpy as np
>>> from scikits.learn.decomposition import PCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> pca = PCA(n_components=2)
>>> pca.fit(X)
PCA(copy=True, n_components=2, whiten=False)
>>> print pca.explained_variance_ratio_
[ 0.99244289  0.00755711]

See also

ProbabilisticPCA
RandomizedPCA
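
Within MDP the node is driven through the usual train/stop_training/execute interface rather than fit/transform. A minimal sketch of that workflow; the data array ``x`` is hypothetical:

>>> import numpy as np
>>> import mdp
>>> x = np.random.random((100, 5))                        # 100 samples, 5 features
>>> node = mdp.nodes.PCAScikitsLearnNode(n_components=2)
>>> node.train(x)                                         # collect the training data
>>> node.stop_training()                                  # fit the wrapped PCA
>>> y = node.execute(x)                                   # y has shape (100, 2)
>>> # node(x) is equivalent to node.execute(x); reuse the same fitted node
>>> # for any further data so the component signs stay consistent (see Notes).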

Instance Methods
 
__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
Principal component analysis (PCA) This node has been automatically generated by wrapping the ``scikits.learn.decomposition.pca.PCA`` class from the ``sklearn`` library.
 
_execute(self, x)
 
_get_supported_dtypes(self)
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
 
_stop_training(self, **kwargs)
Concatenate the collected data in a single array.
 
execute(self, x)
This node has been automatically generated by wrapping the scikits.learn.decomposition.pca.PCA class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.
 
stop_training(self, **kwargs)
Fit the model from data in X. This node has been automatically generated by wrapping the scikits.learn.decomposition.pca.PCA class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

Inherited from unreachable.newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from Cumulator
 
_train(self, *args)
Collect all input data in a list.
 
train(self, *args)
Collect all input data in a list.
    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_check_train_args(self, x, *args, **kwargs)
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_inverse(self, x)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_dtype(self, t)
 
_set_input_dim(self, n)
 
_set_output_dim(self, n)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of numpy.dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
inverse(self, y, *args, **kwargs)
Invert y.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
 
is_invertible()
Return True if the node can be inverted, False otherwise.
 
is_trainable()
Return True if the node can be trained, False otherwise.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of tuples:
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
(Constructor)

 

Principal component analysis (PCA)
This node has been automatically generated by wrapping the ``scikits.learn.decomposition.pca.PCA`` class
from the ``sklearn`` library.  The wrapped instance can be accessed
through the ``scikits_alg`` attribute.
Linear dimensionality reduction using Singular Value Decomposition of the
data and keeping only the most significant singular vectors to project the
data to a lower dimensional space.

This implementation uses the scipy.linalg implementation of the singular
value decomposition. It only works for dense arrays and is not scalable to
large dimensional data.

The time complexity of this implementation is O(n ** 3) assuming
n ~ n_samples ~ n_features.

**Parameters**

n_components: int, None or string
    Number of components to keep.
    If n_components is not set, all components are kept:

        - n_components == min(n_samples, n_features)

    If n_components == 'mle', Minka's MLE is used to guess the dimension.

    If 0 < n_components < 1, select the number of components such that
    the explained variance ratio is greater than n_components.

copy: bool
    If False, data passed to fit are overwritten

whiten: bool, optional
    When True (False by default) the ``components_`` vectors are divided
    by n_samples times singular values to ensure uncorrelated outputs
    with unit component-wise variances.

    Whitening will remove some information from the transformed signal
    (the relative variance scales of the components) but can sometimes
    improve the predictive accuracy of the downstream estimators by
    making their data respect some hard-wired assumptions.

**Attributes**

components_: array, [n_components, n_features]
    Components with maximum variance.

explained_variance_ratio_: array, [n_components]
    Percentage of variance explained by each of the selected components.
    If n_components is not set then all components are stored and the
    sum of explained variances is equal to 1.0.

**Notes**

For n_components='mle', this class uses the method of Thomas P. Minka:

Automatic Choice of Dimensionality for PCA. NIPS 2000: 598-604

Due to implementation subtleties of the Singular Value Decomposition (SVD),
which is used in this implementation, running fit twice on the same matrix
can lead to principal components with signs flipped (change in direction).
For this reason, it is important to always use the same estimator object to
transform data in a consistent fashion.

**Examples**

>>> import numpy as np
>>> from scikits.learn.decomposition import PCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> pca = PCA(n_components=2)
>>> pca.fit(X)
PCA(copy=True, n_components=2, whiten=False)
>>> print pca.explained_variance_ratio_
[ 0.99244289  0.00755711]

See also

ProbabilisticPCA
RandomizedPCA

Overrides: object.__init__

_execute(self, x)

 
Overrides: Node._execute

_get_supported_dtypes(self)

 
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
Returns: list
The list of dtypes supported by this node.
Overrides: Node._get_supported_dtypes

_stop_training(self, **kwargs)

 
Concatenate the collected data in a single array.
Overrides: Node._stop_training

execute(self, x)

 
This node has been automatically generated by wrapping the scikits.learn.decomposition.pca.PCA class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.
Overrides: Node.execute

is_invertible()
Static Method

 
Return True if the node can be inverted, False otherwise.
Overrides: Node.is_invertible
(inherited documentation)

is_trainable()
Static Method

 
Return True if the node can be trained, False otherwise.
Returns: bool
A boolean indication whether the node can be trained.
Overrides: Node.is_trainable

stop_training(self, **kwargs)

 

Fit the model from data in X. This node has been automatically generated by wrapping the scikits.learn.decomposition.pca.PCA class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

**Parameters**

X: array-like, shape (n_samples, n_features)
    Training vector, where n_samples is the number of samples and n_features is the number of features.

**Returns**

self: object
    Returns the instance itself.
Overrides: Node.stop_training
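
After ``stop_training`` has fitted the wrapped estimator, the fitted attributes listed above (``components_``, ``explained_variance_ratio_``) can be read from the ``scikits_alg`` instance. A minimal sketch, reusing the toy data from the class example:

>>> import numpy as np
>>> import mdp
>>> x = np.array([[-1., -1.], [-2., -1.], [-3., -2.],
...               [1., 1.], [2., 1.], [3., 2.]])
>>> node = mdp.nodes.PCAScikitsLearnNode(n_components=2)
>>> node.train(x)
>>> node.stop_training()                           # fits the wrapped PCA on x
>>> node.scikits_alg.explained_variance_ratio_     # attribute of the wrapped instance
array([ 0.99244289,  0.00755711])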