Principal component analysis (PCA)
This node has been automatically generated by wrapping the ``scikits.learn.decomposition.pca.PCA`` class
from the ``sklearn`` library. The wrapped instance can be accessed
through the ``scikits_alg`` attribute.
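For example (a hedged sketch: the node name ``PCAScikitsLearnNode`` and the train/execute workflow are assumed from MDP's wrapping conventions, not taken from this page):

>>> import numpy as np
>>> import mdp
>>> X = np.random.random((50, 5))
>>> node = mdp.nodes.PCAScikitsLearnNode(n_components=2)  # assumed node name
>>> node.train(X)          # delegates to the wrapped estimator's fit
>>> node.stop_training()
>>> Y = node.execute(X)    # delegates to transform; Y has shape (50, 2)
>>> wrapped = node.scikits_alg   # the underlying PCA instance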
Linear dimensionality reduction using Singular Value Decomposition of the
data and keeping only the most significant singular vectors to project the
data to a lower dimensional space.
This implementation uses the scipy.linalg implementation of the singular
value decomposition. It only works for dense arrays and does not scale to
large, high-dimensional data.
The time complexity of this implementation is O(n ** 3) assuming
n ~ n_samples ~ n_features.
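The underlying computation can be sketched as follows (a minimal illustration of SVD-based PCA, assuming centered data and a full SVD; this is not the wrapped implementation itself):

import numpy as np
from scipy import linalg

def pca_sketch(X, n_components):
    # Center the data; PCA operates on mean-removed observations.
    Xc = X - X.mean(axis=0)
    # Full SVD via scipy.linalg: the O(n ** 3) step when
    # n_samples ~ n_features ~ n.
    U, S, Vt = linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]        # most significant singular vectors
    projected = np.dot(Xc, components.T)  # data in the lower dimensional space
    return projected, components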
**Parameters**
n_components: int, None or string
    Number of components to keep.
    If n_components is not set, all components are kept:
    - n_components == min(n_samples, n_features)
    If n_components == 'mle', Minka's MLE is used to guess the dimension.
    If 0 < n_components < 1, select the number of components such that
    the fraction of explained variance is greater than n_components.
copy: bool
    If False, data passed to fit are overwritten.
whiten: bool, optional
    When True (False by default) the ``components_`` vectors are divided
    by the singular values and scaled by sqrt(n_samples) to ensure
    uncorrelated outputs with unit component-wise variances.
    Whitening removes some information from the transformed signal
    (the relative variance scales of the components) but can sometimes
    improve the predictive accuracy of downstream estimators by
    making their data respect some hard-wired assumptions.
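To illustrate the three n_components modes and the whiten flag (a hedged sketch built only from the constructor signature documented above):

>>> from scikits.learn.decomposition import PCA
>>> pca_all = PCA()                     # keep min(n_samples, n_features) components
>>> pca_mle = PCA(n_components='mle')   # Minka's MLE guesses the dimension
>>> pca_frac = PCA(n_components=0.95)   # keep components explaining > 95% variance
>>> pca_white = PCA(n_components=2, whiten=True)  # uncorrelated, unit-variance output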
**Attributes**
components_: array, [n_components, n_features]
    Components with maximum variance.
explained_variance_ratio_: array, [n_components]
    Percentage of variance explained by each of the selected components.
    If n_components is not set then all components are stored and the
    sum of explained variances is equal to 1.0.
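As a quick sanity check (a minimal sketch on toy data; PCA() with the default n_components keeps all components):

>>> import numpy as np
>>> from scikits.learn.decomposition import PCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> pca = PCA().fit(X)
>>> pca.components_.shape    # one row per retained component
(2, 2)
>>> print(round(pca.explained_variance_ratio_.sum(), 6))
1.0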
**Notes**
For n_components='mle', this class uses the method of Thomas P. Minka:
Automatic Choice of Dimensionality for PCA. NIPS 2000: 598-604
Due to implementation subtleties of the Singular Value Decomposition (SVD),
which is used in this implementation, running fit twice on the same matrix
can lead to principal components with signs flipped (change in direction).
For this reason, it is important to always use the same estimator object to
transform data in a consistent fashion.
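If reproducible signs matter across runs, one common convention (a sketch, not part of this class) is to flip each component so that its largest-magnitude coefficient is positive:

import numpy as np

def normalize_signs(components):
    # components: array of shape [n_components, n_features].
    # Flip each row so its largest-magnitude entry is positive, removing
    # the sign ambiguity inherent to SVD-based decompositions.
    max_idx = np.argmax(np.abs(components), axis=1)
    signs = np.sign(components[np.arange(components.shape[0]), max_idx])
    return components * signs[:, np.newaxis]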
**Examples**
>>> import numpy as np
>>> from scikits.learn.decomposition import PCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> pca = PCA(n_components=2)
>>> pca.fit(X)
PCA(copy=True, n_components=2, whiten=False)
>>> print(pca.explained_variance_ratio_)
[ 0.99244289 0.00755711]
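Continuing the example, the fitted object can then project data onto the retained components (transform follows the standard estimator API):

>>> X_reduced = pca.transform(X)
>>> X_reduced.shape
(6, 2)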
See also
ProbabilisticPCA
RandomizedPCA
Methods inherited from Cumulator and Node.

Instance Variables inherited from Node:
- _train_seq: List of tuples
- dtype: dtype
- input_dim: Input dimensions
- output_dim: Output dimensions
- supported_dtypes: Supported dtypes
fit(X)
Fit the model from data in X.
This node has been automatically generated by wrapping the
``scikits.learn.decomposition.pca.PCA`` class from the ``sklearn`` library.
The wrapped instance can be accessed through the ``scikits_alg`` attribute.
Returns the estimator instance itself (self).
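A minimal call sketch (fit returning the estimator itself, as the doctest above also shows, allows chaining):

>>> import numpy as np
>>> from scikits.learn.decomposition import PCA
>>> X = np.random.random((10, 3))
>>> pca = PCA(n_components=2).fit(X)   # fit returns self, so calls can chain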