Principal component analysis (PCA) using randomized SVD
This node has been automatically generated by wrapping the ``scikits.learn.decomposition.pca.RandomizedPCA`` class
from the ``sklearn`` library. The wrapped instance can be accessed
through the ``scikits_alg`` attribute.
Linear dimensionality reduction using approximated Singular Value
Decomposition of the data and keeping only the most significant
singular vectors to project the data to a lower dimensional space.
This implementation uses randomized SVD and can handle both
scipy.sparse matrices and dense numpy arrays as input.
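The scipy.sparse code path belongs to the old scikits.learn API; in
current scikit-learn, RandomizedPCA has been folded into
PCA(svd_solver='randomized'), and the usual randomized-SVD route for
sparse input is TruncatedSVD. A minimal sketch of the sparse case under
that assumption:

import scipy.sparse as sp
from sklearn.decomposition import TruncatedSVD  # randomized SVD, sparse-friendly

# A random sparse matrix standing in for real data.
X = sp.random(100, 50, density=0.05, random_state=0, format='csr')

# TruncatedSVD accepts scipy.sparse input directly (note: unlike PCA,
# it does not center the data before the decomposition).
svd = TruncatedSVD(n_components=5, n_iter=3, random_state=0)
Y = svd.fit_transform(X)
print(Y.shape)  # (100, 5)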
**Parameters**
n_components: int
Maximum number of components to keep: default is 50.
copy: bool
If False, data passed to fit are overwritten.
iterated_power: int, optional
Number of iterations for the power method. 3 by default.
whiten: bool, optional
When True (False by default) the ``components_`` vectors are divided
by the singular values to ensure uncorrelated outputs with unit
component-wise variances.
Whitening will remove some information from the transformed signal
(the relative variance scales of the components) but can sometimes
improve the predictive accuracy of the downstream estimators by
making their data respect some hard-wired assumptions.
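The unit-variance claim is easy to verify directly. A minimal sketch,
using the modern PCA(svd_solver='randomized', whiten=True) equivalent
of this class:

import numpy as np
from sklearn.decomposition import PCA  # modern home of RandomizedPCA

rng = np.random.RandomState(0)
X = rng.randn(200, 5) @ rng.randn(5, 5)  # correlated features

pca = PCA(n_components=3, svd_solver='randomized', whiten=True,
          random_state=0)
Y = pca.fit_transform(X)

# Whitened outputs are uncorrelated with unit component-wise variance.
print(Y.std(axis=0, ddof=1))  # approximately [1. 1. 1.]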
**Attributes**
components_: array, [n_components, n_features]
Components with maximum variance.
explained_variance_ratio_: array, [n_components]
Percentage of variance explained by each of the selected components.
If n_components is not set then all components are stored and the sum
of explained variances is equal to 1.0.
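Both attributes can be inspected on a fitted estimator; a small sketch
using the same data as the example below, on the modern PCA equivalent:

import numpy as np
from sklearn.decomposition import PCA  # modern equivalent of RandomizedPCA

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
pca = PCA(n_components=2, svd_solver='randomized').fit(X)

print(pca.components_.shape)                # (2, 2): [n_components, n_features]
print(pca.explained_variance_ratio_.sum())  # ~1.0 when all components are kept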
**Examples**
>>> import numpy as np
>>> from scikits.learn.decomposition import RandomizedPCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> pca = RandomizedPCA(n_components=2)
>>> pca.fit(X)
RandomizedPCA(copy=True, n_components=2, iterated_power=3, whiten=False)
>>> print(pca.explained_variance_ratio_)
[ 0.99244289  0.00755711]
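Because this class is exposed as an MDP node, the same computation can
also be run through the node interface. A sketch, assuming the
auto-generated node follows MDP's usual naming convention (the name
RandomizedPCAScikitsLearnNode is illustrative and may differ):

import numpy as np
import mdp

X = np.array([[-1., -1.], [-2., -1.], [-3., -2.],
              [1., 1.], [2., 1.], [3., 2.]])

# Hypothetical node name; MDP wraps estimators as <Class>ScikitsLearnNode.
node = mdp.nodes.RandomizedPCAScikitsLearnNode(n_components=2)
node.train(X)
node.stop_training()  # triggers the wrapped fit

# The underlying scikits.learn estimator is reachable via scikits_alg.
print(node.scikits_alg.explained_variance_ratio_)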
**See also**
PCA
ProbabilisticPCA
**Notes**
References:
* Finding structure with randomness: Stochastic algorithms for
constructing approximate matrix decompositions. Halko et al., 2009
(arXiv: 0909.4061)
* A randomized algorithm for the decomposition of matrices
Per-Gunnar Martinsson, Vladimir Rokhlin and Mark Tygert
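The algorithm in these references is compact enough to sketch in plain
numpy. A minimal, illustrative implementation of randomized SVD with
power iterations (the iterated_power parameter above plays the same
role as n_iter here); a production version would add oversampling and
re-orthonormalization between iterations:

import numpy as np

def randomized_svd(A, k, n_iter=3, seed=0):
    """Approximate rank-k SVD via random projection (Halko et al., 2009)."""
    rng = np.random.RandomState(seed)
    Omega = rng.normal(size=(A.shape[1], k))  # random test matrix
    Y = A @ Omega                             # sample the range of A
    for _ in range(n_iter):                   # power iterations sharpen
        Y = A @ (A.T @ Y)                     # the singular spectrum
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis for range(A)
    B = Q.T @ A                               # small k x n_features problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub, s, Vt

# The leading singular values closely match an exact SVD.
A = np.random.RandomState(1).randn(100, 20)
_, s, _ = randomized_svd(A, k=5)
print(np.round(s, 3))
print(np.round(np.linalg.svd(A, compute_uv=False)[:5], 3))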
**Instance Variables**
Inherited from Node:
_train_seq: List of tuples
dtype: dtype
input_dim: Input dimensions
output_dim: Output dimensions
supported_dtypes: Supported dtypes
**Methods**
fit(X)
Fit the model to the data X. (Delegates to the wrapped
scikits.learn.decomposition.pca.RandomizedPCA instance, which is
accessible through the scikits_alg attribute.)