__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
(Constructor)
Gaussian Mixture Model
This node has been automatically generated by wrapping the scikits.learn.mixture.GMM class
from the sklearn library. The wrapped instance can be accessed
through the scikits_alg attribute.
Representation of a Gaussian mixture model probability distribution.
This class allows for easy evaluation of, sampling from, and
maximum-likelihood estimation of the parameters of a GMM distribution.
Initializes parameters such that every mixture component has zero
mean and identity covariance.
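To make the "sampling from" part concrete, drawing from a mixture amounts to picking a component according to the mixing weights and then sampling from that component's Gaussian. The sketch below is an illustrative stand-in for the diagonal-covariance case, not the library's own implementation; the helper name gmm_rvs_diag is ours:

```python
import numpy as np

def gmm_rvs_diag(n, weights, means, covars, rng=None):
    """Draw n samples from a diagonal-covariance Gaussian mixture.

    weights: (n_states,); means, covars: (n_states, n_features).
    First pick a component per sample using the mixing weights,
    then add Gaussian noise scaled by that component's std dev.
    """
    rng = np.random.default_rng(rng)
    comps = rng.choice(len(weights), size=n, p=weights)
    noise = rng.standard_normal((n, means.shape[1]))
    return means[comps] + noise * np.sqrt(covars[comps])
```

For example, with weights (0.3, 0.7) and means 0 and 10, the sample mean converges to the weighted mean 7.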
Parameters

- n_states : int, optional
  Number of mixture components. Defaults to 1.
- cvtype : string (read-only), optional
  String describing the type of covariance parameters to use.
  Must be one of 'spherical', 'tied', 'diag', 'full'.
  Defaults to 'diag'.
Attributes

- cvtype : string (read-only)
  String describing the type of covariance parameters used by
  the GMM. Must be one of 'spherical', 'tied', 'diag', 'full'.
- n_features : int
  Dimensionality of the Gaussians.
- n_states : int (read-only)
  Number of mixture components.
- weights : array, shape (n_states,)
  Mixing weights for each mixture component.
- means : array, shape (n_states, n_features)
  Mean parameters for each mixture component.
- covars : array
  Covariance parameters for each mixture component. The shape
  depends on cvtype:
  - (n_states,) if 'spherical',
  - (n_features, n_features) if 'tied',
  - (n_states, n_features) if 'diag',
  - (n_states, n_features, n_features) if 'full'.
- converged_ : bool
  True when convergence was reached in fit(), False otherwise.
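The four covariance layouts above can be made concrete with placeholder NumPy arrays (the unit/identity values below are arbitrary stand-ins, not fitted parameters):

```python
import numpy as np

n_states, n_features = 2, 3

# 'spherical': one scalar variance per component
spherical = np.ones(n_states)                        # shape (n_states,)
# 'tied': a single covariance matrix shared by all components
tied = np.eye(n_features)                            # shape (n_features, n_features)
# 'diag': one variance per component per feature
diag = np.ones((n_states, n_features))               # shape (n_states, n_features)
# 'full': one full covariance matrix per component
full = np.stack([np.eye(n_features)] * n_states)     # shape (n_states, n_features, n_features)
```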
Methods

- decode(X)
  Find the most likely mixture components for each point in X.
- eval(X)
  Compute the log likelihood of X under the model and the
  posterior distribution over mixture components.
- fit(X)
  Estimate model parameters from X using the EM algorithm.
- predict(X)
  Like decode, find the most likely mixture components for each
  observation in X.
- rvs(n=1)
  Generate n samples from the model.
- score(X)
  Compute the log likelihood of X under the model.
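To make the score()/eval() semantics concrete, here is a hedged NumPy sketch of the per-sample log likelihood for the 'diag' covariance case, written directly from the standard mixture formula rather than from the library internals (the helper name gmm_logpdf_diag is ours):

```python
import numpy as np

def gmm_logpdf_diag(X, weights, means, covars):
    """Per-sample log likelihood log sum_k w_k N(x | mu_k, diag(s_k)).

    X: (n_samples, n_features); weights: (n_states,);
    means, covars: (n_states, n_features).
    """
    n_states, n_features = means.shape
    # Gaussian log density for every (sample, component) pair
    diff = X[:, None, :] - means[None, :, :]          # (n_samples, n_states, n_features)
    log_norm = -0.5 * (n_features * np.log(2 * np.pi)
                       + np.log(covars).sum(axis=1))  # (n_states,)
    log_prob = log_norm[None, :] - 0.5 * (diff ** 2 / covars[None, :, :]).sum(axis=2)
    # mix the components in log space via log-sum-exp for stability
    weighted = log_prob + np.log(weights)[None, :]
    m = weighted.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(weighted - m).sum(axis=1, keepdims=True))).ravel()
```

For a single standard-normal component this reduces to -0.5*log(2*pi) at x = 0, which is a quick sanity check on the formula.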
Examples
>>> import numpy as np
>>> from scikits.learn import mixture
>>> g = mixture.GMM(n_states=2)
>>> np.random.seed(0)
>>> obs = np.concatenate((np.random.randn(100, 1),
... 10 + np.random.randn(300, 1)))
>>> g.fit(obs)
GMM(cvtype='diag', n_states=2)
>>> g.weights
array([ 0.25, 0.75])
>>> g.means
array([[ 0.05980802],
[ 9.94199467]])
>>> g.covars
[array([[ 1.01682662]]), array([[ 0.96080513]])]
>>> np.round(g.weights, 2)
array([ 0.25, 0.75])
>>> np.round(g.means, 2)
array([[ 0.06],
[ 9.94]])
>>> np.round(g.covars, 2)
...
array([[[ 1.02]],
[[ 0.96]]])
>>> g.predict([[0], [2], [9], [10]])
array([0, 0, 1, 1])
>>> np.round(g.score([[0], [2], [9], [10]]), 2)
array([-2.32, -4.16, -1.65, -1.19])
>>> g.fit(20 * [[0]] + 20 * [[10]])
GMM(cvtype='diag', n_states=2)
>>> np.round(g.weights, 2)
array([ 0.5, 0.5])
- Overrides: object.__init__