Package mdp :: Package nodes :: Class GaussianHMMScikitsLearnNode

Class GaussianHMMScikitsLearnNode


Hidden Markov Model with Gaussian emissions.

This node has been automatically generated by wrapping the scikits.learn.hmm.GaussianHMM class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

Representation of a hidden Markov model probability distribution. This class allows for easy evaluation of, sampling from, and maximum-likelihood estimation of the parameters of an HMM.

Attributes

cvtype : string (read-only)
String describing the type of covariance parameters used by the model. Must be one of 'spherical', 'tied', 'diag', 'full'.
n_features : int (read-only)
Dimensionality of the Gaussian emissions.
n_states : int (read-only)
Number of states in the model.
transmat : array, shape (n_states, n_states)
Matrix of transition probabilities between states.
startprob : array, shape (n_states,)
Initial state occupation distribution.
means : array, shape (n_states, n_features)
Mean parameters for each state.
covars : array

Covariance parameters for each state. The shape depends on cvtype:

  • (n_states,) if 'spherical',
  • (n_features, n_features) if 'tied',
  • (n_states, n_features) if 'diag',
  • (n_states, n_features, n_features) if 'full'
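The mapping from cvtype to covariance shape can be captured in a small helper. This is purely illustrative and not part of the node's API:

```python
def covars_shape(cvtype, n_states, n_features):
    # Expected shape of the covars attribute for each covariance type.
    return {
        'spherical': (n_states,),                         # one variance per state
        'tied': (n_features, n_features),                 # one matrix shared by all states
        'diag': (n_states, n_features),                   # one diagonal per state
        'full': (n_states, n_features, n_features),       # one matrix per state
    }[cvtype]
```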

Methods

eval(X)
Compute the log likelihood of X under the HMM.
decode(X)
Find most likely state sequence for each point in X using the Viterbi algorithm.
rvs(n=1)
Generate n samples from the HMM.
init(X)
Initialize HMM parameters from X.
fit(X)
Estimate HMM parameters from X using the Baum-Welch algorithm.
predict(X)
Like decode, find most likely state sequence corresponding to X.
score(X)
Compute the log likelihood of X under the model.
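As a rough sketch of the computation behind eval and score, here is a minimal pure-Python forward algorithm in log space. All names (obs_logprobs, startprob, transmat) are illustrative, and the per-state observation log-densities are assumed to be precomputed; this is not the wrapped implementation:

```python
import math

def _logsumexp(xs):
    # Numerically stable log(sum(exp(x) for x in xs)).
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def forward_loglik(obs_logprobs, startprob, transmat):
    # obs_logprobs[t][i] = log p(obs_t | state i); startprob and transmat
    # hold plain probabilities.
    n_states = len(startprob)
    # alpha[i] = log p(obs_0..obs_t, state_t = i)
    alpha = [math.log(startprob[i]) + obs_logprobs[0][i] for i in range(n_states)]
    for t in range(1, len(obs_logprobs)):
        alpha = [
            _logsumexp([alpha[i] + math.log(transmat[i][j]) for i in range(n_states)])
            + obs_logprobs[t][j]
            for j in range(n_states)
        ]
    # Marginalize over the final state to get the sequence log-likelihood.
    return _logsumexp(alpha)
```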

Examples

>>> from scikits.learn.hmm import GaussianHMM
>>> GaussianHMM(n_states=2)
GaussianHMM(cvtype='diag', n_states=2, means_weight=0, startprob_prior=1.0,
      startprob=array([ 0.5,  0.5]),
      transmat=array([[ 0.5,  0.5],
       [ 0.5,  0.5]]),
      transmat_prior=1.0, means_prior=None, covars_weight=1,
      covars_prior=0.01)

See Also

GMM : Gaussian mixture model

Instance Methods
 
__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
Create a hidden Markov model with Gaussian emissions. This node has been automatically generated by wrapping the scikits.learn.hmm.GaussianHMM class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute. Initializes parameters such that every state has zero mean and identity covariance.
 
_execute(self, x)
 
_get_supported_dtypes(self)
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
 
_stop_training(self, **kwargs)
Concatenate the collected data in a single array.
 
execute(self, x)
Find most likely state sequence corresponding to obs. This node has been automatically generated by wrapping the scikits.learn.hmm.GaussianHMM class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.
 
stop_training(self, **kwargs)
Estimate model parameters. This node has been automatically generated by wrapping the scikits.learn.hmm.GaussianHMM class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute. An initialization step is performed before entering the EM algorithm. If you want to avoid this step, set the keyword argument init_params to the empty string ''. Likewise, if you would like just to do an initialization, call this method with n_iter=0.

Inherited from unreachable.newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from Cumulator
 
_train(self, *args)
Collect all input data in a list.
 
train(self, *args)
Collect all input data in a list.
    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_check_train_args(self, x, *args, **kwargs)
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_inverse(self, x)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_dtype(self, t)
 
_set_input_dim(self, n)
 
_set_output_dim(self, n)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of numpy.dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
inverse(self, y, *args, **kwargs)
Invert y.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
 
is_invertible()
Return True if the node can be inverted, False otherwise.
bool
is_trainable()
Return True if the node can be trained, False otherwise.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of tuples.
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
(Constructor)

 

Create a hidden Markov model with Gaussian emissions. This node has been automatically generated by wrapping the scikits.learn.hmm.GaussianHMM class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute. Initializes parameters such that every state has zero mean and identity covariance.

Parameters

n_states : int
Number of states.
cvtype : string
String describing the type of covariance parameters to use. Must be one of 'spherical', 'tied', 'diag', 'full'. Defaults to 'diag'.
Overrides: object.__init__

_execute(self, x)

 
Overrides: Node._execute

_get_supported_dtypes(self)

 
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
Overrides: Node._get_supported_dtypes

_stop_training(self, **kwargs)

 
Concatenate the collected data in a single array.
Overrides: Node._stop_training

execute(self, x)

 

Find most likely state sequence corresponding to obs. This node has been automatically generated by wrapping the scikits.learn.hmm.GaussianHMM class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

Parameters

obs : array_like, shape (n, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point.
maxrank : int
Maximum rank to evaluate for rank pruning. If not None, only consider the top maxrank states in the inner sum of the forward algorithm recursion. Defaults to None (no rank pruning). See The HTK Book for more details.
beamlogprob : float
Width of the beam-pruning beam in log-probability units. Defaults to -numpy.Inf (no beam pruning). See The HTK Book for more details.

Returns

states : array_like, shape (n,)
Index of the most likely states for each observation.
Overrides: Node.execute
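The Viterbi recursion behind decode and execute can be sketched in pure Python. This is an illustrative toy with hypothetical names, not the wrapped implementation, and it assumes per-state observation log-densities are already available:

```python
import math

def viterbi(obs_logprobs, startprob, transmat):
    # obs_logprobs[t][i] = log p(obs_t | state i); returns the most likely
    # state index for each observation, as decode/execute do.
    n_states = len(startprob)
    # delta[i] = log-probability of the best path ending in state i
    delta = [math.log(startprob[i]) + obs_logprobs[0][i] for i in range(n_states)]
    backptr = []
    for t in range(1, len(obs_logprobs)):
        new_delta, ptrs = [], []
        for j in range(n_states):
            best_i = max(range(n_states),
                         key=lambda i: delta[i] + math.log(transmat[i][j]))
            ptrs.append(best_i)
            new_delta.append(delta[best_i] + math.log(transmat[best_i][j])
                             + obs_logprobs[t][j])
        delta, backptr = new_delta, backptr + [ptrs]
    # Backtrack from the best final state.
    path = [max(range(n_states), key=lambda i: delta[i])]
    for ptrs in reversed(backptr):
        path.append(ptrs[path[-1]])
    return list(reversed(path))
```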

is_invertible()
Static Method

 
Return True if the node can be inverted, False otherwise.
Overrides: Node.is_invertible
(inherited documentation)

is_trainable()
Static Method

 
Return True if the node can be trained, False otherwise.
Returns: bool
A boolean indicating whether the node can be trained.
Overrides: Node.is_trainable

stop_training(self, **kwargs)

 

Estimate model parameters. This node has been automatically generated by wrapping the scikits.learn.hmm.GaussianHMM class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute. An initialization step is performed before entering the EM algorithm. If you want to avoid this step, set the keyword argument init_params to the empty string ''. Likewise, if you would like just to do an initialization, call this method with n_iter=0.

Parameters

obs : list
List of array-like observation sequences (shape (n_i, n_features)).
n_iter : int, optional
Number of iterations to perform.
thresh : float, optional
Convergence threshold.
params : string, optional
Controls which parameters are updated in the training process. Can contain any combination of 's' for startprob, 't' for transmat, 'm' for means, and 'c' for covars, etc. Defaults to all parameters.
init_params : string, optional
Controls which parameters are initialized prior to training. Can contain any combination of 's' for startprob, 't' for transmat, 'm' for means, and 'c' for covars, etc. Defaults to all parameters.
maxrank : int, optional
Maximum rank to evaluate for rank pruning. If not None, only consider the top maxrank states in the inner sum of the forward algorithm recursion. Defaults to None (no rank pruning). See "The HTK Book" for more details.
beamlogprob : float, optional
Width of the beam-pruning beam in log-probability units. Defaults to -numpy.Inf (no beam pruning). See "The HTK Book" for more details.

Notes

In general, logprob should be non-decreasing unless aggressive pruning is used. Decreasing logprob is generally a sign of overfitting (e.g. a covariance parameter getting too small). You can fix this by getting more training data, or decreasing covars_prior.
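The failure mode described above is easy to see with a one-dimensional Gaussian log-density (a hypothetical helper for illustration): as a state's variance collapses toward zero, the log-density at points near the mean grows without bound, which inflates logprob even as the model overfits.

```python
import math

def gauss_logpdf(x, mean, var):
    # Log-density of a 1-D Gaussian; diverges at the mean as var -> 0.
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)
```

A collapsing variance makes in-sample points look arbitrarily likely, e.g. `gauss_logpdf(0.0, 0.0, 1e-8)` far exceeds `gauss_logpdf(0.0, 0.0, 1.0)`; a nonzero covars_prior keeps the covariance away from this regime.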

Overrides: Node.stop_training