
Class SGDClassifierScikitsLearnNode



Linear model fitted by minimizing a regularized empirical loss with SGD.

This node has been automatically generated by wrapping the
``scikits.learn.linear_model.stochastic_gradient.SGDClassifier`` class
from the ``sklearn`` library. The wrapped instance can be accessed
through the ``scikits_alg`` attribute.

SGD stands for Stochastic Gradient Descent: the gradient of the loss is
estimated one sample at a time, and the model is updated along the way
with a decreasing strength schedule (aka learning rate).

The regularizer is a penalty added to the loss function that shrinks model
parameters towards the zero vector using either the squared Euclidean norm
(L2), the absolute norm (L1), or a combination of both (Elastic Net). If the
parameter update crosses the 0.0 value because of the regularizer, the
update is truncated to 0.0 to allow for learning sparse models and achieve
online feature selection (see the sketch below).
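
A minimal, illustrative sketch of that truncation rule for a single SGD
step with an L1 penalty; the weights, gradient, and step size are made-up
values, not the library's internals::

    import numpy as np

    w = np.array([0.05, -0.40, 0.30])       # current model weights
    grad = np.array([0.80, -0.10, 0.20])    # gradient of the data loss
    eta, alpha = 0.1, 0.01                  # step size, penalty strength

    # one SGD step: data-loss gradient plus L1 shrinkage towards zero
    w_new = w - eta * (grad + alpha * np.sign(w))
    # truncate any weight whose update crossed 0.0 -> sparse model
    w_new[np.sign(w_new) != np.sign(w)] = 0.0
    print(w_new)    # [ 0.    -0.389  0.279]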

This implementation works with data represented as dense numpy arrays of
floating point values for the features.

**Parameters**

loss : str, 'hinge' or 'log' or 'modified_huber'
    The loss function to be used. Defaults to 'hinge'. The hinge loss is
    a margin loss used by standard linear SVM models. The 'log' loss is
    the loss of logistic regression models and can be used for
    probability estimation in binary classifiers. 'modified_huber'
    is another smooth loss that brings tolerance to outliers (the three
    losses are sketched after this parameter list).

penalty : str, 'l2' or 'l1' or 'elasticnet'
    The penalty (aka regularization term) to be used. Defaults to 'l2',
    which is the standard regularizer for linear SVM models. 'l1' and
    'elasticnet' might bring sparsity to the model (feature selection)
    not achievable with 'l2'.

alpha : float
    Constant that multiplies the regularization term. Defaults to 0.0001.

rho : float
    The Elastic Net mixing parameter, with 0 < rho <= 1.
    Defaults to 0.85.

fit_intercept : bool
    Whether the intercept should be estimated or not. If False, the
    data is assumed to be already centered. Defaults to True.

n_iter : int, optional
    The number of passes over the training data (aka epochs).
    Defaults to 5.

shuffle : bool, optional
    Whether or not the training data should be shuffled after each epoch.
    Defaults to False.

seed : int, optional
    The seed of the pseudo random number generator to use when
    shuffling the data.

verbose : integer, optional
    The verbosity level.

n_jobs : integer, optional
    The number of CPUs to use to do the OVA (One Versus All, for
    multi-class problems) computation. -1 means 'all CPUs'. Defaults
    to 1.

learning_rate : str
    The learning rate schedule (sketched after this parameter list):

    - constant: eta = eta0
    - optimal: eta = 1.0/(t+t0) [default]
    - invscaling: eta = eta0 / pow(t, power_t)

eta0 : double
    The initial learning rate [default 0.01].

power_t : double
    The exponent for inverse scaling learning rate [default 0.25].

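For reference, a hedged sketch of the three losses as functions of the
margin ``m = y * f(x)``; the exact 'modified_huber' form used here is an
assumption taken from later sklearn documentation::

    import numpy as np

    def hinge(m):                  # margin loss of standard linear SVMs
        return np.maximum(0.0, 1.0 - m)

    def log_loss(m):               # logistic regression loss
        return np.log(1.0 + np.exp(-m))

    def modified_huber(m):         # smooth, outlier-tolerant loss
        return np.where(m >= -1.0, np.maximum(0.0, 1.0 - m) ** 2, -4.0 * m)

    m = np.linspace(-2.0, 2.0, 5)  # margins from badly wrong to confident
    print(hinge(m), log_loss(m), modified_huber(m))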

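The three learning-rate schedules listed above, written out directly;
``t`` counts the updates, and ``t0`` (which the library derives internally
for the 'optimal' schedule) is left as a free parameter for illustration::

    def eta_constant(t, eta0=0.01):
        return eta0

    def eta_optimal(t, t0=1.0):
        return 1.0 / (t + t0)

    def eta_invscaling(t, eta0=0.01, power_t=0.25):
        return eta0 / t ** power_t

    for t in (1, 10, 100):
        print(t, eta_constant(t), eta_optimal(t), eta_invscaling(t))
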
**Attributes**

`coef_` : array, shape = [1, n_features] if n_classes == 2 else [n_classes, n_features]
    Weights assigned to the features.

`intercept_` : array, shape = [1] if n_classes == 2 else [n_classes]
    Constants in decision function.

**Examples**

>>> import numpy as np
>>> from scikits.learn import linear_model
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> Y = np.array([1, 1, 2, 2])
>>> clf = linear_model.SGDClassifier()
>>> clf.fit(X, Y)
SGDClassifier(loss='hinge', n_jobs=1, shuffle=False, verbose=0, n_iter=5,
       learning_rate='optimal', fit_intercept=True, penalty='l2',
       power_t=0.5, seed=0, eta0=0.0, rho=1.0, alpha=0.0001)
>>> print(clf.predict([[-0.8, -1]]))
[ 1.]
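
The same toy problem can be run through this MDP wrapper node. This is a
hedged sketch: it assumes constructor keyword arguments are forwarded to
the underlying SGDClassifier and relies only on the train / stop_training
/ label interface documented below::

    import numpy as np
    import mdp

    x = np.array([[-1., -1.], [-2., -1.], [1., 1.], [2., 1.]])
    node = mdp.nodes.SGDClassifierScikitsLearnNode()
    node.train(x, [1, 1, 2, 2])   # cumulates the data (see train below)
    node.stop_training()          # fits the wrapped SGDClassifier
    print(node.label(np.array([[-0.8, -1.]])))  # expected: [ 1.]
    print(node.scikits_alg.coef_.shape)         # (1, 2): binary problem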

See also

LinearSVC, LogisticRegression

Instance Methods
 
__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
Linear model fitted by minimizing a regularized empirical loss with SGD.
_get_supported_dtypes(self) -> list
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
 
_label(self, x)
 
_stop_training(self, **kwargs)
Transform the data and labels lists to array objects and reshape them.
 
label(self, x)
Predict using the linear model. This node has been automatically generated by wrapping the scikits.learn.linear_model.stochastic_gradient.SGDClassifier class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.
 
stop_training(self, **kwargs)
Fit the linear model with Stochastic Gradient Descent. This node has been automatically generated by wrapping the scikits.learn.linear_model.stochastic_gradient.SGDClassifier class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

Inherited from unreachable.newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from ClassifierCumulator
 
_check_train_args(self, x, labels)
 
_train(self, x, labels)
Cumulate all input data in a one-dimensional list.
 
train(self, x, labels)
Cumulate all input data in a one-dimensional list.
    Inherited from ClassifierNode
 
_execute(self, x)
 
_prob(self, x, *args, **kargs)
 
execute(self, x)
Process the data contained in x.
 
prob(self, x, *args, **kwargs)
This function does classification or regression on a test vector T, given a model with probability information. This node has been automatically generated by wrapping the scikits.learn.svm.classes.SVC class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.
 
rank(self, x, threshold=None)
Return a list of all labels, ordered according to prob(x) (e.g., [[3 1 2], [2 1 3], ...]).
    Inherited from PreserveDimNode
 
_set_input_dim(self, n)
 
_set_output_dim(self, n)
    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_inverse(self, x)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_dtype(self, t)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of numpy.dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
inverse(self, y, *args, **kwargs)
Invert y.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
 
is_invertible()
Return True if the node can be inverted, False otherwise.
is_trainable() -> bool
Return True if the node can be trained, False otherwise.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of tuples.
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
(Constructor)

 

Linear model fitted by minimizing a regularized empirical loss with SGD.
See the class description above for the full parameter list, attributes,
and usage example.

Overrides: object.__init__

_get_supported_dtypes(self)

 
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
Returns: list
The list of dtypes supported by this node.
Overrides: Node._get_supported_dtypes

_label(self, x)

 
Overrides: ClassifierNode._label

_stop_training(self, **kwargs)

 
Transform the data and labels lists to array objects and reshape them.

Overrides: Node._stop_training

is_invertible()
Static Method

 
Return True if the node can be inverted, False otherwise.
Overrides: Node.is_invertible
(inherited documentation)

is_trainable()
Static Method

 
Return True if the node can be trained, False otherwise.
Returns: bool
A boolean indicating whether the node can be trained.
Overrides: Node.is_trainable

label(self, x)

 

Predict using the linear model.

This node has been automatically generated by wrapping the
``scikits.learn.linear_model.stochastic_gradient.SGDClassifier`` class
from the ``sklearn`` library. The wrapped instance can be accessed
through the ``scikits_alg`` attribute.

**Parameters**

X : array or scipy.sparse matrix of shape [n_samples, n_features]
    Whether a numpy.array or scipy.sparse matrix is accepted depends
    on the actual implementation.

**Returns**

array, shape = [n_samples]
    Array containing the predicted class labels.
Overrides: ClassifierNode.label

stop_training(self, **kwargs)

 

Fit the linear model with Stochastic Gradient Descent.

This node has been automatically generated by wrapping the
``scikits.learn.linear_model.stochastic_gradient.SGDClassifier`` class
from the ``sklearn`` library. The wrapped instance can be accessed
through the ``scikits_alg`` attribute.

**Parameters**

X : numpy array of shape [n_samples, n_features]
    Training data.

y : numpy array of shape [n_samples]
    Target values.

coef_init : array, shape = [n_classes, n_features]
    The initial coefficients to warm-start the optimization.

intercept_init : array, shape = [n_classes]
    The initial intercept to warm-start the optimization.

class_weight : dict, {class_label : weight} or "auto"
    Weights associated with classes. If not given, all classes are
    supposed to have weight one. The "auto" mode uses the values of y to
    automatically adjust weights inversely proportional to class
    frequencies (a sketch of this reweighting follows below).

sample_weight : array-like, shape = [n_samples], optional
    Weights applied to individual samples (1. for unweighted).

**Returns**

self : returns an instance of self.

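Under the assumption that "auto" means plain inverse class frequency (the
exact formula in this library version is not restated here), the
reweighting is roughly::

    import numpy as np

    y = np.array([1, 1, 1, 2])
    classes = np.unique(y)
    counts = np.array([(y == c).sum() for c in classes], dtype=float)
    weights = dict(zip(classes, len(y) / (len(classes) * counts)))
    print(weights)  # rarer classes get larger weights, e.g. {1: 0.67, 2: 2.0}
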
Overrides: Node.stop_training