
Class SGDRegressorScikitsLearnNode


Linear model fitted by minimizing a regularized empirical loss with SGD. This node has been automatically generated by wrapping the scikits.learn.linear_model.sparse.stochastic_gradient.SGDRegressor class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute. SGD stands for Stochastic Gradient Descent: the gradient of the loss is estimated one sample at a time, and the model is updated along the way with a decreasing strength schedule (aka learning rate).

The regularizer is a penalty added to the loss function that shrinks model parameters towards the zero vector using either the squared Euclidean norm L2, the absolute norm L1, or a combination of both (Elastic Net). If the parameter update crosses the 0.0 value because of the regularizer, the update is truncated to 0.0 to allow for learning sparse models and to achieve online feature selection.
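The truncation rule can be made concrete with a minimal sketch of a single SGD step. This is illustrative only: all names are hypothetical, and the actual implementation truncates only crossings caused by the penalty term, which this sketch approximates by truncating any sign flip:

>>> import numpy as np
>>> def truncated_sgd_step(w, loss_grad, eta, alpha, rho):
...     # Elastic Net penalty gradient: rho weighs the L1 part, (1 - rho) the L2 part.
...     penalty_grad = alpha * (rho * np.sign(w) + (1.0 - rho) * w)
...     w_new = w - eta * (loss_grad + penalty_grad)
...     # Truncate weights that crossed 0.0 to exactly 0.0, which is what
...     # yields sparse models and online feature selection.
...     w_new[(w != 0) & (np.sign(w_new) != np.sign(w))] = 0.0
...     return w_new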

This implementation works with data represented as dense numpy arrays of floating point values for the features.
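Because this is an MDP node, the scikits.learn fit/predict pair maps onto MDP's train/stop_training/execute cycle. A hedged usage sketch, assuming the usual MDP convention for supervised wrapper nodes (inputs and targets as 2-D arrays; check the exact train signature of your MDP version):

>>> import numpy as np
>>> import mdp
>>> np.random.seed(0)
>>> x = np.random.randn(10, 5)          # 10 samples, 5 features
>>> y = np.random.randn(10, 1)          # targets as a column array
>>> node = mdp.nodes.SGDRegressorScikitsLearnNode(n_iter=5, penalty='l2')
>>> node.train(x, y)                    # Cumulator: only collects the data
>>> node.stop_training()                # concatenates and fits the wrapped SGDRegressor
>>> y_pred = node.execute(x)            # predict with the fitted linear model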

Parameters

loss : str, 'squared_loss' or 'huber'
The loss function to be used. Defaults to 'squared_loss', which refers to the ordinary least squares fit. 'huber' is an epsilon-insensitive loss function for robust regression.
penalty : str, 'l2' or 'l1' or 'elasticnet'
The penalty (aka regularization term) to be used. Defaults to 'l2', which is the standard regularizer for linear SVM models. 'l1' and 'elasticnet' might bring sparsity to the model (feature selection) not achievable with 'l2'.
alpha : float
Constant that multiplies the regularization term. Defaults to 0.0001.
rho : float
The Elastic Net mixing parameter, with 0 < rho <= 1. Defaults to 0.85.
fit_intercept: bool
Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. Defaults to True.
n_iter: int
The number of passes over the training data (aka epochs). Defaults to 5.
shuffle: bool
Whether or not the training data should be shuffled after each epoch. Defaults to False.
seed: int, optional
The seed of the pseudo random number generator to use when shuffling the data.
verbose: integer, optional
The verbosity level
p : float
Epsilon in the epsilon-insensitive Huber loss function; only used if loss=='huber'.
learning_rate : string, optional

The learning rate (see the numeric sketch after this parameter list):

  • constant: eta = eta0
  • optimal: eta = 1.0/(t+t0)
  • invscaling: eta = eta0 / pow(t, power_t) [default]
eta0 : double, optional
The initial learning rate [default 0.01].
power_t : double, optional
The exponent for inverse scaling learning rate [default 0.25].
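These schedules are easy to evaluate by hand. A small numeric sketch (the t0 used by 'optimal' is an internal heuristic of the library, so its value here is purely illustrative):

>>> eta0, power_t, t0, t = 0.01, 0.25, 1.0, 100
>>> eta_constant = eta0                      # 'constant'
>>> eta_optimal = 1.0 / (t + t0)             # 'optimal'
>>> eta_invscaling = eta0 / t ** power_t     # 'invscaling' (the default)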

Attributes

coef_ : array, shape = [n_features]
Weights assigned to the features.
intercept_ : array, shape = [1]
The intercept term.

Examples

>>> import numpy as np
>>> from scikits.learn import linear_model
>>> n_samples, n_features = 10, 5
>>> np.random.seed(0)
>>> y = np.random.randn(n_samples)
>>> X = np.random.randn(n_samples, n_features)
>>> clf = linear_model.sparse.SGDRegressor()
>>> clf.fit(X, y)
SGDRegressor(loss='squared_loss', power_t=0.25, shuffle=False, verbose=0,
       n_iter=5, learning_rate='invscaling', fit_intercept=True,
       penalty='l2', p=0.1, seed=0, eta0=0.01, rho=1.0, alpha=0.0001)

See also

RidgeRegression, ElasticNet, Lasso, SVR

Instance Methods
 
__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
Linear model fitted by minimizing a regularized empirical loss with SGD. This node has been automatically generated by wrapping the scikits.learn.linear_model.sparse.stochastic_gradient.SGDRegressor class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute. SGD stands for Stochastic Gradient Descent: the gradient of the loss is estimated one sample at a time, and the model is updated along the way with a decreasing strength schedule (aka learning rate).
 
_execute(self, x)
 
_get_supported_dtypes(self)
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
 
_stop_training(self, **kwargs)
Concatenate the collected data in a single array.
 
execute(self, x)
Predict using the linear model. This node has been automatically generated by wrapping the scikits.learn.linear_model.sparse.stochastic_gradient.SGDRegressor class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.
 
stop_training(self, **kwargs)
Fit linear model with Stochastic Gradient Descent. This node has been automatically generated by wrapping the scikits.learn.linear_model.sparse.stochastic_gradient.SGDRegressor class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

Inherited from unreachable.newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from Cumulator
 
_train(self, *args)
Collect all input data in a list.
 
train(self, *args)
Collect all input data in a list.
    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_check_train_args(self, x, *args, **kwargs)
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_inverse(self, x)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_dtype(self, t)
 
_set_input_dim(self, n)
 
_set_output_dim(self, n)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of numpy.dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
inverse(self, y, *args, **kwargs)
Invert y.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
 
is_invertible()
Return True if the node can be inverted, False otherwise.
bool
is_trainable()
Return True if the node can be trained, False otherwise.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of tuples:
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
(Constructor)

 

Linear model fitted by minimizing a regularized empirical loss with SGD. This node has been automatically generated by wrapping the scikits.learn.linear_model.sparse.stochastic_gradient.SGDRegressor class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute. SGD stands for Stochastic Gradient Descent: the gradient of the loss is estimated one sample at a time, and the model is updated along the way with a decreasing strength schedule (aka learning rate).

The regularizer is a penalty added to the loss function that shrinks model parameters towards the zero vector using either the squared Euclidean norm L2, the absolute norm L1, or a combination of both (Elastic Net). If the parameter update crosses the 0.0 value because of the regularizer, the update is truncated to 0.0 to allow for learning sparse models and to achieve online feature selection.

This implementation works with data represented as dense numpy arrays of floating point values for the features.

Parameters

loss : str, 'squared_loss' or 'huber'
The loss function to be used. Defaults to 'squared_loss', which refers to the ordinary least squares fit. 'huber' is an epsilon-insensitive loss function for robust regression.
penalty : str, 'l2' or 'l1' or 'elasticnet'
The penalty (aka regularization term) to be used. Defaults to 'l2', which is the standard regularizer for linear SVM models. 'l1' and 'elasticnet' might bring sparsity to the model (feature selection) not achievable with 'l2'.
alpha : float
Constant that multiplies the regularization term. Defaults to 0.0001.
rho : float
The Elastic Net mixing parameter, with 0 < rho <= 1. Defaults to 0.85.
fit_intercept: bool
Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. Defaults to True.
n_iter: int
The number of passes over the training data (aka epochs). Defaults to 5.
shuffle: bool
Whether or not the training data should be shuffled after each epoch. Defaults to False.
seed: int, optional
The seed of the pseudo random number generator to use when shuffling the data.
verbose: integer, optional
The verbosity level
p : float
Epsilon in the epsilon-insensitive Huber loss function; only used if loss=='huber'.
learning_rate : string, optional

The learning rate:

  • constant: eta = eta0
  • optimal: eta = 1.0/(t+t0)
  • invscaling: eta = eta0 / pow(t, power_t) [default]
eta0 : double, optional
The initial learning rate [default 0.01].
power_t : double, optional
The exponent for inverse scaling learning rate [default 0.25].

Attributes

coef_ : array, shape = [n_features]
Weights assigned to the features.
intercept_ : array, shape = [1]
The intercept term.

Examples

>>> import numpy as np
>>> from scikits.learn import linear_model
>>> n_samples, n_features = 10, 5
>>> np.random.seed(0)
>>> y = np.random.randn(n_samples)
>>> X = np.random.randn(n_samples, n_features)
>>> clf = linear_model.sparse.SGDRegressor()
>>> clf.fit(X, y)
SGDRegressor(loss='squared_loss', power_t=0.25, shuffle=False, verbose=0,
       n_iter=5, learning_rate='invscaling', fit_intercept=True,
       penalty='l2', p=0.1, seed=0, eta0=0.01, rho=1.0, alpha=0.0001)

See also

RidgeRegression, ElasticNet, Lasso, SVR

Overrides: object.__init__

_execute(self, x)

 
Overrides: Node._execute

_get_supported_dtypes(self)

 
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
Overrides: Node._get_supported_dtypes

_stop_training(self, **kwargs)

 
Concatenate the collected data in a single array.
Overrides: Node._stop_training

execute(self, x)

 

Predict using the linear model. This node has been automatically generated by wrapping the scikits.learn.linear_model.sparse.stochastic_gradient.SGDRegressor class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

Parameters

X : array or scipy.sparse matrix of shape [n_samples, n_features]
Whether a numpy array or a scipy.sparse matrix is accepted depends on the actual implementation.

Returns

array, shape = [n_samples]
Array containing the predicted target values.
Overrides: Node.execute

is_invertible()
Static Method

 
Return True if the node can be inverted, False otherwise.
Overrides: Node.is_invertible
(inherited documentation)

is_trainable()
Static Method

 
Return True if the node can be trained, False otherwise.
Returns: bool
A boolean indicating whether the node can be trained.
Overrides: Node.is_trainable

stop_training(self, **kwargs)

 

Fit linear model with Stochastic Gradient Descent. This node has been automatically generated by wrapping the scikits.learn.linear_model.sparse.stochastic_gradient.SGDRegressor class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

Parameters

X : numpy array of shape [n_samples,n_features]
Training data
y : numpy array of shape [n_samples]
Target values
coef_init : array, shape = [n_features]
The initial coefficients to warm-start the optimization.
intercept_init : array, shape = [1]
The initial intercept to warm-start the optimization.
sample_weight : array-like, shape = [n_samples], optional
Weights applied to individual samples (1. for unweighted).

Returns

self : returns an instance of self.
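Given the **kwargs in stop_training's signature, the warm-start parameters above should be passable through the node. A hedged sketch, assuming the keyword arguments are forwarded to the wrapped fit method (w0 and b0 are hypothetical initial values; x and y as in the earlier usage example):

>>> import numpy as np
>>> w0 = np.zeros(5)      # initial coefficients, shape = [n_features]
>>> b0 = np.zeros(1)      # initial intercept, shape = [1]
>>> node.train(x, y)
>>> node.stop_training(coef_init=w0, intercept_init=b0)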

Overrides: Node.stop_training