__init__(self,
input_dim=None,
output_dim=None,
dtype=None,
**kwargs)
(Constructor)
The Gaussian Process model class.
This node has been automatically generated by wrapping the scikits.learn.gaussian_process.gaussian_process.GaussianProcess class
from the sklearn library. The wrapped instance can be accessed
through the scikits_alg attribute.
Parameters
- regr : string or callable, optional
A regression function returning an array of outputs of the linear
regression functional basis. The number of observations n_samples
should be greater than the size p of this basis.
Default assumes a simple constant regression trend.
Here is the list of built-in regression models:
- 'constant', 'linear', 'quadratic'
- corr : string or callable, optional
A stationary autocorrelation function returning the autocorrelation
between two points x and x'.
Default assumes a squared-exponential autocorrelation model.
Here is the list of built-in correlation models:
- 'absolute_exponential', 'squared_exponential',
  'generalized_exponential', 'cubic', 'linear'
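As a rough illustration of what a stationary autocorrelation model computes, here is a minimal numpy sketch of the squared-exponential form r(x, x') = exp(-sum(theta * |x - x'|**2)). This mirrors the built-in 'squared_exponential' model conceptually; it is not the wrapped class's internal code, and the helper name is our own.

```python
import numpy as np

def squared_exponential(theta, dx):
    """Squared-exponential autocorrelation r = exp(-sum(theta * dx**2)).

    theta : scalar or array of shape (n_features,), correlation parameters
    dx    : componentwise differences x - x', shape (n_eval, n_features)
    """
    dx = np.atleast_2d(dx)
    return np.exp(-np.sum(theta * dx ** 2, axis=1))

# identical points are perfectly correlated; distant points decorrelate
r = squared_exponential(0.1, np.array([[0.0], [1.0], [10.0]]))
```

A scalar theta gives an isotropic model; an array of shape (n_features,) gives one correlation parameter per input dimension.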
- beta0 : double array_like, optional
The regression weight vector to perform Ordinary Kriging (OK).
Default assumes Universal Kriging (UK) so that the vector beta of
regression weights is estimated using the maximum likelihood
principle.
- storage_mode : string, optional
A string specifying whether the Cholesky decomposition of the
correlation matrix should be stored in the class (storage_mode =
'full') or not (storage_mode = 'light').
Default assumes storage_mode = 'full', so that the
Cholesky decomposition of the correlation matrix is stored.
This might be a useful parameter when one is not interested in the
MSE and only plans to estimate the BLUP, for which the correlation
matrix is not required.
- verbose : boolean, optional
A boolean specifying the verbosity level.
Default is verbose = False.
- theta0 : double array_like, optional
An array with shape (n_features,) or (1,).
The parameters in the autocorrelation model.
If thetaL and thetaU are also specified, theta0 is considered as
the starting point for the maximum likelihood estimation of the
best set of parameters.
Default assumes isotropic autocorrelation model with theta0 = 1e-1.
- thetaL : double array_like, optional
An array with shape matching theta0's.
Lower bound on the autocorrelation parameters for maximum
likelihood estimation.
Default is None, which skips maximum likelihood estimation and
uses theta0.
- thetaU : double array_like, optional
An array with shape matching theta0's.
Upper bound on the autocorrelation parameters for maximum
likelihood estimation.
Default is None, which skips maximum likelihood estimation and
uses theta0.
- normalize : boolean, optional
Input X and observations y are centered and scaled with respect to
means and standard deviations estimated from the n_samples
observations provided.
Default is normalize = True so that data is normalized to ease
maximum likelihood estimation.
- nugget : double, optional
Introduce a nugget effect to allow smooth predictions from noisy
data.
Default assumes a nugget close to machine precision for the sake of
robustness (nugget = 10. * MACHINE_EPSILON).
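To see why even a tiny nugget matters, here is an illustrative numpy sketch (not the wrapped class's internal code): with a duplicated training point, the squared-exponential correlation matrix R is singular, and the Cholesky factorization used to compute the BLUP breaks down; adding nugget * I to the diagonal restores positive definiteness.

```python
import numpy as np

MACHINE_EPSILON = np.finfo(np.double).eps
nugget = 10.0 * MACHINE_EPSILON          # the documented default

X = np.array([1.0, 3.0, 3.0, 5.0])       # note the duplicated input 3.0
D = X[:, None] - X[None, :]              # pairwise differences
R = np.exp(-10.0 * D ** 2)               # squared-exponential, theta = 10

# R has two identical rows (rank 3 out of 4), so R alone is singular,
# but R + nugget * I factorizes cleanly:
L = np.linalg.cholesky(R + nugget * np.eye(len(X)))
```

Larger nugget values trade fidelity to the data for smoother, more robust predictions.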
- optimizer : string, optional
A string specifying the optimization algorithm to be used.
Default uses 'fmin_cobyla' algorithm from scipy.optimize.
Here is the list of available optimizers:
- 'fmin_cobyla', 'Welch'
The 'Welch' optimizer is due to Welch et al., see reference [2]. It
consists of iterating over several one-dimensional optimizations
instead of running one single multi-dimensional optimization.
- random_start : int, optional
The number of times the Maximum Likelihood Estimation should be
performed from a random starting point.
The first MLE always uses the specified starting point (theta0),
the next starting points are picked at random according to an
exponential distribution (log-uniform on [thetaL, thetaU]).
Default does not use a random starting point (random_start = 1).
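The restart scheme above can be sketched as follows (a hypothetical helper, using our own names; the wrapped class does this internally): the first MLE run starts from theta0, and each additional run draws a starting theta log-uniformly on [thetaL, thetaU], i.e. uniformly in log-space.

```python
import numpy as np

rng = np.random.RandomState(0)
theta0, thetaL, thetaU = 1e-1, 1e-3, 1.0

def log_uniform(rng, lo, hi):
    # uniform in log-space, then mapped back: lo * (hi/lo)**u, u ~ U(0, 1)
    return lo * (hi / lo) ** rng.rand()

random_start = 5
starts = [theta0] + [log_uniform(rng, thetaL, thetaU)
                     for _ in range(random_start - 1)]
```

Every drawn starting point is guaranteed to lie inside [thetaL, thetaU], so each restart explores a different region of the feasible parameter box.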
Example
>>> import numpy as np
>>> from scikits.learn.gaussian_process import GaussianProcess
>>> X = np.atleast_2d([1., 3., 5., 6., 7., 8.]).T
>>> y = (X * np.sin(X)).ravel()
>>> gp = GaussianProcess(theta0=0.1, thetaL=.001, thetaU=1.)
>>> gp.fit(X, y)
GaussianProcess(normalize=True, ...)
Implementation details
The present implementation is based on a translation of the DACE
Matlab toolbox, see reference [1].
References
- [1] S.N. Lophaven, H.B. Nielsen and J. Sondergaard (2002).
DACE - A MATLAB Kriging Toolbox.
http://www2.imm.dtu.dk/~hbn/dace/dace.pdf
- [2] W.J. Welch, R.J. Buck, J. Sacks, H.P. Wynn, T.J. Mitchell, and M.D.
Morris (1992). Screening, predicting, and computer experiments.
Technometrics, 34(1), 15-25.
http://www.jstor.org/pss/1269548
- Overrides:
object.__init__