spreg.ML_Error¶
- class spreg.ML_Error(y, x, w, slx_lags=0, slx_vars='All', method='full', epsilon=1e-07, vm=False, name_y=None, name_x=None, name_w=None, name_ds=None, latex=False)[source]¶
ML estimation of the spatial error model with all results and diagnostics; [Ans88]
- Parameters:
- y
numpy.ndarray or pandas.Series
nx1 array for dependent variable
- x
numpy.ndarray or pandas object
Two dimensional array with n rows and one column for each independent (exogenous) variable, excluding the constant
- w
Sparse matrix
Spatial weights sparse matrix
- slx_lags
integer
Number of spatial lags of X to include in the model specification. If slx_lags>0, the specification becomes of the SLX-Error type.
- slx_vars
either "All" (default) or list of booleans
Selects the x variables to be lagged
- method
str
if ‘full’, brute force calculation (full matrix expressions); if ‘ord’, Ord eigenvalue method; if ‘LU’, LU sparse matrix decomposition
- epsilon
float
tolerance criterion in minimize_scalar function and inverse_product
- vm
bool
if True, include variance-covariance matrix in summary results
- name_y
str
Name of dependent variable for use in output
- name_x
list of strings
Names of independent variables for use in output
- name_w
str
Name of weights matrix for use in output
- name_ds
str
Name of dataset for use in output
- latex
bool
Specifies if summary is to be printed in LaTeX format
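As a rough illustration of the slx_lags option (this is not spreg's internal code), setting slx_lags=1 amounts to appending the spatial lag WX of each x column to the design matrix, which yields the SLX-Error specification. A minimal numpy sketch with made-up data and a toy row-standardized weights matrix:

```python
import numpy as np

# Hypothetical data: 5 observations, 2 exogenous variables
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))

# Toy row-standardized weights matrix (every unit neighbors all others)
w = np.ones((5, 5)) - np.eye(5)
w /= w.sum(axis=1, keepdims=True)

wx = w @ x                  # spatial lag of X
x_slx = np.hstack((x, wx))  # design matrix for the SLX-Error form
```

With slx_lags=2, a second round of lagging (W@wx) would be appended as well; slx_vars restricts which columns of x get lagged.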
- Attributes:
- output
dataframe
regression results pandas dataframe
- betas
array
(k+1)x1 array of estimated coefficients (lambda last)
- lam
float
estimate of spatial autoregressive coefficient
- u
array
nx1 array of residuals
- e_filtered
array
nx1 array of spatially filtered residuals
- predy
array
nx1 array of predicted y values
- n
integer
Number of observations
- k
integer
Number of variables for which coefficients are estimated (including the constant, excluding lambda)
- y
array
nx1 array for dependent variable
- x
array
Two dimensional array with n rows and one column for each independent (exogenous) variable, including the constant
- method
str
log Jacobian method; if ‘full’: brute force (full matrix computations)
- epsilon
float
tolerance criterion used in minimize_scalar function and inverse_product
- mean_y
float
Mean of dependent variable
- std_y
float
Standard deviation of dependent variable
- varb
array
Variance covariance matrix (k+1 x k+1) - includes var(lambda)
- vm1
array
variance covariance matrix for lambda, sigma (2 x 2)
- sig2
float
Sigma squared used in computations
- logll
float
maximized log-likelihood (including constant terms)
- pr2
float
Pseudo R squared (squared correlation between y and ypred)
- utu
float
Sum of squared residuals
- std_err
array
1x(k+1) array of standard errors of the betas (including lambda)
- z_stat
list of tuples
z statistic; each tuple contains the pair (statistic, p-value), where each is a float
- name_y
str
Name of dependent variable for use in output
- name_x
list of strings
Names of independent variables for use in output
- name_w
str
Name of weights matrix for use in output
- name_ds
str
Name of dataset for use in output
- title
str
Name of the regression method used
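The pr2 attribute above is defined as the squared correlation between y and the predicted values. A minimal numpy sketch of that definition (an illustration of the formula, not spreg's internal code):

```python
import numpy as np

def pseudo_r2(y, predy):
    # Squared Pearson correlation between observed and predicted values
    r = np.corrcoef(y.ravel(), predy.ravel())[0, 1]
    return r ** 2
```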
Examples
>>> import numpy as np
>>> import libpysal
>>> from libpysal.examples import load_example
>>> from libpysal.weights import Queen
>>> from spreg import ML_Error
>>> np.set_printoptions(suppress=True)  # prevent scientific format
>>> south = load_example('South')
>>> db = libpysal.io.open(south.get_path("south.dbf"), 'r')
>>> ds_name = "south.dbf"
>>> y_name = "HR90"
>>> y = np.array(db.by_col(y_name))
>>> y.shape = (len(y), 1)
>>> x_names = ["RD90", "PS90", "UE90", "DV90"]
>>> x = np.array([db.by_col(var) for var in x_names]).T
>>> w = Queen.from_shapefile(south.get_path("south.shp"))
>>> w_name = "south_q.gal"
>>> w.transform = 'r'
>>> mlerr = ML_Error(y, x, w, name_y=y_name, name_x=x_names, name_w=w_name, name_ds=ds_name)
>>> np.around(mlerr.betas, decimals=4)
array([[ 6.1492],
       [ 4.4024],
       [ 1.7784],
       [-0.3781],
       [ 0.4858],
       [ 0.2991]])
>>> "{0:.4f}".format(mlerr.lam)
'0.2991'
>>> "{0:.4f}".format(mlerr.mean_y)
'9.5493'
>>> "{0:.4f}".format(mlerr.std_y)
'7.0389'
>>> np.around(np.diag(mlerr.vm), decimals=4)
array([ 1.0648, 0.0555, 0.0454, 0.0061, 0.0148, 0.0014])
>>> np.around(mlerr.sig2, decimals=4)
array([[ 32.4069]])
>>> "{0:.4f}".format(mlerr.logll)
'-4471.4071'
>>> "{0:.4f}".format(mlerr.aic)
'8952.8141'
>>> "{0:.4f}".format(mlerr.schwarz)
'8979.0779'
>>> "{0:.4f}".format(mlerr.pr2)
'0.3058'
>>> "{0:.4f}".format(mlerr.utu)
'48534.9148'
>>> np.around(mlerr.std_err, decimals=4)
array([ 1.0319, 0.2355, 0.2132, 0.0784, 0.1217, 0.0378])
>>> np.around(mlerr.z_stat, decimals=4)
array([[ 5.9593, 0.     ],
       [ 18.6902, 0.     ],
       [ 8.3422, 0.     ],
       [ -4.8233, 0.     ],
       [ 3.9913, 0.0001],
       [ 7.9089, 0.     ]])
>>> mlerr.name_y
'HR90'
>>> mlerr.name_x
['CONSTANT', 'RD90', 'PS90', 'UE90', 'DV90', 'lambda']
>>> mlerr.name_w
'south_q.gal'
>>> mlerr.name_ds
'south.dbf'
>>> mlerr.title
'MAXIMUM LIKELIHOOD SPATIAL ERROR (METHOD = FULL)'
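For intuition about what method='full' and epsilon control, the sketch below writes out a concentrated log-likelihood for the spatial error model using dense ("full") matrix expressions. It is an illustration of the approach, not spreg's implementation: the constant terms of the likelihood are dropped, and all names are made up. Per the parameter descriptions above, ML_Error maximizes such a function over lambda with minimize_scalar to tolerance epsilon.

```python
import numpy as np

def concentrated_loglik(lam, y, x, w):
    """Concentrated log-likelihood of the spatial error model in lambda,
    with dense matrix expressions as in method='full'.
    Likelihood constants are omitted for brevity."""
    n = y.shape[0]
    a = np.eye(n) - lam * w           # (I - lambda*W)
    ys, xs = a @ y, a @ x             # spatially filtered y and X
    beta = np.linalg.lstsq(xs, ys, rcond=None)[0]
    e = ys - xs @ beta                # spatially filtered residuals
    sig2 = (e.T @ e).item() / n       # concentrated-out sigma squared
    _, logdet = np.linalg.slogdet(a)  # log-Jacobian ln|I - lambda*W|
    return -0.5 * n * np.log(sig2) + logdet
```

The ‘ord’ and ‘LU’ methods replace only the log-Jacobian term with cheaper computations (eigenvalues of W, or a sparse LU decomposition of I - lambda*W).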
- __init__(y, x, w, slx_lags=0, slx_vars='All', method='full', epsilon=1e-07, vm=False, name_y=None, name_x=None, name_w=None, name_ds=None, latex=False)[source]¶
Methods

__init__(y, x, w[, slx_lags, slx_vars, ...])
get_x_lag(w, regimes_att)

Attributes
- get_x_lag(w, regimes_att)¶
- property mean_y¶
- property sig2n¶
- property sig2n_k¶
- property std_y¶
- property utu¶
- property vm¶