spreg.ML_Error

class spreg.ML_Error(y, x, w, slx_lags=0, slx_vars='All', method='full', epsilon=1e-07, vm=False, name_y=None, name_x=None, name_w=None, name_ds=None, latex=False)[source]

ML estimation of the spatial error model with all results and diagnostics; [Ans88]

Parameters:
y : numpy.ndarray or pandas.Series

nx1 array for dependent variable

x : numpy.ndarray or pandas object

Two dimensional array with n rows and one column for each independent (exogenous) variable, excluding the constant

w : Sparse matrix

Spatial weights sparse matrix

slx_lags : integer

Number of spatial lags of X to include in the model specification. If slx_lags>0, the specification becomes of the SLX-Error type (see the sketch after this parameter list).

slx_vars : either "All" (default) or list of booleans selecting the x variables to be lagged

method : str

if 'full', brute force calculation (full matrix expressions); if 'ord', Ord eigenvalue method; if 'LU', LU sparse matrix decomposition

epsilon : float

tolerance criterion in minimize_scalar function and inverse_product

vm : bool

if True, include variance-covariance matrix in summary results

name_y : str

Name of dependent variable for use in output

name_x : list of strings

Names of independent variables for use in output

name_w : str

Name of weights matrix for use in output

name_ds : str

Name of dataset for use in output

latex : bool

Specifies if summary is to be printed in latex format
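A minimal sketch of an SLX-Error specification (slx_lags=1) estimated with the sparse LU log-Jacobian; it assumes y, x, and a row-standardized Queen weights object w prepared as in the Examples section below, and the name_* values are illustrative.

>>> # Sketch: SLX-Error model (spatial lags of X) estimated with method='LU'.
>>> # Assumes y, x, w, y_name, x_names and w_name from the Examples section below.
>>> mlerr_slx = ML_Error(y, x, w, slx_lags=1, method='LU',
...                      name_y=y_name, name_x=x_names,
...                      name_w=w_name, name_ds="south.dbf")
>>> results_table = mlerr_slx.output   # coefficient table as a pandas DataFrame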

Attributes:
output : dataframe

regression results pandas dataframe

betas : array

(k+1)x1 array of estimated coefficients (lambda last)

lam : float

estimate of spatial autoregressive coefficient

u : array

nx1 array of residuals

e_filtered : array

nx1 array of spatially filtered residuals

predy : array

nx1 array of predicted y values

n : integer

Number of observations

k : integer

Number of variables for which coefficients are estimated (including the constant, excluding lambda)

y : array

nx1 array for dependent variable

x : array

Two dimensional array with n rows and one column for each independent (exogenous) variable, including the constant

method : str

log Jacobian method; if 'full': brute force (full matrix computations)

epsilon : float

tolerance criterion used in minimize_scalar function and inverse_product

mean_y : float

Mean of dependent variable

std_y : float

Standard deviation of dependent variable

varb : array

Variance covariance matrix (k+1 x k+1), includes var(lambda)

vm1 : array

variance covariance matrix for lambda, sigma (2 x 2)

sig2 : float

Sigma squared used in computations

logll : float

maximized log-likelihood (including constant terms)

aic : float

Akaike information criterion

schwarz : float

Schwarz criterion

pr2 : float

Pseudo R squared (squared correlation between y and predy); a short verification sketch follows this attribute list

utu : float

Sum of squared residuals

std_err : array

1xk array of standard errors of the betas

z_stat : list of tuples

z statistic; each tuple contains the pair (statistic, p-value), where each is a float

name_y : str

Name of dependent variable for use in output

name_x : list of strings

Names of independent variables for use in output

name_w : str

Name of weights matrix for use in output

name_ds : str

Name of dataset for use in output

title : str

Name of the regression method used
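As a quick check of the pr2 definition above, the pseudo R squared can be recomputed by hand from the y and predy attributes; this is a minimal sketch assuming a fitted model named mlerr as created in the Examples section.

>>> # Sketch: recompute pr2 as the squared correlation between y and predy.
>>> # Assumes a fitted model named mlerr, as created in the Examples below.
>>> import numpy as np
>>> r = np.corrcoef(mlerr.y.flatten(), mlerr.predy.flatten())[0, 1]
>>> pr2_check = r ** 2   # should agree with mlerr.pr2 up to rounding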

Examples

>>> import numpy as np
>>> import libpysal
>>> from libpysal.examples import load_example
>>> from libpysal.weights import Queen
>>> from spreg import ML_Error
>>> np.set_printoptions(suppress=True) #prevent scientific format
>>> south = load_example('South')
>>> db = libpysal.io.open(south.get_path("south.dbf"),'r')
>>> y_name = "HR90"
>>> y = np.array(db.by_col(y_name))
>>> y.shape = (len(y),1)
>>> x_names = ["RD90","PS90","UE90","DV90"]
>>> x = np.array([db.by_col(var) for var in x_names]).T
>>> w = Queen.from_shapefile(south.get_path("south.shp"))
>>> w_name = "south_q.gal"
>>> w.transform = 'r'
>>> ds_name = "south.dbf"
>>> mlerr = ML_Error(y, x, w, name_y=y_name, name_x=x_names, name_w=w_name, name_ds=ds_name)
>>> np.around(mlerr.betas, decimals=4) 
array([[ 6.1492],
       [ 4.4024],
       [ 1.7784],
       [-0.3781],
       [ 0.4858],
       [ 0.2991]])
>>> "{0:.4f}".format(mlerr.lam) 
'0.2991'
>>> "{0:.4f}".format(mlerr.mean_y) 
'9.5493'
>>> "{0:.4f}".format(mlerr.std_y) 
'7.0389'
>>> np.around(np.diag(mlerr.vm), decimals=4) 
array([ 1.0648,  0.0555,  0.0454,  0.0061,  0.0148,  0.0014])
>>> np.around(mlerr.sig2, decimals=4) 
array([[ 32.4069]])
>>> "{0:.4f}".format(mlerr.logll) 
'-4471.4071'
>>> "{0:.4f}".format(mlerr.aic) 
'8952.8141'
>>> "{0:.4f}".format(mlerr.schwarz) 
'8979.0779'
>>> "{0:.4f}".format(mlerr.pr2) 
'0.3058'
>>> "{0:.4f}".format(mlerr.utu) 
'48534.9148'
>>> np.around(mlerr.std_err, decimals=4) 
array([ 1.0319,  0.2355,  0.2132,  0.0784,  0.1217,  0.0378])
>>> np.around(mlerr.z_stat, decimals=4) 
array([[  5.9593,   0.    ],
       [ 18.6902,   0.    ],
       [  8.3422,   0.    ],
       [ -4.8233,   0.    ],
       [  3.9913,   0.0001],
       [  7.9089,   0.    ]])
>>> mlerr.name_y 
'HR90'
>>> mlerr.name_x 
['CONSTANT', 'RD90', 'PS90', 'UE90', 'DV90', 'lambda']
>>> mlerr.name_w 
'south_q.gal'
>>> mlerr.name_ds 
'south.dbf'
>>> mlerr.title 
'MAXIMUM LIKELIHOOD SPATIAL ERROR (METHOD = FULL)'
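The per-coefficient results shown above are also available programmatically through the documented output attribute; a minimal sketch, assuming the fitted mlerr object from the example above (the file name is illustrative).

>>> # Sketch: regression results as a pandas DataFrame, via the `output` attribute.
>>> results_df = mlerr.output
>>> results_df.to_csv("ml_error_south.csv", index=False)   # illustrative export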
__init__(y, x, w, slx_lags=0, slx_vars='All', method='full', epsilon=1e-07, vm=False, name_y=None, name_x=None, name_w=None, name_ds=None, latex=False)[source]

Methods

__init__(y, x, w[, slx_lags, slx_vars, ...])

get_x_lag(w, regimes_att)

Attributes

mean_y

sig2n

sig2n_k

std_y

utu

vm

get_x_lag(w, regimes_att)
property mean_y
property sig2n
property sig2n_k
property std_y
property utu
property vm