spreg.GM_Error_Het
- class spreg.GM_Error_Het(y, x, w, slx_lags=0, slx_vars='All', max_iter=1, epsilon=1e-05, step1c=False, vm=False, name_y=None, name_x=None, name_w=None, name_ds=None, latex=False, hard_bound=False)
GMM method for a spatial error model with heteroskedasticity, with results and diagnostics; based on [ADKP10], following [Ans11].
- Parameters:
- y
numpy.ndarray or pandas.Series
nx1 array for dependent variable
- x
numpy.ndarray or pandas object
Two dimensional array with n rows and one column for each independent (exogenous) variable, excluding the constant
- w
pysal W object
Spatial weights object
- slx_lags
integer
Number of spatial lags of X to include in the model specification. If slx_lags>0, the specification becomes of the SLX-Error type (a usage sketch follows this parameter list).
- slx_vars
"All" (default) or list of booleans
Selects the x variables to be spatially lagged
- max_iter
int
Maximum number of iterations of steps 2a and 2b from [ADKP10]. Note: epsilon provides an additional stop condition.
- epsilon
float
Minimum change in lambda required to stop iterations of steps 2a and 2b from [ADKP10]. Note: max_iter provides an additional stop condition.
- step1c
bool
If True, then include Step 1c from [ADKP10].
- vm
bool
If True, include variance-covariance matrix in summary results
- name_y
str
Name of dependent variable for use in output
- name_x
list of strings
Names of independent variables for use in output
- name_w
str
Name of weights matrix for use in output
- name_ds
str
Name of dataset for use in output
- latex
bool
Specifies if the summary is to be printed in LaTeX format
- hard_bound
bool
If True, raises an exception if the estimated spatial autoregressive parameter is outside its maximum/minimum bounds.
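For orientation before the full walk-through in the Examples section, here is a minimal sketch of the call signature using synthetic placeholder data on a 7x7 lattice (model, y_demo, x_demo and w_demo are illustrative names, not part of the API):

>>> import numpy as np
>>> import libpysal
>>> from spreg import GM_Error_Het
>>> rng = np.random.default_rng(0)
>>> y_demo = rng.normal(size=(49, 1))      # nx1 dependent variable
>>> x_demo = rng.normal(size=(49, 2))      # two regressors, constant excluded
>>> w_demo = libpysal.weights.lat2W(7, 7)  # rook weights on a 7x7 lattice
>>> w_demo.transform = 'r'
>>> model = GM_Error_Het(y_demo, x_demo, w=w_demo, slx_lags=1, max_iter=5, epsilon=1e-05)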
- Attributes:
- output
dataframe
regression results pandas dataframe
- summary
str
Summary of regression results and diagnostics (note: use in conjunction with the print command)
- betas
array
kx1 array of estimated coefficients
- u
array
nx1 array of residuals
- e_filtered
array
nx1 array of spatially filtered residuals
- predy
array
nx1 array of predicted y values
- n
integer
Number of observations
- k
integer
Number of variables for which coefficients are estimated (including the constant)
- y
array
nx1 array for dependent variable
- x
array
Two dimensional array with n rows and one column for each independent (exogenous) variable, including the constant
- iter_stop
str
Stop criterion reached during iteration of steps 2a and 2b from [ADKP10].
- iteration
integer
Number of iterations of steps 2a and 2b from [ADKP10].
- mean_y
float
Mean of dependent variable
- std_y
float
Standard deviation of dependent variable
- pr2
float
Pseudo R squared (squared correlation between y and ypred)
- vm
array
Variance covariance matrix (kxk)
- std_err
array
1xk array of standard errors of the betas
- z_stat
list of tuples
z statistic; each tuple contains the pair (statistic, p-value), where each is a float
- xtx
float
\(X'X\)
- name_y
str
Name of dependent variable for use in output
- name_x
list of strings
Names of independent variables for use in output
- name_w
str
Name of weights matrix for use in output
- name_ds
str
Name of dataset for use in output
- title
str
Name of the regression method used
Examples
We first need to import the needed modules, namely numpy to convert the data we read into arrays that spreg understands and libpysal to perform all the analysis.

>>> import numpy as np
>>> import libpysal
Open data on Columbus neighborhood crime (49 areas) using libpysal.io.open(). This is the DBF associated with the Columbus shapefile. Note that libpysal.io.open() also reads data in CSV format; since the actual class requires data to be passed in as numpy arrays, the user can read their data in using any method.
>>> db = libpysal.io.open(libpysal.examples.get_path('columbus.dbf'),'r')
Extract the HOVAL column (home values) from the DBF file and make it the dependent variable for the regression. Note that PySAL requires this to be a numpy array of shape (n, 1) as opposed to the also common shape of (n, ) that other packages accept.
>>> y = np.array(db.by_col("HOVAL"))
>>> y = np.reshape(y, (49,1))
Extract INC (income) and CRIME (crime) vectors from the DBF to be used as independent variables in the regression. Note that PySAL requires this to be an nxj numpy array, where j is the number of independent variables (not including a constant). By default this class adds a vector of ones to the independent variables passed in.
>>> X = []
>>> X.append(db.by_col("INC"))
>>> X.append(db.by_col("CRIME"))
>>> X = np.array(X).T
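Because the Parameters section above notes that y and x may also be passed as pandas objects, the same inputs can be assembled as a Series and a DataFrame instead; a sketch reusing the db handle from above (y_s and X_df are illustrative names):

>>> import pandas as pd
>>> y_s = pd.Series(db.by_col("HOVAL"), name="home value")
>>> X_df = pd.DataFrame({"income": db.by_col("INC"), "crime": db.by_col("CRIME")})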
Since we want to run a spatial error model, we need to specify the spatial weights matrix that includes the spatial configuration of the observations into the error component of the model. To do that, we can open an already existing gal file or create a new one. In this case, we will create one from columbus.shp.

>>> w = libpysal.weights.Rook.from_shapefile(libpysal.examples.get_path("columbus.shp"))
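As mentioned above, opening an already existing gal file works just as well; a sketch assuming the columbus.gal file shipped with the libpysal examples:

>>> gal = libpysal.io.open(libpysal.examples.get_path("columbus.gal"), 'r')
>>> w_gal = gal.read()
>>> gal.close()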
Unless there is a good reason not to do it, the weights have to be row-standardized so that every row of the matrix sums to one. Among other things, this allows us to interpret the spatial lag of a variable as the average value of the neighboring observations. In PySAL, this can be easily performed in the following way:
>>> w.transform = 'r'
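A quick sanity check of the transformation (this assumes the Columbus rook weights contain no islands; w.full() returns the dense weights array together with the ordered ids):

>>> wf, ids = w.full()
>>> bool(np.allclose(wf.sum(axis=1), 1.0))
True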
With the preliminaries in place, we are ready to run the model. In this case, we will need the variables and the weights matrix. If we want to have the names of the variables printed in the output summary, we will have to pass them in as well, although this is optional.
>>> from spreg import GM_Error_Het
>>> reg = GM_Error_Het(y, X, w=w, step1c=True, name_y='home value', name_x=['income', 'crime'], name_ds='columbus')
Once we have run the model, we can explore the output a little. The regression object we have created has many attributes, so take your time to discover them. This class offers an error model that explicitly accounts for heteroskedasticity and that, unlike the models in spreg.error_sp, allows for inference on the spatial parameter.

>>> print(reg.name_x)
['CONSTANT', 'income', 'crime', 'lambda']
Hence, we find the same number of betas as standard errors, which we calculate by taking the square root of the diagonal of the variance-covariance matrix:
>>> print(np.around(np.hstack((reg.betas, np.sqrt(reg.vm.diagonal()).reshape(4,1))), 4))
[[47.9963 11.479 ]
 [ 0.7105  0.3681]
 [-0.5588  0.1616]
 [ 0.4118  0.168 ]]
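The same standard errors are also exposed directly on the regression object; assuming std_err is derived from the variance-covariance matrix exactly as above, the two routes agree:

>>> bool(np.allclose(reg.std_err, np.sqrt(reg.vm.diagonal())))
True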
Alternatively, we can have a summary of the output by typing: print(reg.summary)
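Finally, setting slx_lags>0 switches to the SLX-Error specification described in the Parameters section; a minimal sketch reusing the objects created above (reg_slx is an illustrative name):

>>> reg_slx = GM_Error_Het(y, X, w=w, slx_lags=1, name_y='home value', name_x=['income', 'crime'], name_ds='columbus')
>>> bool(reg_slx.betas.shape[0] > reg.betas.shape[0])  # extra coefficients for the lagged X terms
True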
- __init__(y, x, w, slx_lags=0, slx_vars='All', max_iter=1, epsilon=1e-05, step1c=False, vm=False, name_y=None, name_x=None, name_w=None, name_ds=None, latex=False, hard_bound=False)
Methods

__init__(y, x, w[, slx_lags, slx_vars, ...])

Attributes
- property mean_y
- property std_y