spreg.GMM_Error
- class spreg.GMM_Error(y, x, w, yend=None, q=None, estimator='het', add_wy=False, slx_lags=0, slx_vars='All', vm=False, name_y=None, name_x=None, name_w=None, name_yend=None, name_q=None, name_ds=None, latex=False, hard_bound=False, spat_diag=False, **kwargs)
Wrapper function to call any of the GMM methods for a spatial error model available in spreg.
- Parameters:
- y : numpy.ndarray or pandas.Series
  nx1 array for dependent variable
- x : numpy.ndarray or pandas object
  Two dimensional array with n rows and one column for each independent (exogenous) variable, excluding the constant
- w : pysal W object
  Spatial weights object (always needed)
- yend : numpy.ndarray or pandas object
  Two dimensional array with n rows and one column for each endogenous variable (if any)
- q : numpy.ndarray or pandas object
  Two dimensional array with n rows and one column for each external exogenous variable to use as instruments (if any) (note: this should not contain any variables from x)
- estimator : str
  Choice of estimator to be used. Options are: ‘het’, which is robust to heteroskedasticity, ‘hom’, which assumes homoskedasticity, and ‘kp98’, which does not provide inference on the spatial parameter for the error term.
- add_wy : bool
  If True, then a spatial lag of the dependent variable is included.
- slx_lags : integer
  Number of spatial lags of X to include in the model specification. If slx_lags>0, the specification becomes of the SLX-Error or GNSM type.
- slx_vars : "All" (default) or list of booleans
  Selects which x variables are to be lagged.
- vm : bool
  If True, include variance-covariance matrix in summary results
- name_y : str
  Name of dependent variable for use in output
- name_x : list of strings
  Names of independent variables for use in output
- name_w : str
  Name of weights matrix for use in output
- name_yend : list of strings
  Names of endogenous variables for use in output
- name_q : list of strings
  Names of instruments for use in output
- name_ds : str
  Name of dataset for use in output
- latex : bool
  Specifies if the summary is to be printed in LaTeX format
- hard_bound : bool
  If True, raises an exception if the estimated spatial autoregressive parameter is outside the maximum/minimum bounds.
- spat_diag : bool
  Ignored; included for compatibility with other models
- **kwargs : keywords
  Additional arguments to pass on to the estimators. See the specific functions for details on what can be used.
- Attributes:
- output : dataframe
  Regression results pandas dataframe
- summary : str
  Summary of regression results and diagnostics (note: use in conjunction with the print command)
- betas : array
  kx1 array of estimated coefficients
- u : array
  nx1 array of residuals
- e_filtered : array
  nx1 array of spatially filtered residuals
- predy : array
  nx1 array of predicted y values
- n : integer
  Number of observations
- k : integer
  Number of variables for which coefficients are estimated (including the constant)
- y : array
  nx1 array for dependent variable
- x : array
  Two dimensional array with n rows and one column for each independent (exogenous) variable, including the constant
- mean_y : float
  Mean of dependent variable
- std_y : float
  Standard deviation of dependent variable
- pr2 : float
  Pseudo R squared (squared correlation between y and ypred)
- vm : array
  Variance covariance matrix (kxk)
- sig2 : float
  Sigma squared used in computations
- std_err : array
  1xk array of standard errors of the betas
- z_stat : list of tuples
  z statistic; each tuple contains the pair (statistic, p-value), where each is a float
- name_y : str
  Name of dependent variable for use in output
- name_x : list of strings
  Names of independent variables for use in output
- name_w : str
  Name of weights matrix for use in output
- name_ds : str
  Name of dataset for use in output
- title : str
  Name of the regression method used
- name_yend : list of strings (optional)
  Names of endogenous variables for use in output
- name_z : list of strings (optional)
  Names of exogenous and endogenous variables for use in output
- name_q : list of strings (optional)
  Names of external instruments
- name_h : list of strings (optional)
  Names of all instruments used in output
Examples
We first need to import the needed modules, namely numpy to convert the data we read into arrays that spreg understands and libpysal to handle the weights and file management.
>>> import numpy as np
>>> import libpysal
>>> from libpysal.examples import load_example
Open data on NCOVR US County Homicides (3085 areas) using libpysal.io.open(). This is the DBF associated with the NAT shapefile. Note that libpysal.io.open() also reads data in CSV format; since the actual class requires data to be passed in as numpy arrays, the user can read their data in using any method.
>>> nat = load_example('Natregimes')
>>> db = libpysal.io.open(nat.get_path("natregimes.dbf"),'r')
Extract the HR90 column (homicide rates in 1990) from the DBF file and make it the dependent variable for the regression. Note that PySAL requires this to be a numpy array of shape (n, 1) as opposed to the also common shape of (n, ) that other packages accept.
>>> y_var = 'HR90'
>>> y = np.array([db.by_col(y_var)]).reshape(3085,1)
Extract UE90 (unemployment rate) and PS90 (population structure) vectors from the DBF to be used as independent variables in the regression. Other variables can be inserted by adding their names to x_var, such as x_var = ['Var1','Var2',...]. Note that PySAL requires this to be an nxj numpy array, where j is the number of independent variables (not including a constant). By default this model adds a vector of ones to the independent variables passed in.
>>> x_var = ['PS90','UE90']
>>> x = np.array([db.by_col(name) for name in x_var]).T
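The parameter descriptions above note that y and x may also be passed as pandas objects rather than numpy arrays. The following is a minimal sketch of that route, assuming geopandas is installed (it is not required by the original example); the names y_series and x_frame are only illustrative.
>>> import geopandas as gpd                     # assumption: geopandas is available
>>> gdf = gpd.read_file(nat.get_path("natregimes.shp"))
>>> y_series = gdf["HR90"]                      # dependent variable as a pandas Series
>>> x_frame = gdf[["PS90", "UE90"]]             # exogenous variables as a pandas DataFrame
These objects could then be supplied in place of y and x in the calls shown below.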
Since we want to run a spatial error model, we need to specify the spatial weights matrix that includes the spatial configuration of the observations. To do that, we can open an already existing gal file or create a new one. In this case, we will create one from natregimes.shp.
>>> w = libpysal.weights.Rook.from_shapefile(nat.get_path("natregimes.shp"))
Unless there is a good reason not to do it, the weights have to be row-standardized so every row of the matrix sums to one. Among other things, this allows us to interpret the spatial lag of a variable as the average value of the neighboring observations. In PySAL, this can be easily performed in the following way:
>>> w.transform = 'r'
The GMM_Error class can run error models and SARAR models, that is, models that combine a spatial lag and a spatial error term. In this example we will run a simple version of the latter, where we have the spatial effects as well as exogenous variables. Since it is a spatial model, we have to pass in the weights matrix. If we want the names of the variables printed in the output summary, we have to pass them in as well, although this is optional.
>>> from spreg import GMM_Error
>>> model = GMM_Error(y, x, w=w, add_wy=True, name_y=y_var, name_x=x_var, name_ds='NAT')
Once we have run the model, we can explore the output a little. The regression object we have created has many attributes, so take your time to discover them.
>>> print(model.output)
  var_names  coefficients   std_err    zt_stat      prob
0  CONSTANT      2.176007  1.115807   1.950165  0.051156
1      PS90      1.108054  0.207964   5.328096       0.0
2      UE90      0.664362  0.061294   10.83893       0.0
3    W_HR90     -0.066539  0.154395  -0.430964  0.666494
4    lambda      0.765087   0.04268  17.926245       0.0
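Beyond the output table, the attributes documented above can be inspected directly. The following is a brief sketch, with outputs omitted for brevity:
>>> b = model.betas        # kx1 array of estimated coefficients
>>> se = model.std_err     # 1xk array of standard errors of the betas
>>> r2 = model.pr2         # pseudo R squared
>>> print(model.summary)   # full summary of results and diagnostics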
This class also allows the user to run a spatial lag+error model with the extra feature of including non-spatial endogenous regressors. This means that, in addition to the spatial lag and error, we treat some of the variables on the right-hand side of the equation as endogenous and instrument for them. In this case we consider RD90 (resource deprivation) as an endogenous regressor and use FP89 (families below poverty) as its instrument, which is why it is passed through the instruments parameter, q.
>>> yd_var = ['RD90']
>>> yd = np.array([db.by_col(name) for name in yd_var]).T
>>> q_var = ['FP89']
>>> q = np.array([db.by_col(name) for name in q_var]).T
And then we can run and explore the model analogously to the previous combo:
>>> model = GMM_Error(y, x, yend=yd, q=q, w=w, add_wy=True, name_y=y_var, name_x=x_var, name_yend=yd_var, name_q=q_var, name_ds='NAT')
>>> print(model.output)
  var_names  coefficients   std_err    zt_stat      prob
0  CONSTANT       5.44035  0.560476   9.706652       0.0
1      PS90      1.427042    0.1821   7.836572       0.0
2      UE90     -0.075224  0.050031  -1.503544  0.132699
3      RD90      3.316266  0.261269  12.692924       0.0
4    W_HR90      0.200314  0.057433   3.487777  0.000487
5    lambda      0.136933  0.070098   1.953457  0.050765
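With endogenous regressors in the model, the naming attributes listed above record how variables and instruments were grouped. A quick sketch, with outputs omitted:
>>> yend_names = model.name_yend   # names of the endogenous variables
>>> inst_names = model.name_h      # names of all instruments used in the estimation
>>> z_first = model.z_stat[0]      # (statistic, p-value) tuple for the first coefficient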
The class also allows for estimating a GNS model by adding spatial lags of the exogenous variables, using the argument slx_lags:
>>> model = GMM_Error(y, x, w=w, add_wy=True, slx_lags=1, name_y=y_var, name_x=x_var, name_ds='NAT')
>>> print(model.output)
  var_names  coefficients   std_err    zt_stat      prob
0  CONSTANT     -0.554756  0.551765   -1.00542  0.314695
1      PS90       1.09369  0.225895   4.841583  0.000001
2      UE90      0.697393  0.082744   8.428291       0.0
3    W_PS90     -1.533378  0.396651  -3.865811  0.000111
4    W_UE90     -1.107944   0.33523  -3.305028   0.00095
5    W_HR90      1.529277  0.389354   3.927728  0.000086
6    lambda     -0.917928  0.079569  -11.53625       0.0
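Finally, the remaining arguments documented above can be combined in a single call. The sketch below, with output omitted, requests the homoskedastic estimator and uses a list of booleans in slx_vars so that only the first of the two exogenous variables is lagged; consult the underlying estimator functions for the details of the resulting specification.
>>> model_hom = GMM_Error(y, x, w=w, estimator='hom', slx_lags=1,
...                       slx_vars=[True, False], name_y=y_var,
...                       name_x=x_var, name_ds='NAT')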
- __init__(y, x, w, yend=None, q=None, estimator='het', add_wy=False, slx_lags=0, slx_vars='All', vm=False, name_y=None, name_x=None, name_w=None, name_yend=None, name_q=None, name_ds=None, latex=False, hard_bound=False, spat_diag=False, **kwargs)
Methods

__init__(y, x, w[, yend, q, estimator, ...])

Attributes
- property mean_y
- property std_y