spreg.TSLS¶
- class spreg.TSLS(y, x, yend, q, w=None, robust=None, gwk=None, slx_lags=0, slx_vars='All', sig2n_k=False, spat_diag=False, nonspat_diag=True, regimes=None, vm=False, name_y=None, name_x=None, name_yend=None, name_q=None, name_w=None, name_gwk=None, name_ds=None, latex=False, **kwargs)[source]¶
Two stage least squares with results and diagnostics.
- Parameters:
- y
numpy.ndarray or pandas.Series nx1 array for dependent variable
- x
numpy.ndarray or pandas object Two dimensional array with n rows and one column for each independent (exogenous) variable, excluding the constant
- yend
numpy.ndarray or pandas object Two dimensional array with n rows and one column for each endogenous variable
- q
numpy.ndarray or pandas object Two dimensional array with n rows and one column for each external exogenous variable to use as instruments (note: this should not contain any variables from x)
- w
pysal W object Spatial weights object (required if running spatial diagnostics)
- robust
str If ‘white’, then a White consistent estimator of the variance-covariance matrix is given. If ‘hac’, then a HAC consistent estimator of the variance-covariance matrix is given. Default set to None.
- gwk
pysal W object Kernel spatial weights needed for HAC estimation. Note: matrix must have ones along the main diagonal.
- slx_lags
integer Number of spatial lags of X to include in the model specification. If slx_lags>0, the specification becomes of the SLX type (see the sketch after this parameter list).
- slx_vars
either “All” (default) or list of booleans to select the x variables to be lagged
- regimes
list or pandas.Series List of n values with the mapping of each observation to a regime. Assumed to be aligned with ‘x’. For other regimes-specific arguments, see TSLS_Regimes.
- sig2n_k
bool If True, then use n-k to estimate sigma^2. If False, use n.
- spat_diag
bool If True, then compute Anselin-Kelejian test (requires w)
- nonspat_diag
bool If True, then compute non-spatial diagnostics
- vm
bool If True, include variance-covariance matrix in summary results
- name_y
str Name of dependent variable for use in output
- name_x
list of strings Names of independent variables for use in output
- name_yend
list of strings Names of endogenous variables for use in output
- name_q
list of strings Names of instruments for use in output
- name_w
str Name of weights matrix for use in output
- name_gwk
str Name of kernel weights matrix for use in output
- name_ds
str Name of dataset for use in output
- latex
bool Specifies if summary is to be printed in LaTeX format
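The spatial options can be combined in a single call. The following is a minimal, illustrative sketch, not part of the package's worked example further below; the queen contiguity weights and the lag order of 1 are assumptions chosen only for illustration. With a weights object w and slx_lags=1, the spatial lag of each exogenous variable is added and the specification becomes of the SLX type.
>>> import numpy as np
>>> import libpysal
>>> from spreg import TSLS
>>> db = libpysal.io.open(libpysal.examples.get_path("columbus.dbf"), 'r')
>>> y = np.reshape(np.array(db.by_col("CRIME")), (49, 1))
>>> X = np.array([db.by_col("INC")]).T                      # exogenous variable
>>> yd = np.array([db.by_col("HOVAL")]).T                   # endogenous variable
>>> q = np.array([db.by_col("DISCBD")]).T                   # external instrument
>>> w = libpysal.weights.Queen.from_shapefile(libpysal.examples.get_path("columbus.shp"))
>>> w.transform = 'r'
>>> reg_slx = TSLS(y, X, yd, q, w=w, slx_lags=1)            # adds the spatial lag of INC to the exogenous set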
- Attributes:
- output
dataframe regression results pandas dataframe
- summary
str Summary of regression results and diagnostics (note: use in conjunction with the print command)
- betas
array kx1 array of estimated coefficients
- u
array nx1 array of residuals
- predy
array nx1 array of predicted y values
- n
integer Number of observations
- k
integer Number of variables for which coefficients are estimated (including the constant)
- kstar
integer Number of endogenous variables.
- y
array nx1 array for dependent variable
- x
array Two dimensional array with n rows and one column for each independent (exogenous) variable, including the constant
- yend
array Two dimensional array with n rows and one column for each endogenous variable
- q
array Two dimensional array with n rows and one column for each external exogenous variable used as instruments
- z
array nxk array of variables (combination of x and yend)
- h
array nxl array of instruments (combination of x and q)
- robust
str Adjustment for robust standard errors
- mean_y
float Mean of dependent variable
- std_y
float Standard deviation of dependent variable
- vm
array Variance covariance matrix (kxk)
- pr2
float Pseudo R squared (squared correlation between y and ypred)
- utu
float Sum of squared residuals
- sig2
float Sigma squared used in computations
- std_err
array 1xk array of standard errors of the betas
- z_stat
list of tuples z statistic; each tuple contains the pair (statistic, p-value), where each is a float
- ak_test
tuple Anselin-Kelejian test; tuple contains the pair (statistic, p-value)
- dwh
tuple Durbin-Wu-Hausman test; tuple contains the pair (statistic, p-value). Only returned if dwh=True.
- name_y
str Name of dependent variable for use in output
- name_x
list of strings Names of independent variables for use in output
- name_yend
list of strings Names of endogenous variables for use in output
- name_z
list of strings Names of exogenous and endogenous variables for use in output
- name_q
list of strings Names of external instruments
- name_h
list of strings Names of all instruments used in output
- name_w
str Name of weights matrix for use in output
- name_gwk
str Name of kernel weights matrix for use in output
- name_ds
str Name of dataset for use in output
- title
str Name of the regression method used
- sig2n
float Sigma squared (computed with n in the denominator)
- sig2n_k
float Sigma squared (computed with n-k in the denominator)
- hth
float \(H'H\)
- hthi
float \((H'H)^{-1}\)
- varb
array \((Z'H (H'H)^{-1} H'Z)^{-1}\)
- zthhthi
array \(Z'H(H'H)^{-1}\)
- pfora1a2
array \(n(zthhthi)'varb\)
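For the non-robust case, these matrix attributes combine in the standard two stage least squares formulas. As a sketch (the exact internal computation may differ), the coefficient estimates follow \(\hat{\beta} = (Z'H(H'H)^{-1}H'Z)^{-1} Z'H(H'H)^{-1}H'y\), so that betas corresponds to \(\hat{\beta}\), varb to \((Z'H(H'H)^{-1}H'Z)^{-1}\), and vm to \(\sigma^2\) times varb, with the value of sigma squared taken as sig2n or sig2n_k according to the sig2n_k argument.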
Examples
We first need to import the needed modules, namely numpy to convert the data we read into arrays that spreg understands and pysal to perform all the analysis.
>>> import numpy as np
>>> import libpysal
Open data on Columbus neighborhood crime (49 areas) using libpysal.io.open(). This is the DBF associated with the Columbus shapefile. Note that libpysal.io.open() also reads data in CSV format; since the actual class requires data to be passed in as numpy arrays, the user can read their data in using any method.
>>> db = libpysal.io.open(libpysal.examples.get_path("columbus.dbf"),'r')
Extract the CRIME column (crime rates) from the DBF file and make it the dependent variable for the regression. Note that PySAL requires this to be an numpy array of shape (n, 1) as opposed to the also common shape of (n, ) that other packages accept.
>>> y = np.array(db.by_col("CRIME"))
>>> y = np.reshape(y, (49,1))
Extract INC (income) vector from the DBF to be used as independent variables in the regression. Note that PySAL requires this to be an nxj numpy array, where j is the number of independent variables (not including a constant). By default this model adds a vector of ones to the independent variables passed in, but this can be overridden by passing constant=False.
>>> X = []
>>> X.append(db.by_col("INC"))
>>> X = np.array(X).T
In this case we consider HOVAL (home value) to be an endogenous regressor. We tell the model that this is so by passing it in a different parameter from the exogenous variables (x).
>>> yd = []
>>> yd.append(db.by_col("HOVAL"))
>>> yd = np.array(yd).T
Because we have endogenous variables, to obtain a correct estimate of the model, we need to instrument for HOVAL. We use DISCBD (distance to the CBD) for this and hence put it in the instruments parameter, ‘q’.
>>> q = []
>>> q.append(db.by_col("DISCBD"))
>>> q = np.array(q).T
We are all set with the preliminaries; we are ready to run the model. In this case, we will need the variables (exogenous and endogenous) and the instruments. If we want to have the names of the variables printed in the output summary, we will have to pass them in as well, although this is optional.
>>> from spreg import TSLS
>>> reg = TSLS(y, X, yd, q, name_x=['inc'], name_y='crime', name_yend=['hoval'], name_q=['discbd'], name_ds='columbus')
>>> print(reg.betas.T)
[[88.46579584 0.5200379 -1.58216593]]
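The fitted object exposes the attributes documented above. As a hedged continuation of this example (the queen weights construction is an assumption made for illustration, and numeric output is omitted because it depends on the data and library version), one can inspect the standard errors and pseudo R squared directly, or re-estimate with White standard errors and spatial diagnostics to obtain the Anselin-Kelejian test:
>>> se = reg.std_err                      # 1xk array of standard errors of the betas
>>> r2 = reg.pr2                          # pseudo R squared
>>> w = libpysal.weights.Queen.from_shapefile(libpysal.examples.get_path("columbus.shp"))
>>> w.transform = 'r'
>>> reg_w = TSLS(y, X, yd, q, w=w, spat_diag=True, robust='white',
...              name_x=['inc'], name_y='crime', name_yend=['hoval'],
...              name_q=['discbd'], name_w='columbus queen', name_ds='columbus')
>>> ak = reg_w.ak_test                    # tuple with the (statistic, p-value) of the Anselin-Kelejian test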
- __init__(y, x, yend, q, w=None, robust=None, gwk=None, slx_lags=0, slx_vars='All', sig2n_k=False, spat_diag=False, nonspat_diag=True, regimes=None, vm=False, name_y=None, name_x=None, name_yend=None, name_q=None, name_w=None, name_gwk=None, name_ds=None, latex=False, **kwargs)[source]¶
Methods
__init__(y, x, yend, q[, w, robust, gwk, ...])
Attributes
- property mean_y¶
- property pfora1a2¶
- property sig2n¶
- property sig2n_k¶
- property std_y¶
- property utu¶
- property vm¶