spreg.GM_Error_Hom_Regimes¶
- class spreg.GM_Error_Hom_Regimes(y, x, regimes, w, max_iter=1, epsilon=1e-05, A1='het', cores=False, constant_regi='many', cols2regi='all', regime_err_sep=False, regime_lag_sep=False, slx_lags=0, slx_vars='all', vm=False, name_y=None, name_x=None, name_w=None, name_ds=None, name_regimes=None, latex=False, hard_bound=False)[source]¶
- GMM method for a spatial error model with homoskedasticity, with regimes, results and diagnostics; based on Drukker et al. (2013) [DEP13], following Anselin (2011) [Ans11]. - Parameters:
- y : numpy.ndarray or pandas.Series
- nx1 array for dependent variable
- x : numpy.ndarray or pandas object
- Two dimensional array with n rows and one column for each independent (exogenous) variable, excluding the constant
- regimes : list or pandas.Series
- List of n values with the mapping of each observation to a regime. Assumed to be aligned with ‘x’.
- w : pysal W object
- Spatial weights object
- constant_regi : string
- Switcher controlling the constant term setup. It may take the following values:
  - ‘one’: a vector of ones is appended to x and held constant across regimes.
  - ‘many’: a vector of ones is appended to x and considered different per regime (default).
- cols2regi : list, ‘all’
- Argument indicating whether each column of x should be considered as varying by regime or held constant across regimes. If a list, it must contain k booleans, one per variable (True if the variable varies by regime, False if it is held constant). If ‘all’ (default), all the variables vary by regime. A short sketch combining these options follows this parameter list.
- regime_err_sep : boolean
- If True, a separate regression is run for each regime.
- regime_lag_sep : boolean
- Always False, kept for consistency, ignored.
- slx_lags : integer
- Number of spatial lags of X to include in the model specification. If slx_lags>0, the specification becomes of the SLX-Error type.
- slx_vars : ‘all’ (default) or list of booleans
- Either ‘all’ (default) or a list of booleans to select the x variables to be lagged.
- max_iter : int
- Maximum number of iterations of steps 2a and 2b from [ADKP10]. Note: epsilon provides an additional stop condition.
- epsilon : float
- Minimum change in lambda required to stop iterations of steps 2a and 2b from [ADKP10]. Note: max_iter provides an additional stop condition.
- A1 : str
- If A1=’het’, then the matrix A1 is defined as in [ADKP10]. If A1=’hom’, then as in [Ans11]. If A1=’hom_sc’, then as in [DEP13] and [DPR13].
- vm : bool
- If True, include variance-covariance matrix in summary results
- cores : bool
- Specifies whether multiprocessing is to be used. Default: no multiprocessing (cores=False). Note: multiprocessing may not work on all platforms.
- name_y : str
- Name of dependent variable for use in output
- name_x : list of strings
- Names of independent variables for use in output
- name_w : str
- Name of weights matrix for use in output
- name_ds : str
- Name of dataset for use in output
- name_regimes : str
- Name of regime variable for use in the output
- latex : bool
- Specifies if summary is to be printed in LaTeX format
- hard_bound : bool
- If True, raises an exception if the estimated spatial autoregressive parameter is outside its maximum/minimum bounds.
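- As a minimal illustrative sketch (not part of the original documentation) of how these options can be combined, the call below makes the first x variable regime-specific and holds the second one global; it assumes y, x (two columns), regimes and w have been prepared as in the Examples section further down, and output is omitted.

>>> from spreg import GM_Error_Hom_Regimes
>>> reg_opts = GM_Error_Hom_Regimes(y, x, regimes, w=w,
...                                 constant_regi='many',     # one constant per regime
...                                 cols2regi=[True, False],  # first x column varies by regime, second held constant
...                                 A1='hom',                 # A1 matrix as in Anselin (2011)
...                                 slx_lags=0,               # set >0 for an SLX-Error specification
...                                 name_y='HR90', name_x=['PS90', 'UE90'], name_ds='NAT')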
- Attributes:
- output : dataframe
- Regression results pandas dataframe
- summary : str
- Summary of regression results and diagnostics (note: use in conjunction with the print command)
- betas : array
- kx1 array of estimated coefficients
- u : array
- nx1 array of residuals
- e_filtered : array
- nx1 array of spatially filtered residuals
- predy : array
- nx1 array of predicted y values
- n : integer
- Number of observations
- k : integer
- Number of variables for which coefficients are estimated (including the constant). Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
- y : array
- nx1 array for dependent variable
- x : array
- Two dimensional array with n rows and one column for each independent (exogenous) variable, including the constant. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
- iter_stop : str
- Stop criterion reached during iteration of steps 2a and 2b from [ADKP10]. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
- iteration : integer
- Number of iterations of steps 2a and 2b from [ADKP10]. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
- mean_y : float
- Mean of dependent variable
- std_y : float
- Standard deviation of dependent variable
- pr2 : float
- Pseudo R squared (squared correlation between y and ypred). Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
- vm : array
- Variance covariance matrix (kxk)
- sig2 : float
- Sigma squared used in computations. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
- std_err : array
- 1xk array of standard errors of the betas. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
- z_stat : list of tuples
- z statistic; each tuple contains the pair (statistic, p-value), where each is a float. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
- xtx : float
- X'X. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
- name_y : str
- Name of dependent variable for use in output
- name_x : list of strings
- Names of independent variables for use in output
- name_w : str
- Name of weights matrix for use in output
- name_ds : str
- Name of dataset for use in output
- name_regimes : str
- Name of regime variable for use in the output
- title : str
- Name of the regression method used. Only available in dictionary ‘multi’ when multiple regressions (see ‘multi’ below for details)
- regimes : list
- List of n values with the mapping of each observation to a regime. Assumed to be aligned with ‘x’.
- constant_regi : string
- Ignored if regimes=False. Constant option for regimes. Switcher controlling the constant term setup. It may take the following values:
  - ‘one’: a vector of ones is appended to x and held constant across regimes.
  - ‘many’: a vector of ones is appended to x and considered different per regime (default).
- cols2regi : list, ‘all’
- Ignored if regimes=False. Argument indicating whether each column of x should be considered as varying by regime or held constant across regimes. If a list, it must contain k booleans, one per variable (True if the variable varies by regime, False if it is held constant). If ‘all’, all the variables vary by regime.
- regime_err_sep : boolean
- If True, a separate regression is run for each regime.
- kr : int
- Number of variables/columns to be “regimized” or subject to change by regime. These will result in one parameter estimate by regime for each variable (i.e. nr parameters per variable)
- kf : int
- Number of variables/columns to be considered fixed or global across regimes and hence only obtain one parameter estimate
- nr : int
- Number of different regimes in the ‘regimes’ list
- multi : dictionary
- Only available when multiple regressions are estimated, i.e. when regime_err_sep=True and no variable is fixed across regimes. Contains all attributes of each individual regression (a short access sketch follows this list).
 
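- As a hedged sketch (not part of the original documentation) of inspecting the ‘multi’ dictionary described above: when regime_err_sep=True and all variables vary by regime, each entry is assumed to hold the fitted regression for one regime, keyed by the regime label. The data objects and import are those built in the Examples section below; output is omitted.

>>> reg_sep = GM_Error_Hom_Regimes(y, x, regimes, w=w, regime_err_sep=True,
...                                name_y='HR90', name_x=['PS90', 'UE90'], name_ds='NAT')
>>> for regime_label, sub_reg in reg_sep.multi.items():   # assumed: one fitted regression per regime label
...     betas_r = sub_reg.betas                           # coefficient estimates for this regime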
- Examples
- We first need to import the needed modules, namely numpy to convert the data we read into arrays that spreg understands and pysal to perform all the analysis.

>>> import numpy as np
>>> import libpysal

- Open data on NCOVR US County Homicides (3085 areas) using libpysal.io.open(). This is the DBF associated with the NAT shapefile. Note that libpysal.io.open() also reads data in CSV format; since the actual class requires data to be passed in as numpy arrays, the user can read their data in using any method.

>>> db = libpysal.io.open(libpysal.examples.get_path("NAT.dbf"),'r')

- Extract the HR90 column (homicide rates in 1990) from the DBF file and make it the dependent variable for the regression. Note that PySAL requires this to be a numpy array of shape (n, 1) as opposed to the also common shape of (n, ) that other packages accept.

>>> y_var = 'HR90'
>>> y = np.array([db.by_col(y_var)]).reshape(3085,1)

- Extract UE90 (unemployment rate) and PS90 (population structure) vectors from the DBF to be used as independent variables in the regression. Other variables can be inserted by adding their names to x_var, such as x_var = [‘Var1’, ‘Var2’, …]. Note that PySAL requires this to be an nxj numpy array, where j is the number of independent variables (not including a constant). By default this model adds a vector of ones to the independent variables passed in.

>>> x_var = ['PS90','UE90']
>>> x = np.array([db.by_col(name) for name in x_var]).T

- The different regimes in this data are given according to the North and South dummy (SOUTH).

>>> r_var = 'SOUTH'
>>> regimes = db.by_col(r_var)

- Since we want to run a spatial error model, we need to specify the spatial weights matrix that includes the spatial configuration of the observations. To do that, we can open an already existing gal file or create a new one. In this case, we will create one from NAT.shp.

>>> w = libpysal.weights.Rook.from_shapefile(libpysal.examples.get_path("NAT.shp"))

- Unless there is a good reason not to, the weights have to be row-standardized so every row of the matrix sums to one. Among other things, this allows us to interpret the spatial lag of a variable as the average value of the neighboring observations. In PySAL, this can be easily performed in the following way:

>>> w.transform = 'r'

- We are all set with the preliminaries, so we are good to run the model. In this case, we will need the variables and the weights matrix. If we want to have the names of the variables printed in the output summary, we will have to pass them in as well, although this is optional.

>>> from spreg import GM_Error_Hom_Regimes
>>> reg = GM_Error_Hom_Regimes(y, x, regimes, w=w, name_y=y_var, name_x=x_var, name_ds='NAT')

- Once we have run the model, we can explore the output a little. The regression object we have created has many attributes, so take your time to discover them. This class offers an error model that assumes homoskedasticity but, unlike the models in spreg.error_sp, allows for inference on the spatial parameter. This is why you obtain as many coefficient estimates as standard errors, which you can calculate by taking the square root of the diagonal of the variance-covariance matrix of the parameters.
- Alternatively, we can have a summary of the output by typing: model.summary

>>> print(reg.name_x)
['0_CONSTANT', '0_PS90', '0_UE90', '1_CONSTANT', '1_PS90', '1_UE90', 'lambda']

>>> print(np.around(reg.betas,4))
[[0.069 ]
 [0.7885]
 [0.5398]
 [5.0948]
 [1.1965]
 [0.6018]
 [0.4104]]

>>> print(np.sqrt(reg.vm.diagonal()))
[0.39105853 0.15664624 0.05254328 0.48379958 0.20018799 0.05834139 0.01882401]
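- As a final illustrative sketch (not part of the original example), the summary and the results table stored in the attributes documented above can be inspected directly; printed output is omitted here.

>>> summary_text = reg.summary   # full summary of results and diagnostics; view with print(summary_text)
>>> results_df = reg.output      # regression results as a pandas dataframe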
- __init__(y, x, regimes, w, max_iter=1, epsilon=1e-05, A1='het', cores=False, constant_regi='many', cols2regi='all', regime_err_sep=False, regime_lag_sep=False, slx_lags=0, slx_vars='all', vm=False, name_y=None, name_x=None, name_w=None, name_ds=None, name_regimes=None, latex=False, hard_bound=False)[source]¶
- Methods
- __init__(y, x, regimes, w[, max_iter, ...])
- Attributes
- property mean_y¶
 - property std_y¶