How To Interpret ARDL Results?

How to interpret ARDL results is a frequently asked question in our social media groups for Econometrics, Statistics and Research. Some students ask how to interpret ARDL results using Stata, and many others ask how to interpret ARDL results using Eviews. We will use Eviews for the estimation in this tutorial, but the interpretation does not depend on the software, only on the statistics and estimates we obtain, so results from Stata can be interpreted in exactly the same way.

How To Write ARDL Equations?

We can help you learn how to interpret ARDL results in the following few steps. But first we should understand, in a few lines, what ARDL is and how to estimate it in Eviews. The ARDL (autoregressive distributed lag) model is a regression model used to test for and estimate cointegrating relationships between time series variables. Using the bounds test instead of the Johansen-Juselius cointegration test, the presence of a long-run relationship between the time series variables can be assessed, and with an F statistic. The ARDL equation is given in the following:

yt = β0 + β1yt-1 + ... + βpyt-p + α0xt + α1xt-1 + α2xt-2 + ... + αqxt-q + εt      (1)
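To make the mechanics of equation (1) concrete, here is a minimal sketch in plain Python (the coefficient values are hypothetical, chosen only for illustration) that generates an ARDL(1,1) series by applying the recursion directly:

```python
import random

random.seed(42)

# Hypothetical ARDL(1,1) coefficients:
# y_t = b0 + b1*y_{t-1} + a0*x_t + a1*x_{t-1} + e_t
b0, b1, a0, a1 = 1.0, 0.5, 0.3, 0.2

T = 200
x = [random.gauss(0, 1) for _ in range(T)]  # an arbitrary regressor series
y = [0.0] * T
for t in range(1, T):
    e = random.gauss(0, 0.1)
    y[t] = b0 + b1 * y[t - 1] + a0 * x[t] + a1 * x[t - 1] + e

# With |b1| < 1 the autoregressive part is stable; the long-run multiplier
# of x on y is (a0 + a1) / (1 - b1) = 0.5 / 0.5 = 1.0
lr_multiplier = (a0 + a1) / (1 - b1)
print(lr_multiplier)  # 1.0
```

With |β1| < 1 the dynamics are stable, and the long-run multiplier (α0 + α1)/(1 − β1) is exactly the kind of quantity the cointegrating equation later reports.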

The ECM equation from ARDL setup is:

Δyt = β0 + Σ βiΔyt-i + Σ γjΔx1t-j + Σ δkΔx2t-k + φzt-1 + et      (2)

The conditional ECM (Pesaran et al. 2001) is written like:

Δyt = β0 + Σ βiΔyt-i + Σ γjΔx1t-j + Σ δkΔx2t-k + θ0yt-1 + θ1x1t-1 + θ2x2t-1 + et      (3)

The Cointegrated Equation can be written as:

yt = α0 + α1x1t + α2x2t + vt      (4)

and the bounds test is conducted on the level coefficients of equation (3) (H0: θ0 = θ1 = θ2 = 0). The critical values for this F statistic should be taken from the tables in Pesaran et al. (2001) or Narayan (2005).
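The decision rule of the bounds test can be sketched as a small helper function (plain Python; the bound values in the example call are illustrative only and should always be read from the published tables for your significance level, intercept/trend case and number of regressors):

```python
def bounds_test_decision(f_stat, lower, upper):
    """Pesaran et al. (2001) bounds test decision rule.

    lower/upper are the I(0) and I(1) critical bounds read from the
    published tables.
    """
    if f_stat > upper:
        return "reject H0: evidence of a long-run (cointegrating) relationship"
    elif f_stat < lower:
        return "fail to reject H0: no long-run relationship"
    else:
        return "inconclusive: F falls between the bounds"

# Illustrative bounds only (always read the actual values from the tables)
print(bounds_test_decision(6.1, 3.79, 4.85))
```

If the computed F statistic falls between the two bounds, the test is inconclusive and the orders of integration of the regressors must be established before a conclusion can be drawn.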

Now we will interpret the estimated results from an actual Eviews output. To estimate ARDL using Eviews, first open the workfile (the data) in Eviews. Then click Quick, then Estimate Equation, and a small window will appear. Select ARDL in the Method section of this window, near the bottom and below the variable-list textbox. Once ARDL is selected, the options in the window change. Enter the list of variables in the order dependent variable, independent variables, intercept. The remaining options can be set in the same window: whether to include an intercept, a trend, or both; the maximum lags for the dependent variable and the regressors; and the information criterion used for model selection.

Eviews allows us to view the specification as a regression equation with a general list of coefficients, and then with the estimated values substituted in, like this:

DEBT = C(1)*DEBT(-1) + C(2)*DEBT(-2) + C(3)*DEBT(-3) + C(4)*GDP + C(5)*GDP(-1) + C(6)*GFC + C(7)*GFC(-1) + C(8)*GFC(-2) + C(9)*GFC(-3) + C(10)*TRADE + C(11)

DEBT = 0.9172*DEBT(-1) - 0.4375*DEBT(-2) + 0.3484*DEBT(-3) - 0.0614*GDP - 0.0955*GDP(-1) + 0.3101*GFC + 0.1751*GFC(-1) + 0.7765*GFC(-2) + 0.3646*GFC(-3) - 2864*TRADE + 1375446667.23

The cointegrating equation from the model above becomes (with the coefficients truncated at 4 decimal places):

D(DEBT) = 1375446667.2237 -0.1718*DEBT(-1) -0.1569*GDP(-1) + 1.6265*GFC(-1) -28649248.9413*TRADE** + 0.0890*D(DEBT(-1)) -0.3484*D(DEBT(-2)) -0.0614*D(GDP) + 0.3101*D(GFC) -1.1411*(DEBT - (-0.9135*GDP(-1) + 9.4649*GFC(-1) -166714193.3055*TRADE(-1) + 8003926456.4992 ) -0.3646*D(GFC(-2)) )
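The long-run coefficients inside the brackets of the cointegrating equation can be recovered by hand: each equals the sum of a regressor's coefficients in the level equation divided by one minus the sum of the lagged dependent variable's coefficients. A quick check in plain Python against the GFC value above (the small discrepancy comes from using the 4-decimal rounded coefficients):

```python
# Coefficients from the estimated level equation above
debt_lags = [0.9172, -0.4375, 0.3484]         # DEBT(-1), DEBT(-2), DEBT(-3)
gfc_coefs = [0.3101, 0.1751, 0.7765, 0.3646]  # GFC, GFC(-1), GFC(-2), GFC(-3)

# Long-run multiplier of GFC on DEBT: sum(alpha_i) / (1 - sum(beta_j))
lr_gfc = sum(gfc_coefs) / (1 - sum(debt_lags))
print(round(lr_gfc, 2))  # 9.46, close to the 9.4649 Eviews reports
```

Eviews computes the same ratio from the unrounded coefficients, which is why its 9.4649 differs slightly from the hand calculation.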

Then we can get the first estimation table which looks like this:

Interpret ARDL Results Main Table

In these results, the first part summarizes the information Eviews has worked with. The dependent variable and the method are reported in the first two lines. Then the estimation time, the sample period and the number of observations are given. We can also see that Eviews estimated several candidate regression models and selected one using an information criterion: AIC selected an ARDL model with 3 lags of the dependent variable (DEBT), while the independent variables (GDP, GFC and TRADE) entered with their levels and selected lags of 1, 3 and 0 respectively. Zero lags for an independent variable means it enters the list of regressors in level only. The second part of the output gives the coefficient values, standard errors and t statistics with p-values. We can interpret this as combined AR and DL equations, the way we do in ADL models. We can either interpret the coefficients as ordinary regression coefficients, keeping in mind the nature of the X and Y variables (logs, percentages, etc.), or we can conduct an F test to determine jointly whether X and its lags have any effect on Y, which works like a causality test (though it is not what Granger causality is). We can generalize this step to conduct a Granger causality test on a given set of coefficients as well, upon confirming that the hypothesis being tested matches the one Granger causality is based upon (a hint for those who wish to conduct Granger causality after ARDL).

The main results of the ARDL regression model are given in the central table of the Eviews output window. This portion contains a few columns: the variables in the model with their lags, coefficient values, standard errors, t (or z) statistics, and the corresponding p-values. We interpret these as conventional regression coefficients are interpreted, according to whether the variables are in logs or in their original units. One has to keep in mind the sign of a coefficient if the objective is an inferential study, and its size if the objective is a predictive study. A positive coefficient on a level variable means the current value of X affects the current level of Y positively; a negative sign means the current value of X affects the current level of Y negatively. The coefficient on the first lag of X measures the effect of last period's value of X on the current value of Y, or equivalently how a change in X today affects Y in the next period. The coefficient on the second lag means a change in X today will affect Y two periods from now, or a change in X two periods ago affects Y today. These effects too can be positive or negative, just as with the level and first-lag terms.

The standard errors of the coefficients describe the sampling distribution of the coefficients: they estimate how much a coefficient would vary across repeated samples if the same model were estimated on each sample. This gives a margin of error, or limits within which the coefficient value can be expected to vary on average. We need the standard error to compute the t (or z) statistic used for hypothesis tests on the coefficients.

The next column is the t statistic. We use this column to test the null hypothesis that a coefficient is equal to zero; the statistic is simply the coefficient divided by its standard error.
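As a toy illustration (the numbers below are made up, not taken from the output above), the test reduces to dividing a coefficient by its standard error and comparing the result against a critical value:

```python
# t statistic for H0: beta = 0 (illustrative coefficient and standard error)
coef, se = 0.9172, 0.21
t_stat = coef / se

# For a reasonably large sample, |t| > 1.96 rejects H0 at the 5% level
print(round(t_stat, 2), abs(t_stat) > 1.96)
```

In small samples the exact critical value comes from the t distribution with the model's residual degrees of freedom rather than the normal approximation used here.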

 

Interpret ARDL Results: Some Basic Results:

 

Interpret ARDL Results for Short Run Relationship

Interpret ARDL Results for Long Run Relationship

References:

Pesaran, M. H. and Y. Shin, 1999. An autoregressive distributed lag modelling approach to cointegration analysis. Chapter 11 in S. Strom (ed.), Econometrics and Economic Theory in the 20th Century: The Ragnar Frisch Centennial Symposium. Cambridge University Press, Cambridge. (Discussion Paper version.)
Narayan, P. K., 2005. The saving and investment nexus for China: evidence from cointegration tests. Applied Economics, 37, 1979-1990.

Pesaran, M. H., Shin, Y. and Smith, R. J., 2001. Bounds testing approaches to the analysis of level relationships. Journal of Applied Econometrics, 16, 289-326.

Pesaran, M. H. and R. P. Smith, 1998. Structural analysis of cointegrating VARs. Journal of Economic Surveys, 12, 471-505.

Toda, H. Y. and T. Yamamoto, 1995. Statistical inferences in vector autoregressions with possibly integrated processes. Journal of Econometrics, 66, 225-250.

ARDL Lag Selection

A commonly asked question is about ARDL lag selection, i.e. how to select the lag structure for ARDL models. In this simple tutorial using Eviews, we demonstrate and explain how to select the ARDL model based on various criteria. Note that Eviews selects the ARDL model automatically, so we explain the model selection criteria in a little more detail to be clear about what lag selection is.

Information Criteria

Eviews reports the log likelihood, the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Hannan-Quinn Information Criterion and the adjusted R-squared (not strictly an information criterion). Since the various criteria were proposed in different settings, the choice of an information criterion for lag selection in a time series model should be made carefully.

One can read more about AIC vs BIC in a reply by The Methodology Center to a user named Adrift:

Dear Adrift,

As you know, AIC and BIC are both penalized-likelihood criteria. They are sometimes used for choosing best predictor subsets in regression and often used for comparing nonnested models, which ordinary statistical tests cannot do. The AIC or BIC for a model is usually written in the form [-2logL + kp], where L is the likelihood function, p is the number of parameters in the model, and k is 2 for AIC and log(n) for BIC.

AIC is an estimate of a constant plus the relative distance between the unknown true likelihood function of the data and the fitted likelihood function of the model, so that a lower AIC means a model is considered to be closer to the truth. BIC is an estimate of a function of the posterior probability of a model being true, under a certain Bayesian setup, so that a lower BIC means that a model is considered to be more likely to be the true model. Both criteria are based on various assumptions and asymptotic approximations. Each, despite its heuristic usefulness, has therefore been criticized as having questionable validity for real world data. But despite various subtle theoretical differences, their only difference in practice is the size of the penalty; BIC penalizes model complexity more heavily. The only way they should disagree is when AIC chooses a larger model than BIC.

AIC and BIC are both approximately correct according to a different goal and a different set of asymptotic assumptions. Both sets of assumptions have been criticized as unrealistic. Understanding the difference in their practical behavior is easiest if we consider the simple case of comparing two nested models. In such a case, several authors have pointed out that IC’s become equivalent to likelihood ratio tests with different alpha levels. Checking a chi-squared table, we see that AIC becomes like a significance test at alpha=.16, and BIC becomes like a significance test with alpha depending on sample size, e.g., .13 for n = 10, .032 for n = 100, .0086 for n = 1000, .0024 for n = 10000. Remember that power for any given alpha is increasing in n. Thus, AIC always has a chance of choosing too big a model, regardless of n. BIC has very little chance of choosing too big a model if n is sufficient, but it has a larger chance than AIC, for any given n, of choosing too small a model.

So what’s the bottom line? In general, it might be best to use AIC and BIC together in model selection. For example, in selecting the number of latent classes in a model, if BIC points to a three-class model and AIC points to a five-class model, it makes sense to select from models with 3, 4 and 5 latent classes. AIC is better in situations when a false negative finding would be considered more misleading than a false positive, and BIC is better in situations where a false positive is as misleading as, or more misleading than, a false negative.
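The alpha levels quoted in the reply above can be reproduced with Python's standard library alone: when comparing two nested models differing by one parameter, an information criterion prefers the larger model exactly when the likelihood-ratio statistic exceeds the penalty k, and for a chi-square with 1 degree of freedom the tail probability is P(χ² > c) = 2(1 − Φ(√c)):

```python
import math
from statistics import NormalDist

def chi2_sf_df1(c):
    """P(chi-square with 1 df > c), computed via the normal distribution."""
    return 2 * (1 - NormalDist().cdf(math.sqrt(c)))

# AIC penalty: k = 2 per extra parameter -> implied alpha of about 0.16
print(round(chi2_sf_df1(2), 2))  # 0.16

# BIC penalty: k = log(n) -> implied alpha shrinks with the sample size
for n in (10, 100, 1000, 10000):
    print(n, round(chi2_sf_df1(math.log(n)), 4))
```

Running this reproduces the values in the reply: roughly .13 for n = 10, .032 for n = 100, .0086 for n = 1000 and .0024 for n = 10000.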

So we can conclude that, when choosing between AIC and BIC for ARDL lag selection, one should rely not merely on references from the literature but also on the reason each criterion was proposed.
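As a worked sketch of the selection step itself (the candidate models and log-likelihood values below are hypothetical), both criteria follow the −2logL + kp formula quoted above, and the model with the smallest value wins:

```python
import math

def aic(loglik, p):
    """Akaike Information Criterion: -2logL + 2p."""
    return -2 * loglik + 2 * p

def bic(loglik, p, n):
    """Bayesian Information Criterion: -2logL + log(n)*p."""
    return -2 * loglik + math.log(n) * p

# Hypothetical candidate ARDL specifications: (log-likelihood, parameters)
candidates = {
    "ARDL(1,1)": (-250.0, 4),
    "ARDL(2,2)": (-247.5, 6),
    "ARDL(4,4)": (-246.0, 10),
}
n = 100  # sample size

best_aic = min(candidates, key=lambda m: aic(*candidates[m]))
best_bic = min(candidates, key=lambda m: bic(candidates[m][0], candidates[m][1], n))
print(best_aic, best_bic)  # ARDL(2,2) ARDL(1,1)
```

Note how BIC's heavier log(n) penalty picks the smaller ARDL(1,1) while AIC accepts the extra lags of ARDL(2,2), exactly the disagreement pattern described above.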

ARDL Lag Selection using Eviews

In Eviews, we can proceed as follows to select the lags for a proposed ARDL model:

  1. Open the time series data in Eviews workfile
  2. Test all variables to be included in the model for unit roots, to make sure none is I(2), i.e. none has a unit root in first differences (equivalently, is stationary only in second differences).
  3. Then estimate the ARDL equation, selecting your dependent variable (DV) and all the independent variables (IVs). Here we would like to note that stationarity of the DV (i.e. the DV being I(0)) before estimating the ARDL model is not a necessary condition, although this has been asked many times. In the literature, most papers use I(1) variables; the model specification itself puts the DV in differences, so it becomes stationary by construction when we estimate an ARDL model with an I(1) dependent variable.
  4. Once the ARDL model has been estimated, click on the View menu of the results window.
  5. Click on Model Selection Summary.
  6. Now we have two options: view the ARDL lag selection as a table listing all the candidate ARDL models with the corresponding selection criteria (LogL, AIC, BIC, etc.), or view a graph of the top-ranked models under the criterion we chose.
  7. Either way, we can read off the lag selection criteria and determine the best model based on the lowest AIC or lowest BIC, following the discussion above of when to use AIC versus BIC.

We hope this simple introduction to ARDL lag selection will help you determine a feasible ARDL model in the future. For more details, enroll in one of our Econometrics courses to develop similar critical skills in applied econometrics research.

If you need Assistance in Data Analysis for writing your PhD Thesis or MS Dissertation, hire Top Econometrics Freelancer here.

Nonlinear ARDL using Stata and Eviews

Nonlinear ARDL using Stata and Eviews is part of our video series of Econometrics workshops, conducted at the AnEc Center for Econometrics Research. The workshop on Nonlinear ARDL using Stata and Eviews was held on September 10, 2017 and presented by Professor Anees, who works as a senior econometrician and has been a top freelancer in Econometrics.

The free econometrics workshops are conducted monthly. You can register your email to be alerted about the next free workshops, which are held through our state-of-the-art, high-quality video conferencing tool. The workshops also offer free certificates that can be attached to your LinkedIn profile and are fully verifiable.

The Nonlinear ARDL using Stata and Eviews workshop included a discussion of the theoretical grounds for using nonlinear ARDL to test causality hypotheses. The key motivation for using NARDL is that the effect of a positive change in x on y need not mirror the effect of a negative change in x. To capture this, the authors of NARDL introduced the concept of asymmetric causality, which they initially captured through the nonlinear ARDL specification.

In this video tutorial, we demonstrate how to use Nonlinear ARDL using Stata and Eviews. Our key motivation for this free workshop in Econometrics is based on the requests from our PhD students in Economics and Finance who are bound to apply mostly the ARDL or NARDL for their time series data analysis to write their thesis or publication quality research papers. We are always happy to help our students learn the latest econometric methods and tools to conduct data analysis for writing high impact research reports.

Using Nonlinear ARDL helps our students to conduct data analysis for writing thesis as well as research papers with higher ratio of acceptability.

Watch this video of the complete Econometrics workshop on Nonlinear ARDL using Stata and Eviews. Follow our Youtube channel for more videos on Stata and Eviews. You can also request private and instructor led online course in Advanced Econometric Modeling here.

Nonlinear ARDL using Eviews

Nonlinear ARDL using Eviews or NARDL using Eviews

This simple video tutorial on Nonlinear ARDL (NARDL) using Eviews is dedicated to Hassan Hanif, who originally wrote an article on NARDL using Eviews on his blog. The key steps in estimating a nonlinear ARDL model in Eviews are given below the video and in the following sections of this page.
Watch Video


1. Test that none of the variables has a unit root in second differences (i.e. none is I(2)).
2. Construct the positive and negative partial (cumulative) sums of the changes in your explanatory variables.
3. Create the differences of these partial sums and of the dependent variable.
4. Estimate a stepwise least squares (STEPLS) regression model.
5. Test for cointegration via a Wald test on the level coefficients, using the Pesaran et al. (2001) critical values.
6. Determine the asymmetric causality from the NARDL estimates.

We hope this video tutorial will further simplify and save your time to run Nonlinear ARDL using Eviews or NARDL using Eviews.

If you need a complete training in Applied Econometrics Research or Advanced Econometric Modeling, click on www.aneconomist.com or copy paste it to your internet browser and enroll for a private and instructor led online course with verifiable certificate.

The Eviews code you might need to create the differences and partial sums, and the STEPLS specification, are here:
genr ddebt=debt-debt(-1)
genr dgfc = gfc-gfc(-1)
genr pos = dgfc >=0
genr dgfc_p = pos*dgfc
genr dgfc_n = (1-pos)*dgfc
genr gfc_p = @cumsum(dgfc_p)
genr gfc_n = @cumsum(dgfc_n)

d(debt) c debt(-1) gfc_p(-1) gfc_n(-1)
ddebt(-1 to -4) dgfc_p(-0 to -4) dgfc_n(-0 to -4)
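The positive/negative decomposition that the `genr` lines build can be sketched in plain Python as well (toy integer data for illustration; `gfc_p`/`gfc_n` are the partial-sum regressors NARDL uses):

```python
# Decompose a series' changes into positive and negative partial sums,
# mirroring what the genr lines above do for GFC (toy data)
gfc = [10, 11, 10, 12, 11]

dgfc = [gfc[t] - gfc[t - 1] for t in range(1, len(gfc))]
dgfc_p = [max(d, 0) for d in dgfc]  # keep only positive changes
dgfc_n = [min(d, 0) for d in dgfc]  # keep only negative changes

# Running (partial) sums: these become the gfc_p / gfc_n regressors in NARDL
gfc_p, gfc_n, cp, cn = [], [], 0, 0
for p, q in zip(dgfc_p, dgfc_n):
    cp += p
    cn += q
    gfc_p.append(cp)
    gfc_n.append(cn)

print(gfc_p)  # [1, 1, 3, 3]
print(gfc_n)  # [0, -1, -1, -2]
```

By construction gfc_p is non-decreasing and gfc_n non-increasing, and their sum recovers the cumulative change in the original series, which is what lets the NARDL model attach separate coefficients to increases and decreases.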

Enjoy Nonlinear ARDL using Eviews or NARDL using Eviews.

Video Tutorials in Econometrics

Subscribe to our YouTube Channel and watch out other short videos in Econometrics to learn more. Or request private and instructor led courses in Applied Econometrics or Advanced Econometric Modeling from AnEc Center for Econometrics Research

Partial Least Squares Regression using SPSS

Watch a small video on how to install PLS plugin in SPSS, Anaconda Python and its libraries to run PLS in SPSS.

Enroll for online courses here.

Logistic Regression Models using Stata

A simple video tutorial from our online course in Applied Econometrics Research and Writing A Thesis in Quick Time.

Request Data Analysis here.

Tests for Autocorrelated Errors

In an ordinary least squares regression model, we specify the equation as
y = b0 + b1 x1 + b2 x2 + b3 x3 + b4 x4 + ut
and we can test the assumption of no autocorrelation, i.e. test whether the disturbances are autocorrelated.
To test the autocorrelation, we can follow the steps below:
(i) Estimate the regression model above using ordinary least square approach/OLS:
sysuse auto, clear
gen t=_n
tsset t
reg price rep78 trunk length
. reg price rep78 trunk length
Source |       SS           df       MS      Number of obs   =        69
-------------+----------------------------------   F(3, 65)        =      6.42
Model |   131790806         3  43930268.8   Prob > F        =    0.0007
Residual |   445006152        65   6846248.5   R-squared       =    0.2285
-------------+----------------------------------   Adj R-squared   =    0.1929
Total |   576796959        68  8482308.22   Root MSE        =    2616.5
------------------------------------------------------------------------------
price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
rep78 |   578.7949   348.6211     1.66   0.102    -117.4495    1275.039
trunk |  -31.78264   108.8869    -0.29   0.771    -249.2447    185.6794
length |   70.17701   22.01136     3.19   0.002     26.21729    114.1367
_cons |  -8596.181   3840.351    -2.24   0.029    -16265.89   -926.4697
------------------------------------------------------------------------------

(ii)  Now calculate the residuals from the above regression:
predict errors, res
(iii)  Run another regression by inserting the lagged residuals (the lag of the errors predicted above) into a regression with the residuals as the dependent variable, along with the original regressors. The Stata code is: reg errors rep78 trunk length l.errors. We can consider this an auxiliary regression. The results from this auxiliary regression are given below.
. reg errors rep78 trunk length l.errors
Source |       SS           df       MS      Number of obs   =        63
-------------+----------------------------------   F(4, 58)        =      4.66
Model |   104717963         4  26179490.9   Prob > F        =    0.0025
Residual |   325759819        58   5616548.6   R-squared       =    0.2433
-------------+----------------------------------   Adj R-squared   =    0.1911
Total |   430477782        62  6943190.04   Root MSE        =    2369.9
------------------------------------------------------------------------------
errors |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
rep78 |  -47.23696   332.1243    -0.14   0.887    -712.0559     617.582
trunk |   62.68025   103.8779     0.60   0.549    -145.2539    270.6144
length |   9.373147   20.97416     0.45   0.657     -32.6112    51.35749
|
errors |
L1. |   .5273932   .1225638     4.30   0.000      .282055    .7727313
|
_cons |  -2375.671   3741.339    -0.63   0.528    -9864.774    5113.432
------------------------------------------------------------------------------

(iv) Using the estimated results from the auxiliary regression above, note the R-squared value and multiply it by the number of included observations:
scalar N=e(N)
scalar R2=e(r2)
scalar NR2= N*R2
scalar list N  R2  NR2
N =         63
R2 =  .24325986
NR2 =  15.325371

(v)  The null hypothesis of this Breusch-Godfrey LM test is that there is no autocorrelation. We use the chi-square distribution to find the tabulated value and check whether the null of no autocorrelation should be rejected. The test statistic NR2 computed above is asymptotically chi-square distributed with s degrees of freedom, where s is the number of lags of the residuals included in the auxiliary regression; we included 1 lag of the residuals, so the degrees of freedom here is 1. We can use Stata's distribution functions to find the tabulated value at the 5% level of significance with the following code:
scalar chi151=invchi2tail(1, .05)
scalar list chi151
chi151 =  3.8414598
From the tutorial above we got NR2 = 15.325371 > 3.84 = chi-square(1, 5%). Since the calculated chi-square value is greater than the tabulated value, we reject the null hypothesis of no autocorrelation in the disturbances.
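The tabulated value can be double-checked without Stata: for 1 degree of freedom, the 5% chi-square critical value is the square of the standard normal 97.5% quantile, which Python's standard library can supply:

```python
from statistics import NormalDist

# 5% critical value of chi-square with 1 df equals z_{0.975} squared
z = NormalDist().inv_cdf(0.975)
chi2_crit = z * z
print(round(chi2_crit, 4))  # 3.8415

nr2 = 15.325371  # the LM statistic computed above
print(nr2 > chi2_crit)  # True -> reject H0 of no autocorrelation
```

This is the same 3.8414598 value that invchi2tail(1, .05) returns in Stata, up to rounding.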

ARDL and Unit Root Testing using Eviews

ARDL Cointegration using Eviews 9

To estimate ARDL using Eviews 9 on time series data, first open the data file (workfile). Click on your DV, hold the Ctrl key, and left-click each of your IVs one by one. Once they are selected, right-click any of the selected variables and open them as an Equation. When the Methods window appears, go to the method selection under Estimation Settings near the bottom, select ARDL from the list and click OK. Now you can select the lags of the DV and IVs and any other options for the method. Click OK to get your estimates.

Augmented Dickey Fuller Unit Root Test using Eviews

We can test a time series variable for a unit root using the Augmented Dickey-Fuller (ADF) approach in Eviews by following the steps below. First open the Eviews workfile (or the Excel data) in Eviews, then right-click the variable we want to test and click Open. The series opens in a spreadsheet window. Click View in the upper-left corner of this window, then click Unit Root Test in the menu that drops down. This opens the dialogue box shown in the inserted screenshot, which has four main sections. The main section selects the test type: Eviews offers six unit root test methods, and here we select Augmented Dickey-Fuller. Then we choose whether to test the level, first difference or second difference, and whether to include an intercept, both trend and intercept, or neither. On the right side of the same window, we can let Eviews select the lag length automatically or enter the maximum lags manually. Once everything is set according to our testing strategy, click OK to get the test results.

Phillips Perron Unit Root Test using Eviews

We can test a time series variable for a unit root using the Phillips-Perron (PP) approach in Eviews by following the steps below. First open the Eviews workfile (or the Excel data) in Eviews, then right-click the variable we want to test and click Open. The series opens in a spreadsheet window. Click View in the upper-left corner of this window, then click Unit Root Test in the menu that drops down. This opens the dialogue box shown in the inserted screenshot, which has four main sections. The main section selects the test type: Eviews offers six unit root test methods, and here we select Phillips-Perron. Then we choose whether to test the level, first difference or second difference, and whether to include an intercept, both trend and intercept, or neither. On the right side of the same window, we can let Eviews select the lag length automatically or enter the maximum lags manually. Once everything is set according to our testing strategy, click OK to get the test results.

KPSS Unit Root Test using Eviews

We can test a time series variable for a unit root using the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) approach in Eviews by following the steps below. First open the Eviews workfile (or the Excel data) in Eviews, then right-click the variable we want to test and click Open. The series opens in a spreadsheet window. Click View in the upper-left corner of this window, then click Unit Root Test in the menu that drops down. This opens the dialogue box shown in the inserted screenshot, which has four main sections. The main section selects the test type: Eviews offers six unit root test methods, and here we select KPSS. Then we choose whether to test the level, first difference or second difference, and whether to include an intercept, both trend and intercept, or neither. On the right side of the same window, we can let Eviews select the lag length automatically or enter the maximum lags manually. Once everything is set according to our testing strategy, click OK to get the test results.

Ng-Perron Unit Root Test using Eviews

We can test a time series variable for a unit root using the Ng-Perron approach in Eviews by following the steps below. First open the Eviews workfile (or the Excel data) in Eviews, then right-click the variable we want to test and click Open. The series opens in a spreadsheet window. Click View in the upper-left corner of this window, then click Unit Root Test in the menu that drops down. This opens the dialogue box shown in the inserted screenshot, which has four main sections. The main section selects the test type: Eviews offers six unit root test methods, and here we select Ng-Perron. Then we choose whether to test the level, first difference or second difference, and whether to include an intercept, both trend and intercept, or neither. On the right side of the same window, we can let Eviews select the lag length automatically or enter the maximum lags manually. Once everything is set according to our testing strategy, click OK to get the test results.

Cointegration, Unit Root and ARDL

Assume we have three variables: X1, X2 and X3. In all of the following three cases, we test all of the X variables for unit roots using at least two or three different tests. I personally recommend using ADF and KPSS, which test opposite null hypotheses: the ADF null is that the series has a unit root, while the KPSS null is that the series is stationary. Case 1: if all variables are I(0), we can use a VAR, since the precondition for Johansen-Juselius cointegration is not satisfied. Case 2: if two variables are I(1) and one is I(0), or two are I(0) and one is I(1), then the ARDL bounds approach of Pesaran et al. (2001) is feasible. Case 3: if at least one variable is I(2) and the others are I(0), I(1) or mixed, Toda-Yamamoto causality can be applied after estimating a VAR. Note also that Toda-Yamamoto is a causality test, not a test of short-run or long-run relationships; Granger-type causality, whether via Toda-Yamamoto or the Granger test itself, does not depend on a VECM or on cointegration.

In a nutshell: if your variables are all I(0), use a VAR. If all your variables are I(1), use Johansen-Juselius cointegration and Granger causality. If your variables are a mix of I(0) and I(1) but none is I(2), use ARDL (and you can also run Granger causality after a VAR). If the orders are mixed across I(0), I(1) and I(2), use the Toda-Yamamoto causality test.
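The decision rules above can be summarized in a small helper (plain Python; `suggest_method` is a hypothetical name, and the mapping simply encodes the cases described in this section):

```python
def suggest_method(orders):
    """Map the variables' orders of integration to the approach described above.

    orders: a list with one entry per variable, each 0, 1 or 2,
    meaning the variable is I(0), I(1) or I(2).
    """
    if any(d == 2 for d in orders):
        return "Toda-Yamamoto causality (after estimating a VAR)"
    if all(d == 0 for d in orders):
        return "VAR in levels"
    if all(d == 1 for d in orders):
        return "Johansen-Juselius cointegration / Granger causality"
    return "ARDL bounds test (mixed I(0)/I(1), none I(2))"

print(suggest_method([0, 1, 1]))  # ARDL bounds test (mixed I(0)/I(1), none I(2))
```

The function checks the I(2) case first, since a single I(2) variable rules out both the bounds test and standard Johansen analysis regardless of the other variables.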

Step By Step Instructions for running ARDL in Eviews.

The steps to conduct ARDL cointegration test in Eviews are:

  1. Open your time series data in Eviews.
  2. Run ADF (dfuller) and KPSS tests on your variables to check that no variable is I(2).
  3. Single-click the dependent variable (DV).
  4. Hold the Ctrl key and click each independent variable (IV) one by one.
  5. Once the DV and IVs are selected, right-click on them.
  6. A context menu opens; click Open as Equation.
  7. Another selection window appears; select the maximum lags for the DV and IVs.
  8. Click OK to get the ARDL estimates.

The screenshot explains the required steps with simple, easy-to-follow instructions.

Cointegration, Unit Root and ARDL


We will share the complete, silent one-minute video tutorial in the next part of this tutorial.

The step-by-step instructions to run ARDL using Stata are:

  1. Open your data in Stata
  2. tsset your data with the time variable
  3. Run dfuller and kpss tests on your variables
  4. Confirm that no variable is I(2)
  5. Type findit ardl to locate the ardl package
  6. or install it directly with ssc install ardl
  7. Once installed, run the command as: ardl dv ivs, lags(#) ec
  8. Here # should be replaced with the number of lags.
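To make the regression behind these commands concrete, here is a minimal sketch of the data layout an ARDL(p, q) estimation works with, matching equation (1): each row pairs y_t with its own lags y_{t-1}..y_{t-p} and the regressor values x_t..x_{t-q}. The function name and the toy series are illustrative; in practice Stata's ardl command (or Eviews) builds these lags for you.

```python
# Build the (dependent value, regressor list) rows for an ARDL(p, q) model:
# y_t regressed on y_{t-1}..y_{t-p} and x_t..x_{t-q} (plus a constant).
def ardl_design(y, x, p=1, q=1):
    rows = []
    start = max(p, q)                                # first usable observation
    for t in range(start, len(y)):
        lagged_y = [y[t - i] for i in range(1, p + 1)]   # y_{t-1} .. y_{t-p}
        lagged_x = [x[t - j] for j in range(0, q + 1)]   # x_t .. x_{t-q}
        rows.append((y[t], lagged_y + lagged_x))
    return rows

# Toy example series (made up for illustration):
y = [1.0, 1.2, 1.5, 1.7, 2.0]
x = [0.5, 0.6, 0.8, 0.9, 1.1]
for target, regressors in ardl_design(y, x, p=1, q=1):
    print(target, regressors)
```

The first row produced here is y_2 = 1.2 paired with [y_1, x_2, x_1] = [1.0, 0.6, 0.5]; estimating the β and α coefficients over such rows by least squares is exactly what the ardl command reports.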

Time Series Econometrics using Stata

Time series data are frequently analyzed by faculty and professional researchers to deduce short- and long-run relationships, to test for trends and volatility, to detect nonlinearities in relationships, and to identify factors affecting the nature of those relationships, all of which bear on forecasting power. The course develops techniques, from basic to advanced, for dealing with different types of time series data, whether low frequency (yearly, quarterly, monthly, weekly) or high frequency (daily or hourly), to identify the exact model to apply and hence deduce the exact relationship for forecasting purposes. Furthermore, the course will help the audience develop advanced econometric modeling skills applicable in a range of areas, from economics to finance, from the social to the political sciences, from energy to environmental sciences, and in education and the health sciences.

Course Outline

Objectives
The course aims to achieve the following goals:

1.      Introduce you to the basic econometric methods for time series data
2.      Introduce you to advanced techniques to deal with these models and their application
3.      Introduce the use of Stata and R more rigorously for time series models
4.      Provide a detailed introduction to the latest theoretical developments and applications
5.      Provide extensions of the developed models to new areas of research

Outcomes
Completing the course will enable you to:

1.      Learn how to use a model suitable for a specific type of data
2.      Learn how to evaluate whether a model is suitable for a type of data
3.      Distinguish the modeling strategy between different types of data
4.      Know the features of Stata and R for time series modeling, estimation and forecasting
5.      Learn to write and report results for papers, theses and dissertations

Audience
The course is developed for the following groups:

1.      MS/PhD Students in Economics, Finance, Statistics and Other Social Sciences
2.      Academic Researchers of Universities and Colleges
3.      Researchers of R&D and Policy Research Organizations
4.      Research Officers of NPO and Social Organizations
5.      Consultants, Trainers And Policy Analysts
6.      Management and Executives of Marketing and Social Research Organizations

Registration
100 places in total are available for 5 groups of 20 participants. Groups will be formed on the basis of fields of study and research. Please complete the registration at https://elearning.aneconomist.com
Contents

·         Topic 1: Stationary Time-Series Models
·         Topic 2: Testing for Trends and Unit Roots
·         Topic 3: Modeling Volatility
·         Topic 4: Multiequation Time-Series Models
·         Topic 5: Cointegration and Error-Correction Models
·         Topic 6: Nonlinear time-series models

Fees
The course fee is £500. A 15% discount applies to each member of a group registration of up to 5 participants. Groups of more than 5 registrations will receive a 30% discount for each participant.

Supporting Material
The learning materials include EBooks, example MS Excel documents and related lecture materials, which will be provided together with datasets.

Certification
There is a test and an assignment after each week. Passing these will enable you to request a PASS certificate for the course, mentioning your topics, assignment topics, and grades in tests and assignments. A hard copy of the certificate will be mailed, at an additional £10, to any destination across the world. If you do not need your certificate in paper form, no charges are applicable. Soft/scanned copies will be provided free of cost via email and are downloadable from the course portal.

Financial Econometrics Using Matlab

A special four-week course has been developed and launched on Financial Econometrics using Matlab. The course is equally useful to beginners and advanced learners in Finance, Economics and Financial Econometrics who want to gain a sense of modeling and analysis. The main objective of the course is to introduce the theory and application of Financial Econometrics and to show how to use Matlab, a powerful mathematical tool, for analysis and simulation. Specifically, the course aims to develop strong academic knowledge of Financial Econometrics, its practical use, and hands-on use of Matlab.

The course timeline is:
·         Registration starts today
·         Registration ends: November 15, 2016
·         Course begins: November 21, 2016

Main contents are:

Exempted Topics for Advanced Learners in Statistics and Econometrics
·         Introduction to Probability
·         Introduction to Random Variables
·         Random Sequences

Basic Course Topics for All
·         Introduction to Computer Simulation of Random Variables
·         Foundations of Monte Carlo Simulations
·         Fundamentals of Quasi Monte Carlo (QMC) Simulations
·         Introduction to Random Processes
·         Solution of Stochastic Differential Equations
·         General Approach to the Valuation of Contingent Claims
·         Pricing Options using Monte Carlo Simulations
·         Term Structure of Interest Rates and Interest Rate Derivatives
·         Credit Risk and the Valuation of Corporate Securities
·         Valuation of Portfolios of Financial Guarantees
·         Risk Management and Value at Risk (VaR)
·         Value at Risk (VaR) and Principal Components Analysis (PCA)

All these topics will be discussed in the context of Matlab.

The course fee structure

For Individuals:
·         Default: 500GBP for three months,

For Groups/Institutions:
·         Default: 1000GBP for Three Months,

Matlab M-files, EBooks, manuals, example datasets and weekly lecture slides will be provided before the commencement of each lecture.
The course will be offered to those who register at https://elearning.aneconomist.com. Registration should be confirmed within a week; otherwise the system will delete all accounts without a confirmed payment and course enrolment.