Tests for Autocorrelated Errors
In the ordinary least squares (OLS) regression model, we specify the equation as
yt = b0 + b1 x1t + b2 x2t + b3 x3t + b4 x4t + ut
and we can test whether the disturbances ut are autocorrelated.
To test for autocorrelation, we can follow the steps below:
(i) Estimate the regression model above using the ordinary least squares (OLS) approach:
sysuse auto, clear
gen t=_n
tsset t
reg price rep78 trunk length
. reg price rep78 trunk length

      Source |       SS           df       MS      Number of obs   =        69
-------------+----------------------------------   F(3, 65)        =      6.42
       Model |   131790806         3  43930268.8   Prob > F        =    0.0007
    Residual |   445006152        65   6846248.5   R-squared       =    0.2285
-------------+----------------------------------   Adj R-squared   =    0.1929
       Total |   576796959        68  8482308.22   Root MSE        =    2616.5

------------------------------------------------------------------------------
       price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       rep78 |   578.7949   348.6211     1.66   0.102    -117.4495    1275.039
       trunk |  -31.78264   108.8869    -0.29   0.771    -249.2447    185.6794
      length |   70.17701   22.01136     3.19   0.002     26.21729    114.1367
       _cons |  -8596.181   3840.351    -2.24   0.029    -16265.89   -926.4697
------------------------------------------------------------------------------

(ii) Now calculate the residuals from the above regression:
predict errors, res
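Steps (i) and (ii) can also be sketched outside Stata. The following is a minimal NumPy version on simulated data (auto.dta is a Stata dataset, so the regressors here are illustrative stand-ins), with AR(1) disturbances deliberately built in so that the later test has something to detect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated stand-ins for the regressors (constant plus three covariates)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])

# Disturbances with AR(1) structure: u_t = 0.6 u_{t-1} + eps_t
u = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + eps[t]
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + u

# Step (i): OLS estimates via least squares
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step (ii): residuals, the analogue of `predict errors, res`
errors = y - X @ beta
```

By the OLS normal equations the residuals are orthogonal to every column of X, which is a quick sanity check that the fit is correct.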
(iii) Run another regression with the residuals (errors) as the dependent variable, inserting the lagged residuals among the regressors. In Stata: reg errors rep78 trunk length l.errors. This is known as the auxiliary regression, and its results are given below.
. reg errors rep78 trunk length l.errors

      Source |       SS           df       MS      Number of obs   =        63
-------------+----------------------------------   F(4, 58)        =      4.66
       Model |   104717963         4  26179490.9   Prob > F        =    0.0025
    Residual |   325759819        58   5616548.6   R-squared       =    0.2433
-------------+----------------------------------   Adj R-squared   =    0.1911
       Total |   430477782        62  6943190.04   Root MSE        =    2369.9

------------------------------------------------------------------------------
      errors |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       rep78 |  -47.23696   332.1243    -0.14   0.887    -712.0559     617.582
       trunk |   62.68025   103.8779     0.60   0.549    -145.2539    270.6144
      length |   9.373147   20.97416     0.45   0.657     -32.6112    51.35749
             |
      errors |
         L1. |   .5273932   .1225638     4.30   0.000      .282055    .7727313
             |
       _cons |  -2375.671   3741.339    -0.63   0.528    -9864.774    5113.432
------------------------------------------------------------------------------
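The auxiliary regression of step (iii) can be sketched in NumPy as well. This stands alone on the same simulated setup as before (rho = 0.6, illustrative regressors, not auto.dta); note that, just as l.errors does in Stata, lagging drops the first observation:

```python
import numpy as np

# Simulated data with AR(1) disturbances (illustrative, not auto.dta)
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
u = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + eps[t]
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + u

# Original OLS fit and its residuals
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
errors = y - X @ beta

# Step (iii): regress e_t on the regressors plus e_{t-1};
# the lag costs one observation, as with l.errors in Stata
e, e_lag = errors[1:], errors[:-1]
Z = np.column_stack([X[1:], e_lag])
gamma, *_ = np.linalg.lstsq(Z, e, rcond=None)
rho_hat = gamma[-1]   # coefficient on the lagged residual
```

With autocorrelated disturbances, the coefficient on the lagged residual should come out clearly positive, mirroring the significant L1. coefficient in the Stata output above.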

(iv) Using the estimated results from the auxiliary regression above, note the R-squared value and multiply it by the number of included observations:
scalar N=e(N)
scalar R2=e(r2)
scalar NR2= N*R2
scalar list N R2 NR2
N = 63
R2 = .24325986
NR2 = 15.325371
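Step (iv) is a one-line computation once the auxiliary regression is in hand. A self-contained NumPy sketch on the same simulated setup (rho = 0.6, illustrative data, not auto.dta):

```python
import numpy as np

# Simulated data with AR(1) disturbances (illustrative, not auto.dta)
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
u = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + eps[t]
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + u
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
errors = y - X @ beta

# Auxiliary regression with one lag of the residuals
e = errors[1:]
Z = np.column_stack([X[1:], errors[:-1]])
gamma, *_ = np.linalg.lstsq(Z, e, rcond=None)

# Step (iv): N * R-squared of the auxiliary regression
ssr = np.sum((e - Z @ gamma) ** 2)   # residual sum of squares
sst = np.sum((e - e.mean()) ** 2)    # total sum of squares
N = len(e)                           # observations in the auxiliary fit
R2 = 1.0 - ssr / sst
NR2 = N * R2
```

With strongly autocorrelated simulated disturbances, NR2 lands far above the 3.84 critical value used in step (v).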
(v) The null hypothesis of the Breusch-Godfrey (BG) test is that there is no autocorrelation. We can use the standard Chi-square distribution to find the tabulated value against which to check whether the null hypothesis of no autocorrelation should be rejected. According to theory, the test statistic NR2 calculated above converges asymptotically to a Chi-square distribution with s degrees of freedom, where s is the number of lags of the residuals included in the auxiliary regression. We have included 1 lagged value of the residuals, so the degrees of freedom in this case is 1. We can use Stata's conventional distribution functions to determine the tabulated value at the 5% level of significance using the following code:
scalar chi151=invchi2tail(1, .05)
scalar list chi151
chi151 = 3.8414588
From the tutorial above on tests for autocorrelation, we got NR2 = 15.325371 > 3.84 = Chi-square(1, 5%). As the calculated value of the Chi-square statistic is greater than the tabulated value, we reject the null hypothesis of no autocorrelation in the disturbances.
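Step (v) can be cross-checked in plain Python without Stata. For 1 degree of freedom, a Chi-square variable is a squared standard normal, so its upper-tail probability can be written with math.erf; the function name chi2_1_tail below is our own:

```python
import math

def chi2_1_tail(x):
    # P(chi2_1 > x) = P(|Z| > sqrt(x)) = 1 - erf(sqrt(x / 2))
    return 1.0 - math.erf(math.sqrt(x / 2.0))

# 5% critical value for df = 1: the square of z_{0.975} = 1.95996...,
# i.e. about 3.8414588, matching invchi2tail(1, .05) in Stata
crit = 1.959963985 ** 2

nr2 = 15.325371               # the statistic from the tutorial above
reject = nr2 > crit           # True: reject H0 of no autocorrelation
p_value = chi2_1_tail(nr2)    # well below 0.05
```

Because the p-value of the observed statistic is tiny, the reject decision agrees with the comparison of NR2 against the tabulated 3.84.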