
A note before we begin: up next we will first double back and cover some commonly used Econometrics, together with the Economics & Finance that pave the way for the International CAPM; after that we will discuss the ICAPM, then Portfolio Management, and finally Private Equity Valuation!!

Hello there:

  Here we cover the commonly used Econometrics!!

(1) Testing the (linear) correlation

   Ho: the (linear) correlation is zero

   vs

   Ha: the (linear) correlation is not zero

  the two-tailed t-test

   T = r * sqrt(n-2) / sqrt(1 - r*r)

  under Ho, T follows a

  t-distribution with (n-2) degrees of freedom; reject Ho at significance level alpha when |T| exceeds the two-tailed critical value.
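
  As a quick illustration, here is a minimal Python sketch of this test; the data values are invented purely for the example.

```python
# Two-tailed t-test for a nonzero (linear) correlation; illustrative data only.
import numpy as np
from scipy import stats

x = np.array([1.2, 2.4, 3.1, 4.8, 5.5, 6.9, 7.3, 8.8])
y = np.array([1.0, 2.9, 2.8, 5.1, 5.0, 7.2, 6.9, 9.1])
n = len(x)

r = np.corrcoef(x, y)[0, 1]                   # sample correlation
T = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)    # the test statistic above

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)  # two-tailed critical value

print(f"r = {r:.4f}, T = {T:.4f}, critical value = {t_crit:.4f}")
if abs(T) > t_crit:
    print("Reject Ho: the correlation differs from zero.")
else:
    print("Do not reject Ho.")
```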

(2) Multiple linear regression

  Y = A0 + A1 * X1 + A2 * X2 + ..... + Ak * Xk + error term

 the "linear" in linear regression means that the model is linear in the parameters Ai, i = 1, 2, ...., k.

a. Testing Ho: Ai = 0 vs Ha: Ai is not zero

 The two-tailed t-test statistic is

  T = [Est(Ai) - Hyp(Ai)]/SE[Est(Ai)]

  Est(.) is the estimated value,

  Hyp(.) is the hypothesized value, here zero,

  SE(.) is the standard error.

  under Ho, this statistic follows a t-distribution

  with (n-k-1) degrees of freedom; reject Ho at significance level alpha when |T| exceeds the two-tailed critical value.

 For the other details, please consult the relevant textbooks!!
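
 Below is a minimal sketch of this coefficient t-test on simulated data; the seed, sample size, and true coefficients are invented for the example.

```python
# Coefficient t-tests in a multiple linear regression, computed by hand.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 60, 2
X = rng.normal(size=(n, k))
y = 1.0 + 0.8 * X[:, 0] + 0.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

Xd = np.column_stack([np.ones(n), X])          # design matrix with intercept
beta = np.linalg.solve(Xd.T @ Xd, Xd.T @ y)    # OLS estimates Est(Ai)
resid = y - Xd @ beta
mse = resid @ resid / (n - k - 1)
se = np.sqrt(mse * np.diag(np.linalg.inv(Xd.T @ Xd)))  # SE[Est(Ai)]

T = (beta - 0.0) / se   # [Est(Ai) - Hyp(Ai)] / SE[Est(Ai)], Hyp(Ai) = 0
p = 2 * (1 - stats.t.cdf(np.abs(T), df=n - k - 1))  # two-tailed p-values
for i, (b, t, pv) in enumerate(zip(beta, T, p)):
    print(f"A{i}: estimate = {b:.3f}, T = {t:.2f}, p-value = {pv:.4f}")
```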

b. ANOVA: SST = SSR + SSE

  Ho: A1 = A2 = A3 =....= Ak = 0

  vs

  Ha: at least one Aj is not zero, j=1,2,...,k

    F = MSR/MSE

      = [SSR/k] / [SSE/(n-k-1)]

   under Ho, it follows an F-distribution with degrees of freedom (k, n-k-1).

   Remember that the F-test is a one-sided test!!!

   That is, reject the null hypothesis when the computed F-value exceeds the critical value from the table!!
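
   A minimal sketch of this ANOVA F-test on simulated data follows; the data, seed, and coefficients are invented for the example.

```python
# Regression ANOVA F-test: Ho is that all slope coefficients are zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k = 60, 3
X = rng.normal(size=(n, k))
y = 2.0 + X @ np.array([0.5, -0.3, 0.0]) + rng.normal(scale=0.7, size=n)

Xd = np.column_stack([np.ones(n), X])
beta = np.linalg.solve(Xd.T @ Xd, Xd.T @ y)
resid = y - Xd @ beta

sst = np.sum((y - y.mean())**2)   # total sum of squares
sse = resid @ resid               # sum of squared errors
ssr = sst - sse                   # regression sum of squares

F = (ssr / k) / (sse / (n - k - 1))        # MSR / MSE
F_crit = stats.f.ppf(0.95, k, n - k - 1)   # one-sided 5% critical value
print(f"F = {F:.2f}, 5% critical value = {F_crit:.2f}")
print("Reject Ho" if F > F_crit else "Do not reject Ho")
```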

c. Explanatory power of multiple linear regression

   = R-square

   = SSR/SST

d. SEE = square-root[SSE/(n-k-1)] 

       = square-root[MSE]
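
   A tiny worked example of c. and d., using invented sums of squares:

```python
# R-square and SEE from the ANOVA decomposition; numbers are made up.
import math

n, k = 60, 3
sst, sse = 500.0, 120.0
ssr = sst - sse                      # SST = SSR + SSE

r_square = ssr / sst                 # explanatory power
see = math.sqrt(sse / (n - k - 1))   # standard error of estimate
print(f"R-square = {r_square:.3f}, SEE = {see:.3f}")
```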

e. Adjusted R-square with respect to R-square:

  1 - [Adj.R-sq.]

   = [(n-1)/(n-k-1)] * {1 - [R-sq.]}

  equivalently,

   {1 - [Adj.R-sq.]}/(n-1)

   = {1 - [R-sq.]}/(n-k-1)

  Remember that the Adjusted R-square does not, like the ordinary R-square, keep rising as more parameters are used; it starts to fall once the total number of estimated parameters passes a certain point!!

  It is also worth mentioning that adjusted R-square < R-square!!
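
  A short numerical check of the relation above, with invented numbers:

```python
# Adjusted R-square from R-square; n, k, and r_sq are made up.
n, k = 60, 3
r_sq = 0.76

adj_r_sq = 1 - (n - 1) / (n - k - 1) * (1 - r_sq)
print(f"R-square = {r_sq:.3f}, adjusted R-square = {adj_r_sq:.3f}")
assert adj_r_sq < r_sq   # adjusted R-square sits below R-square once k >= 1
```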

f. the p-value: for each regression coefficient, the smallest level of

   significance at which we can reject, in a two-sided test, the null

   hypothesis that the population value of the coefficient is zero.

   the decision rule:

   if the p-value < alpha (given), then we reject the null hypothesis!!

   if the p-value > alpha (given), then we do not reject the null hypothesis!!

   the lower the p-value, the stronger the evidence against the null hypothesis!!
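
   The decision rule is mechanical enough to state as a one-line check (alpha and the p-value here are invented):

```python
# The p-value decision rule; both numbers are illustrative.
alpha, p_value = 0.05, 0.012
if p_value < alpha:
    print("Reject the null hypothesis")   # smaller p => stronger evidence
else:
    print("Do not reject the null hypothesis")
```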

g. Dummy Variables

  The main reason for using dummy variables is that they take only the values zero or one;

  if a categorical variable has n levels, (n-1) dummy variables are needed to represent it (using all n would duplicate the intercept column and cause perfect multicollinearity)!!
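
  A minimal sketch of coding a categorical variable with n = 4 levels into 3 dummies; the 'quarter' variable and the choice of base level are purely illustrative.

```python
# n = 4 levels => (n - 1) = 3 dummy variables, with quarter 4 as the base.
import numpy as np

quarter = np.array([1, 2, 3, 4, 1, 2, 3, 4])
d1 = (quarter == 1).astype(int)
d2 = (quarter == 2).astype(int)
d3 = (quarter == 3).astype(int)
X = np.column_stack([np.ones(len(quarter)), d1, d2, d3])
print(X)
# a fourth dummy would make the dummy columns sum to the intercept column,
# i.e. perfect multicollinearity
```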

h. Testing Heteroskedasticity

   Conditional heteroskedasticity: heteroskedasticity in the error variance

   that is correlated with (conditional on) the values of the independent variables

   in the regression.

 (1) The test is the Breusch-Pagan test; its null hypothesis is no conditional heteroskedasticity.

 The procedure is to take the residuals from the original regression, square them, and use them as the dependent variable of a new regression whose independent variables are the same independent variables as in the original regression!!

 (2) The test statistic is n*R-square (where this R-square comes from regressing the squared residuals on the original independent variables, and n is the sample size). Under the null hypothesis (no conditional heteroskedasticity), n*R-square follows the chi-square distribution with degrees of freedom equal to k, the number of independent variables in the regression!!

 (3) The correction for conditional heteroskedasticity is to re-weight the observations, replacing the traditional OLS estimator with GLS; alternatively, one can keep OLS but use robust standard errors.

 Either way, the denominator of the test statistic is revised, making the hypothesis tests accurate!!

 (Please consult the textbooks for this part!!)
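
 As a rough illustration of (1) and (2), here is a sketch of the Breusch-Pagan procedure on simulated heteroskedastic data; the data, seed, and coefficients are invented for the example.

```python
# Breusch-Pagan test: regress squared residuals on the original regressors,
# then compare n*R-square against a chi-square(k) distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, k = 200, 2
X = rng.normal(size=(n, k))
# error variance grows with |X[:, 0]|: conditional heteroskedasticity
y = 1.0 + X @ np.array([0.5, -0.2]) + rng.normal(size=n) * (1 + np.abs(X[:, 0]))

Xd = np.column_stack([np.ones(n), X])
beta = np.linalg.solve(Xd.T @ Xd, Xd.T @ y)
resid = y - Xd @ beta

u2 = resid**2                                   # squared residuals
gamma = np.linalg.solve(Xd.T @ Xd, Xd.T @ u2)   # auxiliary regression
fitted = Xd @ gamma
r_sq = 1 - np.sum((u2 - fitted)**2) / np.sum((u2 - u2.mean())**2)

bp = n * r_sq                              # the n*R-square statistic
p_value = 1 - stats.chi2.cdf(bp, df=k)     # chi-square with k dof
print(f"BP = {bp:.2f}, p-value = {p_value:.4f}")
```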

i. Testing for serial correlation

 (1) Serial correlation arises when the residuals of the regression are highly linearly correlated with one another!! When this happens, the F-test often suffers: MSE is underestimated, so the F-test statistic is overstated, and we are therefore likely to reject the null hypothesis incorrectly (purely because the F-statistic is inflated!!). In general, when the residuals are highly positively correlated (positive serial correlation), OLS understates the true standard errors. The same thing happens in the t-tests: the t-test statistics are also overstated, so we are again prone to Type I errors, rejecting the null hypothesis too easily!!

 (2) The test is the Durbin-Watson test, whose test statistic is

    DW = Sum over t = 2, 3, ..., T of [Residual(t) - Residual(t-1)]^2 / Sum over t = 1, 2, ..., T of [Residual(t)]^2

    If the errors are homoskedastic and not serially correlated, the DW value is close to 2.

   In general, DW = 2*(1-r), where r is the sample correlation between adjacent residuals, (Residual(t), Residual(t-1)). Hence if r > 0 then DW < 2; but if r < 0 then DW > 2!!

   We know that:

   a. when the residuals are highly positively correlated, the DW value is far below 2 and close to 0;

   b. when the residuals are highly negatively correlated, the DW value is far above 2 and close to 4;

   c. when the residuals show little correlation, the DW value is near 2!!

   Accordingly there are two critical values 0 < d(L) < d(U): when DW < d(L), we reject the null hypothesis of no positive serial correlation; when DW > d(U), we do not reject it; and when d(L) < DW < d(U), the test is inconclusive!!
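
   A minimal sketch of the DW statistic on simulated AR(1) residuals follows; the seed and the autocorrelation coefficient 0.7 are invented for the example.

```python
# Durbin-Watson statistic on positively autocorrelated residuals.
import numpy as np

rng = np.random.default_rng(3)
T = 200
e = np.empty(T)
e[0] = rng.normal()
for t in range(1, T):
    e[t] = 0.7 * e[t - 1] + rng.normal()   # positive serial correlation

dw = np.sum(np.diff(e)**2) / np.sum(e**2)  # the DW formula above
r = np.corrcoef(e[1:], e[:-1])[0, 1]
print(f"DW = {dw:.3f}, 2*(1 - r) = {2 * (1 - r):.3f}")  # both well below 2
```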

j. Multicollinearity 

  Note that multicollinearity arises from high correlation among the independent variables in the regression!! It does not harm the model's predictive ability, which can still show a very high R-square, but it does undermine inference on the individual parameters: unlike positively correlated residuals, multicollinearity inflates the estimated standard errors of the coefficients, so the t-test statistics are understated, the power of the tests is greatly reduced, and Type II errors result! The remedy is to drop one or two of the independent variables that are highly linearly correlated with the others, or, as with serial correlation, to correct the standard errors so that they are more robust!!

 (Summary)

  In short, multicollinearity produces a high F-test statistic together with low t-test statistics: R-square is high, but the standard errors are overstated, so the t-statistics are understated!!!
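
  The symptom is easy to reproduce: below is a sketch in which two nearly identical regressors give a high R-square and significant F, yet weak slope t-statistics; the seed, noise scale, and coefficients are invented for the example.

```python
# Multicollinearity symptom: high R-square / F, tiny slope t-statistics.
import numpy as np

rng = np.random.default_rng(4)
n, k = 100, 2
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # almost an exact copy of x1
y = 1.0 + x1 + x2 + rng.normal(size=n)

Xd = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.solve(Xd.T @ Xd, Xd.T @ y)
resid = y - Xd @ beta
mse = resid @ resid / (n - k - 1)
se = np.sqrt(mse * np.diag(np.linalg.inv(Xd.T @ Xd)))  # inflated for slopes

sst = np.sum((y - y.mean())**2)
r_sq = 1 - resid @ resid / sst
F = (r_sq / k) / ((1 - r_sq) / (n - k - 1))
print(f"R-square = {r_sq:.3f}, F = {F:.1f}")   # large and significant
print("t-stats:", beta / se)                    # slope t-stats are tiny
```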

 
