Coefficient of Determination Derivation

Published: 2023/12/20

Once the regression parameters have been obtained from a linear regression, the goodness of fit of the regression function can be evaluated by computing the coefficient of determination $R^2$, defined as:

$$R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST}$$

where, for $m$ samples,

$$SSR = \sum_{i=1}^m (\hat y_i - \bar y)^2, \qquad SSE = \sum_{i=1}^m (y_i - \hat y_i)^2, \qquad SST = \sum_{i=1}^m (y_i - \bar y)^2.$$

The closer $R^2$ is to 1, the better the fit. The two expressions for $R^2$ above are equivalent to the decomposition $SST = SSR + SSE$, i.e.:

$$\sum_{i=1}^m (y_i - \bar y)^2 = \sum_{i=1}^m (\hat y_i - \bar y)^2 + \sum_{i=1}^m (y_i - \hat y_i)^2$$
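As a quick numerical check of these definitions, here is a minimal NumPy sketch; the toy data and variable names are illustrative assumptions, not taken from the article:

```python
import numpy as np

# Toy data (assumed for illustration): y depends linearly on x plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

# Ordinary least-squares fit of y = theta1 * x + theta0.
theta1, theta0 = np.polyfit(x, y, 1)
y_hat = theta0 + theta1 * x
y_bar = y.mean()

SST = np.sum((y - y_bar) ** 2)      # total sum of squares
SSR = np.sum((y_hat - y_bar) ** 2)  # regression sum of squares
SSE = np.sum((y - y_hat) ** 2)      # residual sum of squares

R2 = SSR / SST
print(R2)                             # close to 1 for this well-fit toy data
print(np.isclose(R2, 1 - SSE / SST))  # the two forms of R^2 agree
print(np.isclose(SST, SSR + SSE))     # SST = SSR + SSE
```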

To understand $R^2$, it helps to first recall the general form of linear regression:

$$\begin{cases} \hat y_i = f(x_i) = \theta_0 + \sum\limits_{j=1}^n \theta_j x_i^j \\ y_i = \hat y_i + \epsilon_i \end{cases}$$

Each observation $y_i$ consists of the fitted value $\hat y_i$ plus a residual $\epsilon_i$, and $\hat y_i$ varies with $x_i$ (here $x_i^j$ denotes the $j$-th feature of sample $i$). Letting $x_i^0 = 1$, the model $\hat y_i = \theta_0 + \sum\limits_{j=1}^n \theta_j x_i^j$ can be written compactly as $\hat y_i = \theta^T x_i$. In vector and matrix form, for $m$ samples:

$$\begin{cases} \begin{bmatrix} 1 & x_1^1 & x_1^2 & \dots & x_1^n \\ 1 & x_2^1 & x_2^2 & \dots & x_2^n \\ \vdots & & & & \vdots \\ 1 & x_m^1 & x_m^2 & \dots & x_m^n \end{bmatrix} \begin{bmatrix} \theta_0 \\ \theta_1 \\ \vdots \\ \theta_n \end{bmatrix} = \begin{bmatrix} \hat y_1 \\ \hat y_2 \\ \vdots \\ \hat y_m \end{bmatrix} \\ \\ \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix} = \begin{bmatrix} \hat y_1 \\ \hat y_2 \\ \vdots \\ \hat y_m \end{bmatrix} + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_m \end{bmatrix} \end{cases}$$

that is, $\hat Y = X\theta$ and $Y = \hat Y + \epsilon$.
θ≠0\theta \neq \mathbf 0θ̸=0时,Y^\hat YY^XXX的一个线性组合,即Y^\hat YY^存在于由XXX的列向量所展开的列空间中。对于一次幂的线形回归,XXX的列空间即是一个超平面,Y^\hat YY^是存在于面内的一个向量(即YYY在面上的投影)。为了使得残差最小化,ϵ\epsilonϵYYY垂直于面方向上的投影。在三维中的几何意义如下图(文中θ\thetaθ即图中β\betaβ,图中XiX_iXi表示列向量,图取自):

Because $\epsilon$ is perpendicular to the column space of $X$, it is perpendicular to every column of $X$, i.e. $X^T\epsilon = \mathbf 0$. Since $\epsilon = Y - X\theta$, we get:

$$\begin{aligned} X^T(Y - X\theta) &= \mathbf 0 \\ X^TY &= X^TX\theta \\ \theta &= (X^TX)^{-1}X^TY \\ \hat Y = X\theta &= X(X^TX)^{-1}X^TY \end{aligned}$$

From $\hat Y = X\theta = X(X^TX)^{-1}X^TY$ we obtain the projection matrix $P = X(X^TX)^{-1}X^T$ with $\hat Y = PY$: multiplying $Y$ by $P$ yields $\hat Y$, the component of $Y$ lying in the column space of $X$. Two properties of the projection matrix are worth knowing:
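The normal-equation solution and the projection matrix can be sketched numerically as follows (toy data assumed; in practice `np.linalg.lstsq` is preferred over forming the explicit inverse):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 3                       # m samples, n features (illustrative)
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, n))])  # first column of ones
Y = rng.normal(size=m)

XtX_inv = np.linalg.inv(X.T @ X)
theta = XtX_inv @ X.T @ Y          # theta = (X^T X)^{-1} X^T Y
P = X @ XtX_inv @ X.T              # projection onto the column space of X

Y_hat = X @ theta
print(np.allclose(Y_hat, P @ Y))          # Y_hat = P Y
print(np.allclose(X.T @ (Y - Y_hat), 0))  # residual is orthogonal to col(X)
```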

  • $P$ is symmetric:
    $$P^T = \big(X(X^TX)^{-1}X^T\big)^T = X\big((X^TX)^{-1}\big)^TX^T = X\big((X^TX)^T\big)^{-1}X^T = X(X^TX)^{-1}X^T = P$$
  • $P$ is idempotent, $P^2 = P$; using the symmetry above:
    $$P^2 = P^TP = X(X^TX)^{-1}X^T\,X(X^TX)^{-1}X^T = X(X^TX)^{-1}\overbrace{X^TX(X^TX)^{-1}}^{=\,I}X^T = X(X^TX)^{-1}X^T = P$$
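Both properties are easy to confirm numerically (toy design matrix assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.hstack([np.ones((10, 1)), rng.normal(size=(10, 2))])
P = X @ np.linalg.inv(X.T @ X) @ X.T

print(np.allclose(P, P.T))    # symmetric: P^T = P
print(np.allclose(P @ P, P))  # idempotent: P^2 = P
```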
We can now derive the identity $SST = SSR + SSE$. Below, $\mathbf 1 \in \mathbb R^m$ denotes the all-ones vector:

$$\begin{aligned} SST &= \sum_{i=1}^m (y_i - \bar y)^2 = \sum_{i=1}^m \big[(y_i - \hat y_i) + (\hat y_i - \bar y)\big]^2 \\ &= \sum_{i=1}^m (\hat y_i - \bar y)^2 + \sum_{i=1}^m (y_i - \hat y_i)^2 + 2\sum_{i=1}^m (y_i - \hat y_i)(\hat y_i - \bar y) \\ &= \sum_{i=1}^m (\hat y_i - \bar y)^2 + \sum_{i=1}^m (y_i - \hat y_i)^2 + 2\epsilon^T(\hat Y - \bar y\,\mathbf 1) \\ &= \sum_{i=1}^m (\hat y_i - \bar y)^2 + \sum_{i=1}^m (y_i - \hat y_i)^2 + 2\epsilon^T\hat Y - 2\bar y\,\epsilon^T\mathbf 1 \end{aligned}$$

Because $\epsilon$ is perpendicular to the column space of $X$ and $\hat Y$ lies in that column space, $\epsilon^T\hat Y = 0$; and because $\mathbf 1$ is the column of ones ($x^0$) in $X$, so that $\mathbf 1$ also lies in the column space of $X$, $\epsilon^T\mathbf 1 = 0$. The two cross terms therefore vanish, and:

$$SST = \sum_{i=1}^m (\hat y_i - \bar y)^2 + \sum_{i=1}^m (y_i - \hat y_i)^2 = SSR + SSE$$
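The two vanishing cross terms, and hence the decomposition itself, can be checked numerically with a short sketch (toy data and names assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 30
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, 2))])  # intercept column of ones
Y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.3, size=m)

theta, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_hat = X @ theta
eps = Y - Y_hat

print(np.isclose(eps @ Y_hat, 0.0))       # eps ⟂ Y_hat (Y_hat lies in col(X))
print(np.isclose(eps @ np.ones(m), 0.0))  # eps ⟂ 1 (the intercept column)

SST = np.sum((Y - Y.mean()) ** 2)
SSR = np.sum((Y_hat - Y.mean()) ** 2)
SSE = np.sum(eps ** 2)
print(np.isclose(SST, SSR + SSE))         # SST = SSR + SSE
```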
