RNN gradient vanishing and exploding
Table of contents
- Principle of gradient vanishing and exploding
- Derivative basics
- RNN derivation
Principle of gradient vanishing and exploding
Derivative basics
$y = x^2$
$\mathrm{d}y$ — the differential of $y$
$\Large\frac{\mathrm{d}y}{\mathrm{d}x}$ — the derivative of $y$ with respect to $x$ (for a multivariable function, the partial derivative is written $\frac{\partial y}{\partial x}$)
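As a quick sanity check on the derivative above, a central finite difference should recover $\frac{\mathrm{d}y}{\mathrm{d}x} = 2x$ for $y = x^2$ (a minimal sketch; the helper name `numeric_derivative` is my own):

```python
def f(x):
    return x ** 2

def numeric_derivative(f, x, eps=1e-6):
    # Central difference approximates dy/dx.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

print(numeric_derivative(f, 3.0))  # ≈ 6.0, matching dy/dx = 2x at x = 3
```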
RNN derivation
Forward pass:
$a_t = w_x x_t + w_h h_{t-1} + b_t$
$h_t = \sigma(a_t)$
$\hat{y} = \mathrm{softmax}(w_y h_t + b_y)$
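The three forward-pass equations can be sketched in NumPy as a single recurrent step unrolled over time (an illustrative sketch, assuming tanh for $\sigma$ and small random weights; the function names are my own):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def rnn_step(x_t, h_prev, w_x, w_h, b, w_y, b_y):
    a_t = w_x @ x_t + w_h @ h_prev + b    # a_t = w_x x_t + w_h h_{t-1} + b_t
    h_t = np.tanh(a_t)                    # h_t = sigma(a_t), tanh as sigma
    y_hat = softmax(w_y @ h_t + b_y)      # y_hat = softmax(w_y h_t + b_y)
    return h_t, y_hat

rng = np.random.default_rng(0)
d_in, d_h, d_out = 3, 4, 2
h = np.zeros(d_h)
w_x = rng.normal(size=(d_h, d_in))
w_h = rng.normal(size=(d_h, d_h))
b = np.zeros(d_h)
w_y = rng.normal(size=(d_out, d_h))
b_y = np.zeros(d_out)
for x_t in rng.normal(size=(5, d_in)):   # unroll over 5 timesteps
    h, y_hat = rnn_step(x_t, h, w_x, w_h, b, w_y, b_y)
print(y_hat.sum())  # softmax output sums to 1
```

Note that the same weights $w_x$, $w_h$ are reused at every timestep; this weight sharing is what later turns the backward pass into a long product of identical factors.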
Define the loss:
Use log loss. (TODO in the original: why does the multi-class log loss take the form below, rather than $loss = \sum[-y\log(\hat{y})-(1-y)\log(1-\hat{y})]$? With one-hot labels and a softmax output that sums to 1, only the true class contributes, so the $(1-y)\log(1-\hat{y})$ term is unnecessary; the binary form is just the two-class special case, where $-(1-y)\log(1-\hat{y})$ is the $-y\log(\hat{y})$ term for the other class.)
$loss = \mathcal{L} = \displaystyle\sum_{i=1}^{n}-y_i\log(\hat{y}_i)$
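A minimal numeric check of the cross-entropy formula above (the helper name is my own; $y$ is one-hot, $\hat{y}$ a softmax output):

```python
import numpy as np

def cross_entropy(y, y_hat):
    # L = -sum_i y_i * log(y_hat_i)
    return -np.sum(y * np.log(y_hat))

y = np.array([0.0, 1.0, 0.0])       # true class = 1, one-hot
y_hat = np.array([0.1, 0.7, 0.2])   # predicted distribution
print(cross_entropy(y, y_hat))      # only the true class contributes: -log(0.7) ≈ 0.357
```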
Backward pass (chain rule):
$\Large\frac{\partial\mathcal{L}}{\partial w} = \frac{\partial\mathcal{L}}{\partial a_t}\frac{\partial a_t}{\partial w}$
Since $a_t$ depends on $h_{t-1}$, which in turn depends on $a_{t-1}$, expanding $\frac{\partial a_t}{\partial w_h}$ back through time yields a product of per-step factors $\sigma'(a_i)\,w_h$. When these factors are consistently smaller than 1 the gradient vanishes; when consistently larger than 1 it explodes.
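The vanishing/exploding mechanism can be demonstrated directly. A sketch under simplifying assumptions (scalar hidden state, sigmoid $\sigma$, constant pre-activation $a$; all names are my own): backpropagating through $T$ timesteps multiplies one factor $\sigma'(a)\,w_h$ per step, and since $\sigma'(a) \le 0.25$ for the sigmoid, $|w_h| < 4$ gives geometric decay while a large $|w_h|$ gives geometric growth.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def grad_through_time(w_h, T, a=0.0):
    # Accumulate prod_{t} sigma'(a_t) * w_h, one chain-rule factor per timestep.
    g = 1.0
    for _ in range(T - 1):
        s = sigmoid(a)
        g *= s * (1 - s) * w_h   # sigma'(a) = sigma(a) * (1 - sigma(a))
    return g

print(grad_through_time(w_h=1.0, T=50))  # ~0.25^49: vanishes
print(grad_through_time(w_h=8.0, T=50))  # ~2^49: explodes
```

This is why long-range dependencies are hard for a vanilla RNN to learn, and motivates fixes such as gradient clipping for explosion and gated architectures (LSTM/GRU) for vanishing.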