Chapter 4: Simple Neural Network


Fully Connected Neural Network

Problem Description

Build a fully connected neural network with numpy and with pytorch. The numpy implementation requires computing the gradients by hand, while pytorch provides automatic differentiation.

Let us first work through the backpropagation process by hand. The model, the initial weights and biases, and the training inputs and target outputs are as follows:
(Figure: a 2-2-2 network with weights $\omega_1,\ldots,\omega_8$ initialized to 1, -2, -1, 1, 2, -2, -2, -1, all biases set to 1, inputs $x_1 = 1$, $x_2 = -1$, and target outputs 0.01 and 0.99.)
First the forward pass: compute the input to each hidden neuron, pass it through the activation function (here the sigmoid) to obtain the input to the next layer, and repeat until the output layer produces the final outputs.
Start with the input to hidden neuron $h_1$:
$z_{h_1} = \omega_1 x_1 + \omega_2 x_2 + 1 = 1 \cdot 1 + (-2) \cdot (-1) + 1 = 4$
Applying the activation function gives the output of $h_1$:
$a_{h_1} = \sigma(z_{h_1}) = \frac{1}{1+e^{-z_{h_1}}} = \frac{1}{1+e^{-4}} = 0.98201379$
Similarly,
$z_{h_2} = \omega_3 x_1 + \omega_4 x_2 + 1 = -1 \cdot 1 + 1 \cdot (-1) + 1 = -1$
which gives $a_{h_2} = \sigma(z_{h_2}) = 0.26894142$.
These two outputs become the inputs of the next layer. Next, compute output neuron $o_1$:
$z_{o_1} = \omega_5 a_{h_1} + \omega_6 a_{h_2} + 1 = 2 \cdot 0.98201379 + (-2) \cdot 0.26894142 + 1 = 2.42614474$
$a_{o_1} = \sigma(z_{o_1}) = \frac{1}{1+e^{-z_{o_1}}} = \frac{1}{1+e^{-2.42614474}} = 0.91879937$
Similarly, the output of output neuron $o_2$ is
$z_{o_2} = \omega_7 a_{h_1} + \omega_8 a_{h_2} + 1 = -2 \cdot 0.98201379 + (-1) \cdot 0.26894142 + 1 = -1.23296900$
$a_{o_2} = \sigma(z_{o_2}) = \frac{1}{1+e^{-z_{o_2}}} = \frac{1}{1+e^{-1.23296900}} = 0.22566220$
With the initial parameters, the outputs are clearly far from the targets 0.01 and 0.99, so let us compute the total error.

The total error is computed with the mean squared error:
$E_{total} = \frac{1}{2} \sum (\hat{y} - y)^2$
where $y$ is the actual output of the output layer and $\hat{y}$ is the target output. For neuron $o_1$, for example, the error is
$E_{o_1} = \frac{1}{2}\left(\hat{y}_{o_1} - y_{o_1}\right)^2 = \frac{1}{2}(0.01 - 0.91879937)^2 = 0.41295815$
Similarly, $E_{o_2} = 0.29210614$.
The total error is then $E_{total} = E_{o_1} + E_{o_2} = 0.41295815 + 0.29210614 = 0.70506429$.
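As a quick sanity check, the forward pass and the total error above can be reproduced in a few lines of numpy (a standalone sketch; the variable names here are chosen for illustration):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# initial weights, inputs, bias and targets from the figure above
w1, w2, w3, w4 = 1, -2, -1, 1
w5, w6, w7, w8 = 2, -2, -2, -1
x1, x2, b = 1, -1, 1
y1, y2 = 0.01, 0.99  # target outputs

a_h1 = sigmoid(w1 * x1 + w2 * x2 + b)      # 0.98201379
a_h2 = sigmoid(w3 * x1 + w4 * x2 + b)      # 0.26894142
a_o1 = sigmoid(w5 * a_h1 + w6 * a_h2 + b)  # 0.91879937
a_o2 = sigmoid(w7 * a_h1 + w8 * a_h2 + b)  # 0.22566220

E_total = 0.5 * ((y1 - a_o1) ** 2 + (y2 - a_o2) ** 2)
print(E_total)  # about 0.70506429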

Now for the backward pass. The purpose of backpropagation is to compute the gradients efficiently so that the parameters can be updated to reduce the total error, i.e. to bring the actual outputs closer to the desired outputs. The whole network is updated together: we start from the output layer and work backwards, layer by layer, until we reach the input layer.
First, the output layer:
Consider the parameter $\omega_5$ and ask how much a change in $\omega_5$ affects the total error, i.e. compute $\frac{\partial E_{total}}{\partial \omega_5}$. By the chain rule, $\frac{\partial E_{total}}{\partial \omega_5} = \frac{\partial E_{total}}{\partial a_{o_1}} \frac{\partial a_{o_1}}{\partial z_{o_1}} \frac{\partial z_{o_1}}{\partial \omega_5}$.
To evaluate each factor of this product, first compute how $a_{o_1}$ affects the total error:
$$\begin{aligned} E_{total} &= \frac{1}{2}\left(\text{target}_{o_1} - a_{o_1}\right)^2 + \frac{1}{2}\left(\text{target}_{o_2} - a_{o_2}\right)^2 \\ \frac{\partial E_{total}}{\partial a_{o_1}} &= 2 \cdot \frac{1}{2}\left(\text{target}_{o_1} - a_{o_1}\right) \cdot (-1) + 0 = -\left(\text{target}_{o_1} - a_{o_1}\right) = -(0.01 - 0.91879937) = 0.90879937 \end{aligned}$$

Next, compute $\frac{\partial a_{o_1}}{\partial z_{o_1}}$.
We know that the sigmoid satisfies $\sigma'(z) = \sigma(z)(1 - \sigma(z))$.
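For completeness, a short derivation of this identity (using the chain rule and $\frac{d}{dz} e^{-z} = -e^{-z}$):
$$\sigma'(z) = \frac{d}{dz}\left(\frac{1}{1+e^{-z}}\right) = \frac{e^{-z}}{(1+e^{-z})^2} = \frac{1}{1+e^{-z}} \cdot \frac{e^{-z}}{1+e^{-z}} = \sigma(z)\bigl(1 - \sigma(z)\bigr)$$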
Therefore $\frac{\partial a_{o_1}}{\partial z_{o_1}} = a_{o_1}\left(1 - a_{o_1}\right) = 0.91879937(1 - 0.91879937) = 0.07460709$.
Finally, $\frac{\partial z_{o_1}}{\partial \omega_5}$:
$z_{o_1} = \omega_5 a_{h_1} + \omega_6 a_{h_2} + b$
$\frac{\partial z_{o_1}}{\partial \omega_5} = a_{h_1} = 0.98201379$
Putting everything together:
$\frac{\partial E_{total}}{\partial \omega_5} = \frac{\partial E_{total}}{\partial a_{o_1}} \frac{\partial a_{o_1}}{\partial z_{o_1}} \frac{\partial z_{o_1}}{\partial \omega_5} = 0.90879937 \cdot 0.07460709 \cdot 0.98201379 = 0.06658336$
It is common to define $\delta_{o_1} = \frac{\partial E_{total}}{\partial a_{o_1}} \frac{\partial a_{o_1}}{\partial z_{o_1}} = \frac{\partial E_{total}}{\partial z_{o_1}}$,
so that $\frac{\partial E_{total}}{\partial \omega_5} = \delta_{o_1} a_{h_1}$.
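Numerically, with the values computed above (these two numbers are reused below when updating the hidden layer):
$$\delta_{o_1} = 0.90879937 \cdot 0.07460709 = 0.06780288, \qquad \delta_{o_2} = (a_{o_2} - \text{target}_{o_2}) \cdot a_{o_2}(1 - a_{o_2}) = -0.13355945$$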
To reduce the error, the weight is then updated with learning rate $\alpha = 0.5$:
$\omega_5 = \omega_5 - \alpha \cdot \frac{\partial E_{total}}{\partial \omega_5} = 2 - 0.5 \cdot 0.06658336 = 1.96670832$
The other output-layer weights follow in the same way: $\omega_6 = -2.00911750$, $\omega_7 = -1.93442139$, $\omega_8 = -0.98204017$.
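As a check on the "similarly" step, $\omega_6$ uses the other incoming activation:
$$\frac{\partial E_{total}}{\partial \omega_6} = \delta_{o_1} a_{h_2} = 0.06780288 \cdot 0.26894142 = 0.01823500, \qquad \omega_6 = -2 - 0.5 \cdot 0.01823500 = -2.00911750$$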
That covers all the output-layer parameters; next we move one layer back and update the hidden-layer parameters.
Start with $\omega_1$:
$\frac{\partial E_{total}}{\partial \omega_1} = \frac{\partial E_{total}}{\partial a_{h_1}} \frac{\partial a_{h_1}}{\partial z_{h_1}} \frac{\partial z_{h_1}}{\partial \omega_1}$
The hidden-layer parameters are updated with steps similar to those for the output layer, with one difference: each hidden neuron influences several neurons of the next layer. Here $a_{h_1}$ affects both $a_{o_1}$ and $a_{o_2}$, so computing $\frac{\partial E_{total}}{\partial a_{h_1}}$ must take both output neurons into account:
$\frac{\partial E_{total}}{\partial a_{h_1}} = \frac{\partial E_{o_1}}{\partial a_{h_1}} + \frac{\partial E_{o_2}}{\partial a_{h_1}}$. Starting with $\frac{\partial E_{o_1}}{\partial a_{h_1}}$, we have
$\frac{\partial E_{o_1}}{\partial a_{h_1}} = \frac{\partial E_{o_1}}{\partial z_{o_1}} \cdot \frac{\partial z_{o_1}}{\partial a_{h_1}}$
We already computed $\frac{\partial E_{o_1}}{\partial z_{o_1}} = 0.06780288$ above (it is just $\delta_{o_1} = \frac{\partial E_{total}}{\partial z_{o_1}}$, since $E_{o_1}$ is the only term of $E_{total}$ whose derivative with respect to $z_{o_1}$ is nonzero), and $\frac{\partial z_{o_1}}{\partial a_{h_1}} = \omega_5 = 2$, so $\frac{\partial E_{o_1}}{\partial a_{h_1}} = 0.06780288 \cdot 2 = 0.13560576$.
Similarly, $\frac{\partial E_{o_2}}{\partial a_{h_1}} = \delta_{o_2} \cdot \omega_7 = -0.13355945 \cdot (-2) = 0.26711890$.
Therefore:
$\frac{\partial E_{total}}{\partial a_{h_1}} = \frac{\partial E_{o_1}}{\partial a_{h_1}} + \frac{\partial E_{o_2}}{\partial a_{h_1}} = 0.13560576 + 0.26711890 = 0.40272466$
With $\frac{\partial E_{total}}{\partial a_{h_1}}$ known, we still need $\frac{\partial a_{h_1}}{\partial z_{h_1}}$ and $\frac{\partial z_{h_1}}{\partial \omega_1}$:
$a_{h_1} = \sigma(z_{h_1}) = \frac{1}{1+e^{-z_{h_1}}} = \frac{1}{1+e^{-4}} = 0.98201379$
$\frac{\partial a_{h_1}}{\partial z_{h_1}} = a_{h_1}(1 - a_{h_1}) = 0.98201379(1 - 0.98201379) = 0.01766271$
$z_{h_1} = \omega_1 x_1 + \omega_2 x_2 + b$
$\frac{\partial z_{h_1}}{\partial \omega_1} = x_1 = 1$
Finally, the full expression can be evaluated:
$\frac{\partial E_{total}}{\partial \omega_1} = \frac{\partial E_{total}}{\partial a_{h_1}} \frac{\partial a_{h_1}}{\partial z_{h_1}} \frac{\partial z_{h_1}}{\partial \omega_1} = 0.40272466 \cdot 0.01766271 \cdot 1 = 0.00711321$
Now $\omega_1$ can be updated:
$\omega_1 = \omega_1 - \alpha \cdot \frac{\partial E_{total}}{\partial \omega_1} = 1 - 0.5 \cdot 0.00711321 = 0.99644340$
Similarly: $\omega_2 = -1.99644340$, $\omega_3 = -0.99979884$, $\omega_4 = 0.99979884$.
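The same pattern gives $\omega_2$: the only factor that changes relative to $\omega_1$ is $\frac{\partial z_{h_1}}{\partial \omega_2} = x_2 = -1$, so
$$\frac{\partial E_{total}}{\partial \omega_2} = 0.40272466 \cdot 0.01766271 \cdot (-1) = -0.00711321, \qquad \omega_2 = -2 - 0.5 \cdot (-0.00711321) = -1.99644340$$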
After 10000 such weight-update rounds, the error drops to essentially 0.000, and the outputs are 0.011851540581436764 and 0.9878060737917571, which is very close to the desired outputs 0.01 and 0.99.

Practicing the above process with numpy:

import numpy as np

class Network():
    def __init__(self, **kwargs):
        self.w1, self.w2, self.w3, self.w4 = kwargs['w1'], kwargs['w2'], kwargs['w3'], kwargs['w4']
        self.w5, self.w6, self.w7, self.w8 = kwargs['w5'], kwargs['w6'], kwargs['w7'], kwargs['w8']
        self.d_w1, self.d_w2, self.d_w3, self.d_w4 = 0.0, 0.0, 0.0, 0.0
        self.d_w5, self.d_w6, self.d_w7, self.d_w8 = 0.0, 0.0, 0.0, 0.0
        self.x1 = kwargs['x1']
        self.x2 = kwargs['x2']
        self.y1 = kwargs['y1']
        self.y2 = kwargs['y2']
        self.learning_rate = kwargs['learning_rate']

    def sigmoid(self, z):
        a = 1 / (1 + np.exp(-z))
        return a

    def forward_propagate(self):
        loss = 0.0
        b = 1
        in_h1 = self.w1 * self.x1 + self.w2 * self.x2 + b
        out_h1 = self.sigmoid(in_h1)
        in_h2 = self.w3 * self.x1 + self.w4 * self.x2 + b
        out_h2 = self.sigmoid(in_h2)

        in_o1 = self.w5 * out_h1 + self.w6 * out_h2 + b
        out_o1 = self.sigmoid(in_o1)
        in_o2 = self.w7 * out_h1 + self.w8 * out_h2 + b
        out_o2 = self.sigmoid(in_o2)

        loss += (self.y1 - out_o1) ** 2 + (self.y2 - out_o2) ** 2
        loss = loss / 2

        return out_o1, out_o2, out_h1, out_h2, loss

    def back_propagate(self, out_o1, out_o2, out_h1, out_h2):
        # output-layer deltas: dE/dz = (a - target) * a * (1 - a)
        delta_o1 = (out_o1 - self.y1) * out_o1 * (1 - out_o1)
        delta_o2 = (out_o2 - self.y2) * out_o2 * (1 - out_o2)

        # output-layer weight gradients: dE/dw = delta * incoming activation
        d_w5 = delta_o1 * out_h1
        d_w6 = delta_o1 * out_h2
        d_w7 = delta_o2 * out_h1
        d_w8 = delta_o2 * out_h2

        # hidden-layer deltas: propagate both output deltas back through w5..w8
        delta_h1 = (delta_o1 * self.w5 + delta_o2 * self.w7) * out_h1 * (1 - out_h1)
        delta_h2 = (delta_o1 * self.w6 + delta_o2 * self.w8) * out_h2 * (1 - out_h2)

        # hidden-layer weight gradients
        d_w1 = delta_h1 * self.x1
        d_w2 = delta_h1 * self.x2
        d_w3 = delta_h2 * self.x1
        d_w4 = delta_h2 * self.x2

        self.d_w1, self.d_w2, self.d_w3, self.d_w4 = d_w1, d_w2, d_w3, d_w4
        self.d_w5, self.d_w6, self.d_w7, self.d_w8 = d_w5, d_w6, d_w7, d_w8
        return

    def update_w(self):
        self.w1 = self.w1 - self.learning_rate * self.d_w1
        self.w2 = self.w2 - self.learning_rate * self.d_w2
        self.w3 = self.w3 - self.learning_rate * self.d_w3
        self.w4 = self.w4 - self.learning_rate * self.d_w4
        self.w5 = self.w5 - self.learning_rate * self.d_w5
        self.w6 = self.w6 - self.learning_rate * self.d_w6
        self.w7 = self.w7 - self.learning_rate * self.d_w7
        self.w8 = self.w8 - self.learning_rate * self.d_w8

if __name__ == "__main__":
    w_key = ['w1', 'w2', 'w3', 'w4', 'w5', 'w6', 'w7', 'w8']
    w_value = [1, -2, -1, 1, 2, -2, -2, -1]
    parameter = dict(zip(w_key, w_value))
    parameter['x1'] = 1
    parameter['x2'] = -1
    parameter['y1'] = 0.01
    parameter['y2'] = 0.99
    parameter['learning_rate'] = 0.5
    network = Network(**parameter)

    for i in range(10000):
        out_o1, out_o2, out_h1, out_h2, loss = network.forward_propagate()
        if i % 1000 == 0:
            print("Epoch {}: loss = {}".format(i, loss))
        network.back_propagate(out_o1, out_o2, out_h1, out_h2)
        network.update_w()

    print("更新后的权重")
    print(network.w1, network.w2, network.w3, network.w4, network.w5, network.w6, network.w7, network.w8)

Output:

Epoch 0: loss = 0.7159242750464174
Epoch 1000: loss = 0.0003399411282644947
Epoch 2000: loss = 0.00012184533000665064
Epoch 3000: loss = 6.271954032855594e-05
Epoch 4000: loss = 3.751394416870217e-05
Epoch 5000: loss = 2.438595788224937e-05
Epoch 6000: loss = 1.6716935251649648e-05
Epoch 7000: loss = 1.1889923562720554e-05
Epoch 8000: loss = 8.688471135735563e-06
Epoch 9000: loss = 6.481437220727472e-06
Updated weights:
0.9057590485430621 -1.9057590485430547 0.4873077189729459 -0.4873077189729459 -1.130913420789734 -3.752510764474653 2.7328131233332877 1.948002277914531

Practicing the above process with pytorch:

import torch
from torch import nn


class Network(nn.Module):
    def __init__(self, w_value):
        super().__init__()
        self.sigmoid = nn.Sigmoid()
        self.linear1 = nn.Linear(2, 2, bias=True)
        self.linear1.weight.data = torch.tensor(w_value[:4], dtype=torch.float32).view(2, 2)
        self.linear1.bias.data = torch.ones(2)
        self.linear2 = nn.Linear(2, 2, bias=True)
        self.linear2.weight.data = torch.tensor(w_value[4:], dtype=torch.float32).view(2, 2)
        self.linear2.bias.data = torch.ones(2)

    def forward(self, x):
        x = self.linear1(x)
        x = self.sigmoid(x)
        x = self.linear2(x)
        x = self.sigmoid(x)
        return x

w_value = [1, -2, -1, 1, 2, -2, -2, -1]
network = Network(w_value)

loss_compute = nn.MSELoss()
learning_rate = 0.5
optimizer = torch.optim.SGD(network.parameters(), lr=learning_rate)

x1, x2 = 1, -1
y1, y2 = 0.01, 0.99
inputs = torch.tensor([x1, x2], dtype=torch.float32)
targets = torch.tensor([y1, y2], dtype=torch.float32)

for i in range(10000):
    optimizer.zero_grad()
    outputs = network(inputs)
    loss = loss_compute(outputs, targets)
    if i % 1000 == 0:
        print("Epoch {}: loss = {}".format(i, loss.item()))
    loss.backward()
    optimizer.step()

# final weights and biases
print("Weights:")
print(network.linear1.weight)
print(network.linear2.weight)
print("Biases:")
print(network.linear1.bias)
print(network.linear2.bias)

Output:

Epoch 0: loss = 0.7050642967224121
Epoch 1000: loss = 0.0001787583896657452
Epoch 2000: loss = 5.7717210438568145e-05
Epoch 3000: loss = 2.7112449970445596e-05
Epoch 4000: loss = 1.487863755755825e-05
Epoch 5000: loss = 8.897048246581107e-06
Epoch 6000: loss = 5.617492206511088e-06
Epoch 7000: loss = 3.6816677493334282e-06
Epoch 8000: loss = 2.4793637294351356e-06
Epoch 9000: loss = 1.704187070572516e-06
Weights:
Parameter containing:
tensor([[ 0.9223, -1.9223],
        [-0.0543,  0.0543]], requires_grad=True)
Parameter containing:
tensor([[-0.3244, -3.2635],
        [ 0.5369,  0.4165]], requires_grad=True)
Biases:
Parameter containing:
tensor([0.9223, 1.9457], requires_grad=True)
Parameter containing:
tensor([-1.3752,  3.5922], requires_grad=True)
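
A quick way to confirm what the trained network has learned is to run the training inputs through it once more (a small sketch; the exact values depend on the run, but they should lie close to the targets):

with torch.no_grad():
    final_outputs = network(inputs)
print(final_outputs)  # expected to be close to the targets 0.01 and 0.99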

Function Fitting

Problem Description

Theory and experiments show that a two-layer ReLU network can approximate any function [1~5]. Define a function of your own and fit it with a ReLU-based neural network.

Requirements

  • Sample the function yourself to generate a training set and a test set; train the network on the training set and verify the quality of the fit on the test set.
  • A deep learning framework may be used to build the model.
from torch.utils.data import DataLoader
from torch.utils.data import TensorDataset
import torch.nn as nn
import numpy as np
import torch

# prepare the data
x1 = np.linspace(-2 * np.pi, 2 * np.pi, 400)
x2 = np.linspace(np.pi, -np.pi, 400)
y = np.sin(x1) + np.cos(3 * x2)
# arrange the data as a (sample, feature) matrix and a target column
X = np.vstack((x1, x2)).T
Y = y.reshape(400, -1)
# train in mini-batches
dataset = TensorDataset(torch.tensor(X, dtype=torch.float), torch.tensor(Y, dtype=torch.float))
dataloader = DataLoader(dataset, batch_size=100, shuffle=True)


# main network structure: a simple stack of linear layers with ReLU activations

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features=2, out_features=10), nn.ReLU(),
            nn.Linear(10, 100), nn.ReLU(),
            nn.Linear(100, 10), nn.ReLU(),
            nn.Linear(10, 1)
        )

    def forward(self, input: torch.FloatTensor):
        return self.net(input)


net = Net()

# define the optimizer and the loss function
optim = torch.optim.Adam(net.parameters(), lr=0.001)
Loss = nn.MSELoss()

# training loop: 1000 epochs in total
for epoch in range(1000):
    loss = None
    for batch_x, batch_y in dataloader:
        y_predict = net(batch_x)
        loss = Loss(y_predict, batch_y)
        optim.zero_grad()
        loss.backward()
        optim.step()
    # log every 100 epochs
    if (epoch + 1) % 100 == 0:
        print("step: {0} , loss: {1}".format(epoch + 1, loss.item()))

# predict with the trained model
predict = net(torch.tensor(X, dtype=torch.float))

# plot the predictions against the ground truth
import matplotlib.pyplot as plt

plt.plot(x1, y, label="fact")
plt.plot(x1, predict.detach().numpy(), label="predict")
plt.title("function")
plt.xlabel("x1")
plt.ylabel("sin(x1)+cos(3 * x2)")
plt.legend()
plt.show()

Output:

step: 100 , loss: 0.23763391375541687
step: 200 , loss: 0.06673044711351395
step: 300 , loss: 0.044088222086429596
step: 400 , loss: 0.013059427961707115
step: 500 , loss: 0.010913526639342308
step: 600 , loss: 0.003434327431023121
step: 700 , loss: 0.00702542532235384
step: 800 , loss: 0.001976138213649392
step: 900 , loss: 0.0032644111197441816
step: 1000 , loss: 0.003176396246999502

(Figure: the sampled function values ("fact") and the network's predictions ("predict") plotted against x1.)
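The requirements also call for a held-out test set, which the training script above does not create. A minimal sketch of how it could be added after training (the sampling scheme, the number of test points, and the variable names are illustrative assumptions, not part of the original code):

# sample fresh points from the same curve as a test set (illustrative: 100 points)
x1_test = np.linspace(-2 * np.pi, 2 * np.pi, 100)
x2_test = np.linspace(np.pi, -np.pi, 100)
y_test = np.sin(x1_test) + np.cos(3 * x2_test)
X_test = np.vstack((x1_test, x2_test)).T
Y_test = y_test.reshape(100, -1)

# evaluate the trained network on the test set
with torch.no_grad():
    test_pred = net(torch.tensor(X_test, dtype=torch.float))
    test_loss = Loss(test_pred, torch.tensor(Y_test, dtype=torch.float))
print("test MSE:", test_loss.item())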

