scratch lenet(7): Counting Trainable Parameters and Connections in C

1. Purpose

Following the network structure in the original LeNet-5 paper, LeCun-98.pdf, compute each layer's number of trainable parameters and number of connections so that they match the figures reported in the paper.

Many LeNet-5 implementations found online simply copy someone else's existing project, without the spirit of reproducing the network by hand from the paper. For a classic paper like LeNet-5, matching the published numbers exactly is entirely achievable.

The implementation is in plain C, with no dependency on third-party libraries such as PyTorch or Caffe, and no C++ features.

(Figure: the LeNet-5 network architecture, from LeCun-98.)

2. Layer C1

Formulas

C1 is a convolutional layer.

Number of trainable parameters

$\text{nk} * (\text{kh} * \text{kw} * \text{kc} + 1)$

where:

  • $\text{nk}$: number of kernels
  • $\text{kh}$, $\text{kw}$: kernel height and width
  • $\text{kc}$: number of kernel channels
  • 1: the bias

Number of connections

$(\text{kh} * \text{kw} * \text{kc} + 1) * \text{out}_h * \text{out}_w * \text{out}_c$

where:

  • $\text{kh}$, $\text{kw}$: kernel height and width
  • $\text{kc}$: number of kernel channels
  • 1: the bias
  • $\text{out}_h$, $\text{out}_w$: output feature map height and width
  • $\text{out}_c$: number of output feature map channels, which equals the number of kernels $\text{nk}$
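
For C1 (a 32×32×1 input, six 5×5×1 kernels, 28×28 output per kernel), this gives 6 * (5*5*1 + 1) = 156 trainable parameters and (5*5*1 + 1) * 28 * 28 * 6 = 122,304 connections — exactly the numbers reported in the paper.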

Code

typedef struct ConvHyper
{
    int in_h;
    int in_w;
    int in_c;
    int kh;
    int kw;
    int number_of_kernel;
    int out_h;
    int out_w;
    int out_c;
} ConvHyper;

ConvHyper C1;
double* C1_kernel[6];
double C1_bias[6];
double* C1_output[6];


void init_lenet()
{
    // C1
    {
        C1.in_h = 32;
        C1.in_w = 32;
        C1.in_c = 1;
        C1.kh = 5;
        C1.kw = 5;
        C1.number_of_kernel = 6;
        C1.out_h = 28;
        C1.out_w = 28;
        C1.out_c = C1.number_of_kernel;

        // unpack
        int kh = C1.kh;
        int kw = C1.kw;
        int in_channel = C1.in_c;
        int number_of_kernel = C1.number_of_kernel;
        double** kernel = C1_kernel;
        int out_h = C1.out_h;
        int out_w = C1.out_w;
        double* bias = C1_bias;
        int in_c = C1.in_c;
        int out_c = C1.out_c;

        int fan_in = get_fan_in(in_channel, kh, kw);
        int fan_out = get_fan_out(number_of_kernel, kh, kw);

        for (int k = 0; k < number_of_kernel; k++)
        {
            kernel[k] = (double*)malloc(kh * kw * sizeof(double));
            init_kernel(kernel[k], kh, kw, fan_in, fan_out);

            C1_output[k] = (double*)malloc(out_h * out_w * sizeof(double));
            bias[k] = 0.0;
        }
        int num_of_train_param = number_of_kernel * (kh * kw * in_c + 1);
        int num_of_conn = (kh * kw * in_c + 1) * out_h * out_w * out_c;
        int expected_num_of_train_param = 156;
        int expected_num_of_conn = 122304;
        if (expected_num_of_train_param == num_of_train_param && expected_num_of_conn == num_of_conn)
        {
            printf("Layer C1 has %d trainable parameters, %d connections\n", num_of_train_param, num_of_conn);
        }
        else
        {
            printf("Layer C1 wrong\n");
        }
    }
    ...
}
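
As an aside, the two counting rules can be factored into small reusable helpers. Below is a minimal sketch (this post inlines the arithmetic in each layer block instead); it assumes every kernel sees all kc input channels, so it applies to C1 and C5 but not to C3:

int conv_num_params(int nk, int kh, int kw, int kc)
{
    // kh*kw weights per channel, plus one bias per kernel
    return nk * (kh * kw * kc + 1);
}

int conv_num_conns(int nk, int kh, int kw, int kc, int out_h, int out_w)
{
    // each output unit consumes kh*kw*kc weights plus one bias
    return (kh * kw * kc + 1) * out_h * out_w * nk;
}

// e.g. conv_num_params(6, 5, 5, 1) == 156
//      conv_num_conns(6, 5, 5, 1, 28, 28) == 122304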

3. Layer S2

Formulas

Number of trainable parameters

$\text{nk} * 2$

where:

  • $\text{nk}$: number of kernels
  • 2: one bias plus one coeff per kernel

The bias is easy to understand: each kernel has one. coeff means coefficient, and each kernel likewise has one. As the paper puts it:

The four inputs to a unit in S2 are added, then multiplied by a trainable coefficient, and added to a trainable bias.
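
In code, a single S2 output unit would be computed roughly as follows (an illustrative sketch; in the paper the result is additionally passed through a sigmoidal squashing function, omitted here):

double s2_unit(const double* in, int in_w, int r, int c, double coeff, double bias)
{
    // sum the 2x2 input window, scale by the trainable coefficient, add the bias
    double sum = in[r * in_w + c]       + in[r * in_w + c + 1]
               + in[(r + 1) * in_w + c] + in[(r + 1) * in_w + c + 1];
    return coeff * sum + bias;
}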

Number of connections

$(\text{kh} * \text{kw} + 1) * \text{kc} * \text{out}_h * \text{out}_w$

where:

  • $\text{kh}$, $\text{kw}$: kernel height and width
  • 1: the bias
  • $\text{kc}$: number of kernel channels, which also equals the number of output channels
  • $\text{out}_h$, $\text{out}_w$: output feature map height and width
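
For S2 (2×2 subsampling with stride 2 of C1's six 28×28 maps into six 14×14 maps): 6 * 2 = 12 trainable parameters and (2*2 + 1) * 6 * 14 * 14 = 5,880 connections.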

Code

typedef struct SubsampleHyper
{
    int in_h;
    int in_w;
    int in_c;
    int kh;
    int kw;
    int number_of_kernel;
    int stride_h;
    int stride_w;
    int out_h;
    int out_w;
    int out_c;
} SubsampleHyper;

SubsampleHyper S2;
double* S2_output[6];
double S2_coeff[6];
double S2_bias[6];

void init_lenet()
{
    // S2
    {
        S2.in_h = C1.out_h;
        S2.in_w = C1.out_w;
        S2.in_c = C1.out_c;
        S2.kh = 2;
        S2.kw = 2;
        S2.number_of_kernel = C1.number_of_kernel;
        S2.stride_h = 2;
        S2.stride_w = 2;
        S2.out_h = 14;
        S2.out_w = 14;
        S2.out_c = S2.in_c;

        // unpack
        int number_of_kernel = S2.number_of_kernel;
        int out_h = S2.out_h;
        int out_w = S2.out_w;
        int kh = S2.kh;
        int kw = S2.kw;
        int in_c = S2.in_c;

        for (int k = 0; k < number_of_kernel; k++)
        {
            S2_output[k] = (double*)malloc(out_h * out_w * sizeof(double));
        }
        int num_of_train_param = number_of_kernel * 2; // #bias + #coeff
        int num_of_conn = (kh * kw + 1) * in_c * out_h * out_w;
        int expected_num_of_train_param = 12;
        int expected_num_of_conn = 5880;
        if (expected_num_of_train_param == num_of_train_param && expected_num_of_conn == num_of_conn)
        {
            printf("Layer S2 has %d trainable parameters, %d connections\n", num_of_train_param, num_of_conn);
        }
        else
        {
            printf("Layer S2 wrong\n");
        }
    }

    ...
}

4. Layer C3

4.1 Connection table

Although this layer is also a "convolutional layer", it differs from C1. In C1, each kernel's channel count equals the channel count of the input feature map; in C3, each kernel's channel count is less than or equal to the channel count of the input feature map. The exact counts are given in the following table:

(Table: the C3 connection table from LeCun-98; rows are the six S2 feature maps, columns are the sixteen C3 feature maps.)

An X in the table marks a connection.

C3 has 16 kernels:

  • Kernel 0 has 3 channels and convolves with channels 0, 1, 2 of the input feature map.
  • Kernel 6 has 4 channels and convolves with channels 0, 1, 2, 3 of the input feature map.
  • Kernel 15 has 6 channels and convolves with all channels of the input feature map.

Equivalently: every kernel has 6 channels, but during convolution kernel 0 may only use channels 0, 1, 2; kernel 6 may only use channels 0, 1, 2, 3; and so on.

In other words, we need to use this table from the paper to determine the channel count of each of C3's 16 kernels:

        bool X = true;
        bool O = false;
        bool connection_table[6 * 16] =
        {
        //  0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
            X, O, O, O, X, X, X, O, O, X, X, X, X, O, X, X,
            X, X, O, O, O, X, X, X, O, O, X, X, X, X, O, X,
            X, X, X, O, O, O, X, X, X, O, O, X, O, X, X, X,
            O, X, X, X, O, O, X, X, X, X, O, O, X, O, X, X,
            O, O, X, X, X, O, O, X, X, X, X, O, X, X, O, X,
            O, O, O, X, X, X, O, O, X, X, X, X, O, X, X, X,
        };
        int num_of_train_param = 0;
        int kc[16] = { 0 };
        for (int i = 0; i < in_c; i++)
        {
            for (int j = 0; j < out_c; j++)
            {
                int idx = i * out_c + j;
                kc[j] += connection_table[idx];
            }
        }

        // equivalent to the following:
        //int kc[16] = {
        //    3, 3, 3, 3,
        //    3, 3, 4, 4,
        //    4, 4, 4, 4,
        //    4, 4, 4, 6
        //};
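
Reading the table column by column confirms the commented-out array: the first six columns contain three X's each, the next nine contain four each, and the last column contains six.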

4.2 Formulas

Number of trainable parameters

$\Sigma_{k}(\text{kh} * \text{kw} * \text{kc}[k] + 1)$

where:

  • $\text{kh}$, $\text{kw}$: kernel height and width
  • $\text{kc}[k]$: channel count of kernel $k$
  • $\Sigma_{k}$: sum over all kernels
  • 1: the bias

Number of connections

$\Sigma_{k}(\text{kh} * \text{kw} * \text{kc}[k] + 1) * (\text{out}_h * \text{out}_w)$

where:

  • $\text{kh}$, $\text{kw}$: kernel height and width
  • $\text{kc}[k]$: channel count of kernel $k$
  • $\Sigma_{k}$: sum over all kernels
  • 1: the bias
  • $\text{out}_h$, $\text{out}_w$: output feature map height and width
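
With these kc values (six kernels with 3 channels, nine with 4, one with 6, for a total of 60 channels): 25 * 60 + 16 = 1,516 trainable parameters, and 1,516 * 10 * 10 = 151,600 connections.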

4.3 Code

ConvHyper C3;
double* C3_kernel[16];
double C3_bias[16];
double* C3_output[16];

void init_lenet()
{
    ...
    // C3
    {
        C3.in_h = S2.out_h;
        C3.in_w = S2.out_w;
        C3.in_c = S2.out_c;
        C3.kh = 5;
        C3.kw = 5;
        C3.number_of_kernel = 16;
        C3.out_h = 10;
        C3.out_w = 10;
        C3.out_c = C3.number_of_kernel;

        // unpack
        int kh = C3.kh;
        int kw = C3.kw;
        int in_channel = C3.in_c;
        int number_of_kernel = C3.number_of_kernel;
        double** kernel = C3_kernel;
        int out_h = C3.out_h;
        int out_w = C3.out_w;
        double* bias = C3_bias;
        int in_c = C3.in_c;
        int out_c = C3.out_c;

        int fan_in = get_fan_in(in_channel, kh, kw);
        int fan_out = get_fan_out(number_of_kernel, kh, kw);

        for (int k = 0; k < number_of_kernel; k++)
        {
            kernel[k] = (double*)malloc(kh * kw * sizeof(double)); // NOTE: one kh*kw plane per kernel; a full multi-channel C3 kernel would need kh*kw*kc[k] doubles
            init_kernel(kernel[k], kh, kw, fan_in, fan_out);

            C3_output[k] = (double*)malloc(out_h * out_w * sizeof(double));
            bias[k] = 0.0;
        }

        bool X = true;
        bool O = false;
        bool connection_table[6 * 16] =
        {
        //  0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
            X, O, O, O, X, X, X, O, O, X, X, X, X, O, X, X,
            X, X, O, O, O, X, X, X, O, O, X, X, X, X, O, X,
            X, X, X, O, O, O, X, X, X, O, O, X, O, X, X, X,
            O, X, X, X, O, O, X, X, X, X, O, O, X, O, X, X,
            O, O, X, X, X, O, O, X, X, X, X, O, X, X, O, X,
            O, O, O, X, X, X, O, O, X, X, X, X, O, X, X, X,
        };
        int num_of_train_param = 0;
        int kc[16] = { 0 };
        for (int i = 0; i < in_c; i++)
        {
            for (int j = 0; j < out_c; j++)
            {
                int idx = i * out_c + j;
                kc[j] += connection_table[idx];
            }
        }

        //int kc[16] = {
        //    3, 3, 3, 3,
        //    3, 3, 4, 4,
        //    4, 4, 4, 4,
        //    4, 4, 4, 6
        //};

        for (int k = 0; k < out_c; k++)
        {
           num_of_train_param += (kh * kw * kc[k] + 1);
        }

        int num_of_conn = 0;
        for (int k = 0; k < out_c; k++)
        {
            num_of_conn += (kh * kw * kc[k] + 1) * out_h * out_w;
        }

        int expected_num_of_train_param = 1516;
        int expected_num_of_conn = 151600;
        if (expected_num_of_train_param == num_of_train_param && expected_num_of_conn == num_of_conn)
        {
            printf("Layer C3 has %d trainable parameters, %d connections\n", num_of_train_param, num_of_conn);
        }
        else
        {
            printf("Layer C3 wrong\n");
        }
    }
    ...
}

5. Layer S4

Formulas

Identical to the S2 formulas; omitted here.
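
Plugging in S4's shapes (2×2 subsampling with stride 2 of C3's sixteen 10×10 maps into sixteen 5×5 maps): 16 * 2 = 32 trainable parameters and (2*2 + 1) * 16 * 5 * 5 = 2,000 connections.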

Code


    // S4
    {
        S4.in_h = C3.out_h;
        S4.in_w = C3.out_w;
        S4.in_c = C3.out_c;
        S4.kh = 2;
        S4.kw = 2;
        S4.number_of_kernel = C3.number_of_kernel;
        S4.stride_h = 2;
        S4.stride_w = 2;
        S4.out_h = 5;
        S4.out_w = 5;
        S4.out_c = S4.in_c;

        // unpack
        int number_of_kernel = S4.number_of_kernel;
        int out_h = S4.out_h;
        int out_w = S4.out_w;
        int kh = S4.kh;
        int kw = S4.kw;
        int in_c = S4.in_c;

        for (int k = 0; k < number_of_kernel; k++)
        {
            S4_output[k] = (double*)malloc(out_h * out_w * sizeof(double));
        }

        int num_of_train_param = number_of_kernel * 2; // #bias + #coeff
        int num_of_conn = (kh * kw + 1) * in_c * out_h * out_w;
        int expected_num_of_train_param = 32;
        int expected_num_of_conn = 2000;
        if (expected_num_of_train_param == num_of_train_param && expected_num_of_conn == num_of_conn)
        {
            printf("Layer S4 has %d trainable parameters, %d connections\n", num_of_train_param, num_of_conn);
        }
        else
        {
            printf("Layer S4 wrong\n");
        }
    }

6. Layer C5

Formulas

Identical to the C1 formulas; omitted here.
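
Plugging in C5's shapes (sixteen 5×5 input maps, 120 kernels of size 5×5×16, 1×1 output): (5*5*16 + 1) * 1 * 1 * 120 = 48,120. Because C5's output maps are 1×1, no weight is reused across spatial positions, so the parameter count equals the connection count — which is why the paper (and the code below) reports the single figure of 48,120 trainable connections.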

Code

    // C5
    {
        C5.in_h = S4.out_h;
        C5.in_w = S4.out_w;
        C5.in_c = S4.out_c;
        C5.kh = 5;
        C5.kw = 5;
        C5.number_of_kernel = 120;
        C5.out_h = 1;
        C5.out_w = 1;
        C5.out_c = C5.number_of_kernel;

        // unpack
        int kh = C5.kh;
        int kw = C5.kw;
        int in_channel = C5.in_c;
        int number_of_kernel = C5.number_of_kernel;
        double** kernel = C5_kernel;
        int out_h = C5.out_h;
        int out_w = C5.out_w;
        double* bias = C5_bias;
        int in_c = C5.in_c;
        int out_c = C5.out_c;

        int fan_in = get_fan_in(in_channel, kh, kw);
        int fan_out = get_fan_out(number_of_kernel, kh, kw);

        for (int k = 0; k < number_of_kernel; k++)
        {
            kernel[k] = (double*)malloc(kh * kw * sizeof(double)); // NOTE: one kh*kw plane per kernel; a full C5 kernel spans all 16 input channels (kh*kw*in_c doubles)
            init_kernel(kernel[k], kh, kw, fan_in, fan_out);

            C5_output[k] = (double*)malloc(out_h * out_w * sizeof(double));
            bias[k] = 0.0;
        }
        int num_of_trainable_conn = (kh * kw * in_c + 1) * out_h * out_w * out_c;
        int expected_num_of_trainable_conn = 48120;
        if (expected_num_of_trainable_conn == num_of_trainable_conn)
        {
            printf("Layer C5 has %d trainable connections\n", num_of_trainable_conn);
        }
        else
        {
            printf("Layer C5 wrong\n");
        }
    }

7. Layer F6

Formulas

F6 is a fully connected layer. Its number of trainable parameters equals its number of connections:

$\text{nk} * (\text{feat}_{in} + 1)$

where:

  • $\text{nk}$: number of kernels (output units)
  • $\text{feat}_{in}$: number of input features (the input treated as a 1-D vector)
  • 1: the bias
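
Plugging in: 84 * (120 + 1) = 10,164 trainable parameters, and equally many connections.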

Code

    // F6
    {
        F6.in_num = C5.out_h * C5.out_w * C5.out_c;
        F6.out_num = 84;

        // unpack
        int in_num = F6.in_num;
        int out_num = F6.out_num;
        int num_of_kernel = out_num;
        for (int k = 0; k < num_of_kernel; k++)
        {
            F6_kernel[k] = (double*)malloc(in_num * sizeof(double));
            F6_bias[k] = 0;
        }
        int num_of_train_param = num_of_kernel * (in_num + 1);
        int expected_num_of_train_param = 10164;
        if (expected_num_of_train_param == num_of_train_param)
        {
            printf("Layer F6 has %d trainable parameters\n", num_of_train_param);
        }
        else
        {
            printf("Layer F6 wrong\n");
        }
    }

8. Complete initialization code for layers C1–F6

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

typedef struct ConvHyper
{
    int in_h;
    int in_w;
    int in_c;
    int kh;
    int kw;
    int number_of_kernel;
    int out_h;
    int out_w;
    int out_c;
} ConvHyper;

ConvHyper C1;
double* C1_kernel[6];
double C1_bias[6];
double* C1_output[6];

typedef struct SubsampleHyper
{
    int in_h;
    int in_w;
    int in_c;
    int kh;
    int kw;
    int number_of_kernel;
    int stride_h;
    int stride_w;
    int out_h;
    int out_w;
    int out_c;
} SubsampleHyper;

SubsampleHyper S2;
double* S2_output[6];
double S2_coeff[6];
double S2_bias[6];

ConvHyper C3;
double* C3_kernel[16];
double C3_bias[16];
double* C3_output[16];

SubsampleHyper S4;
double* S4_output[16];

ConvHyper C5;
double* C5_kernel[120];
double C5_bias[120];
double* C5_output[120];

typedef struct FullyConnectedHyper
{
    int in_num;
    int out_num;
} FullyConnectedHyper;

FullyConnectedHyper F6;
double F6_output[84];
double* F6_kernel[84];
double F6_bias[84];

void init_lenet()
{
    // C1
    {
        C1.in_h = 32;
        C1.in_w = 32;
        C1.in_c = 1;
        C1.kh = 5;
        C1.kw = 5;
        C1.number_of_kernel = 6;
        C1.out_h = 28;
        C1.out_w = 28;
        C1.out_c = C1.number_of_kernel;

        // unpack
        int kh = C1.kh;
        int kw = C1.kw;
        int in_channel = C1.in_c;
        int number_of_kernel = C1.number_of_kernel;
        double** kernel = C1_kernel;
        int out_h = C1.out_h;
        int out_w = C1.out_w;
        double* bias = C1_bias;
        int in_c = C1.in_c;
        int out_c = C1.out_c;

        int fan_in = get_fan_in(in_channel, kh, kw);
        int fan_out = get_fan_out(number_of_kernel, kh, kw);

        for (int k = 0; k < number_of_kernel; k++)
        {
            kernel[k] = (double*)malloc(kh * kw * sizeof(double));
            init_kernel(kernel[k], kh, kw, fan_in, fan_out);

            C1_output[k] = (double*)malloc(out_h * out_w * sizeof(double));
            bias[k] = 0.0;
        }
        int num_of_train_param = number_of_kernel * (kh * kw * in_c + 1);
        int num_of_conn = (kh * kw * in_c + 1) * out_h * out_w * out_c;
        int expected_num_of_train_param = 156;
        int expected_num_of_conn = 122304;
        if (expected_num_of_train_param == num_of_train_param && expected_num_of_conn == num_of_conn)
        {
            printf("Layer C1 has %d trainable parameters, %d connections\n", num_of_train_param, num_of_conn);
        }
        else
        {
            printf("Layer C1 wrong\n");
        }
    }

    // S2
    {
        S2.in_h = C1.out_h;
        S2.in_w = C1.out_w;
        S2.in_c = C1.out_c;
        S2.kh = 2;
        S2.kw = 2;
        S2.number_of_kernel = C1.number_of_kernel;
        S2.stride_h = 2;
        S2.stride_w = 2;
        S2.out_h = 14;
        S2.out_w = 14;
        S2.out_c = S2.in_c;

        // unpack
        int number_of_kernel = S2.number_of_kernel;
        int out_h = S2.out_h;
        int out_w = S2.out_w;
        int kh = S2.kh;
        int kw = S2.kw;
        int in_c = S2.in_c;

        for (int k = 0; k < number_of_kernel; k++)
        {
            S2_output[k] = (double*)malloc(out_h * out_w * sizeof(double));
        }
        int num_of_train_param = number_of_kernel * 2; // #bias + #coeff
        int num_of_conn = (kh * kw + 1) * in_c * out_h * out_w;
        int expected_num_of_train_param = 12;
        int expected_num_of_conn = 5880;
        if (expected_num_of_train_param == num_of_train_param && expected_num_of_conn == num_of_conn)
        {
            printf("Layer S2 has %d trainable parameters, %d connections\n", num_of_train_param, num_of_conn);
        }
        else
        {
            printf("Layer S2 wrong\n");
        }
    }

    // C3
    {
        C3.in_h = S2.out_h;
        C3.in_w = S2.out_w;
        C3.in_c = S2.out_c;
        C3.kh = 5;
        C3.kw = 5;
        C3.number_of_kernel = 16;
        C3.out_h = 10;
        C3.out_w = 10;
        C3.out_c = C3.number_of_kernel;

        // unpack
        int kh = C3.kh;
        int kw = C3.kw;
        int in_channel = C3.in_c;
        int number_of_kernel = C3.number_of_kernel;
        double** kernel = C3_kernel;
        int out_h = C3.out_h;
        int out_w = C3.out_w;
        double* bias = C3_bias;
        int in_c = C3.in_c;
        int out_c = C3.out_c;

        int fan_in = get_fan_in(in_channel, kh, kw);
        int fan_out = get_fan_out(number_of_kernel, kh, kw);

        for (int k = 0; k < number_of_kernel; k++)
        {
            kernel[k] = (double*)malloc(kh * kw * sizeof(double)); // NOTE: one kh*kw plane per kernel; a full multi-channel C3 kernel would need kh*kw*kc[k] doubles
            init_kernel(kernel[k], kh, kw, fan_in, fan_out);

            C3_output[k] = (double*)malloc(out_h * out_w * sizeof(double));
            bias[k] = 0.0;
        }

        bool X = true;
        bool O = false;
        bool connection_table[6 * 16] =
        {
        //  0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
            X, O, O, O, X, X, X, O, O, X, X, X, X, O, X, X,
            X, X, O, O, O, X, X, X, O, O, X, X, X, X, O, X,
            X, X, X, O, O, O, X, X, X, O, O, X, O, X, X, X,
            O, X, X, X, O, O, X, X, X, X, O, O, X, O, X, X,
            O, O, X, X, X, O, O, X, X, X, X, O, X, X, O, X,
            O, O, O, X, X, X, O, O, X, X, X, X, O, X, X, X,
        };
        int num_of_train_param = 0;
        int kc[16] = { 0 };
        for (int i = 0; i < in_c; i++)
        {
            for (int j = 0; j < out_c; j++)
            {
                int idx = i * out_c + j;
                kc[j] += connection_table[idx];
            }
        }

        //int kc[16] = {
        //    3, 3, 3, 3,
        //    3, 3, 4, 4,
        //    4, 4, 4, 4,
        //    4, 4, 4, 6
        //};

        for (int k = 0; k < out_c; k++)
        {
           num_of_train_param += (kh * kw * kc[k] + 1);
        }

        int num_of_conn = 0;
        for (int k = 0; k < out_c; k++)
        {
            num_of_conn += (kh * kw * kc[k] + 1) * out_h * out_w;
        }

        int expected_num_of_train_param = 1516;
        int expected_num_of_conn = 151600;
        if (expected_num_of_train_param == num_of_train_param && expected_num_of_conn == num_of_conn)
        {
            printf("Layer C3 has %d trainable parameters, %d connections\n", num_of_train_param, num_of_conn);
        }
        else
        {
            printf("Layer C3 wrong\n");
        }
    }

    // S4
    {
        S4.in_h = C3.out_h;
        S4.in_w = C3.out_w;
        S4.in_c = C3.out_c;
        S4.kh = 2;
        S4.kw = 2;
        S4.number_of_kernel = C3.number_of_kernel;
        S4.stride_h = 2;
        S4.stride_w = 2;
        S4.out_h = 5;
        S4.out_w = 5;
        S4.out_c = S4.in_c;

        // unpack
        int number_of_kernel = S4.number_of_kernel;
        int out_h = S4.out_h;
        int out_w = S4.out_w;
        int kh = S4.kh;
        int kw = S4.kw;
        int in_c = S4.in_c;

        for (int k = 0; k < number_of_kernel; k++)
        {
            S4_output[k] = (double*)malloc(out_h * out_w * sizeof(double));
        }

        int num_of_train_param = number_of_kernel * 2; // #bias + #coeff
        int num_of_conn = (kh * kw + 1) * in_c * out_h * out_w;
        int expected_num_of_train_param = 32;
        int expected_num_of_conn = 2000;
        if (expected_num_of_train_param == num_of_train_param && expected_num_of_conn == num_of_conn)
        {
            printf("Layer S4 has %d trainable parameters, %d connections\n", num_of_train_param, num_of_conn);
        }
        else
        {
            printf("Layer S4 wrong\n");
        }
    }

    // C5
    {
        C5.in_h = S4.out_h;
        C5.in_w = S4.out_w;
        C5.in_c = S4.out_c;
        C5.kh = 5;
        C5.kw = 5;
        C5.number_of_kernel = 120;
        C5.out_h = 1;
        C5.out_w = 1;
        C5.out_c = C5.number_of_kernel;

        // unpack
        int kh = C5.kh;
        int kw = C5.kw;
        int in_channel = C5.in_c;
        int number_of_kernel = C5.number_of_kernel;
        double** kernel = C5_kernel;
        int out_h = C5.out_h;
        int out_w = C5.out_w;
        double* bias = C5_bias;
        int in_c = C5.in_c;
        int out_c = C5.out_c;

        int fan_in = get_fan_in(in_channel, kh, kw);
        int fan_out = get_fan_out(number_of_kernel, kh, kw);

        for (int k = 0; k < number_of_kernel; k++)
        {
            kernel[k] = (double*)malloc(kh * kw * sizeof(double)); // NOTE: one kh*kw plane per kernel; a full C5 kernel spans all 16 input channels (kh*kw*in_c doubles)
            init_kernel(kernel[k], kh, kw, fan_in, fan_out);

            C5_output[k] = (double*)malloc(out_h * out_w * sizeof(double));
            bias[k] = 0.0;
        }
        int num_of_trainable_conn = (kh * kw * in_c + 1) * out_h * out_w * out_c;
        int expected_num_of_trainable_conn = 48120;
        if (expected_num_of_trainable_conn == num_of_trainable_conn)
        {
            printf("Layer C5 has %d trainable connections\n", num_of_trainable_conn);
        }
        else
        {
            printf("Layer C5 wrong\n");
        }
    }

    // F6
    {
        F6.in_num = C5.out_h * C5.out_w * C5.out_c;
        F6.out_num = 84;

        // unpack
        int in_num = F6.in_num;
        int out_num = F6.out_num;
        int num_of_kernel = out_num;
        for (int k = 0; k < num_of_kernel; k++)
        {
            F6_kernel[k] = (double*)malloc(in_num * sizeof(double));
            F6_bias[k] = 0;
        }
        int num_of_train_param = num_of_kernel * (in_num + 1);
        int expected_num_of_train_param = 10164;
        if (expected_num_of_train_param == num_of_train_param)
        {
            printf("Layer F6 has %d trainable parameters\n", num_of_train_param);
        }
        else
        {
            printf("Layer F6 wrong\n");
        }
    }
}
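
The helper functions get_fan_in, get_fan_out, and init_kernel come from earlier posts in this series and are not shown here. A minimal sketch consistent with the paper's initialization scheme — weights drawn uniformly from [-2.4/fan_in, 2.4/fan_in] — might look like this (the exact definitions used in the series may differ):

int get_fan_in(int in_c, int kh, int kw)
{
    return in_c * kh * kw;
}

int get_fan_out(int number_of_kernel, int kh, int kw)
{
    return number_of_kernel * kh * kw;
}

void init_kernel(double* weights, int kh, int kw, int fan_in, int fan_out)
{
    (void)fan_out; // kept for symmetry; unused in this sketch
    double limit = 2.4 / (double)fan_in;
    for (int i = 0; i < kh * kw; i++)
    {
        weights[i] = ((double)rand() / RAND_MAX) * 2.0 * limit - limit;
    }
}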

Program output:

Layer C1 has 156 trainable parameters, 122304 connections
Layer S2 has 12 trainable parameters, 5880 connections
Layer C3 has 1516 trainable parameters, 151600 connections
Layer S4 has 32 trainable parameters, 2000 connections
Layer C5 has 48120 trainable connections
Layer F6 has 10164 trainable parameters

9. References

  1. https://vision.stanford.edu/cs598_spring07/papers/Lecun98.pdf
