MindSpore 25-Day Learning Check-in Camp, Day 16 | LLM - MindNLP ChatGLM-6B StreamChat


Check-in

Contents

Check-in

Task Description

Environment Setup

Deployment

ChatGLM-6B Hands-on Screenshots

ChatGLM-6B Model Structure Breakdown

ChatGLM2-6B Model Structure Breakdown


Task Description

Load the ChatGLM model weights released by Zhipu AI (there are currently four versions); this run mainly tries chatglm-6b.

chatglm-6b loads normally on the NPU card provided for this experiment. Loading versions 2 and 3 failed at the tokenizer stage, but the model print output still comes through, and it shows that, compared with version 1, the chatglm2 and chatglm3 structures expand the vocabulary by 20k+ entries. As we know, chatglm3-6b additionally supports function calling.

Environment Setup

  • EulerOS 2.0 (SP8);
  • Python 3.9;
  • Huawei Ascend 910A NPU;
  • mindnlp, mindspore, and mindvision installed.

  • Additional installs:
!pip install mdtex2html
!pip install -i https://pypi.mirrors.ustc.edu.cn/simple mindspore==2.2.14
  • Environment variable: HF_ENDPOINT=https://hf-mirror.com (one in-notebook way to set it is shown below).
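
A minimal sketch for setting the same variable from inside the notebook, assuming it runs before mindnlp triggers any download (equivalent to exporting it in the shell):

import os

# Point model/tokenizer downloads at the mirror before any from_pretrained call.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"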

Deployment

from mindnlp.transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import gradio as gr
import mdtex2html


### Download and load the model weights (half precision)
model = AutoModelForSeq2SeqLM.from_pretrained(
            'ZhipuAI/ChatGLM-6B', 
            mirror="modelscope").half()
model.set_train(False)  # switch to inference mode

tokenizer = AutoTokenizer.from_pretrained(
            'ZhipuAI/ChatGLM-6B', 
             mirror="modelscope")

### Adjust the parameters and the prompt to try out the model
prompt = '你好'
history = []
response, _ = model.chat(tokenizer, prompt, history=history, max_length=20)
response
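
The gradio and mdtex2html imports above are there for a streaming web demo. Below is a minimal sketch of how that could be wired up, assuming the ported model exposes the same stream_chat generator as the upstream ChatGLM-6B implementation (yielding the partial response plus the updated history); the function and widget names here are illustrative, not part of the original run:

def predict(user_input, chatbot, history):
    # Append the new user turn, then stream partial model outputs into it.
    chatbot.append((user_input, ""))
    for response, history in model.stream_chat(tokenizer, user_input, history=history):
        chatbot[-1] = (user_input, mdtex2html.convert(response))
        yield chatbot, history

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    history = gr.State([])
    user_box = gr.Textbox(placeholder="Enter a prompt...")
    user_box.submit(predict, [user_box, chatbot, history], [chatbot, history])

demo.launch()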

ChatGLM-6B Hands-on Screenshots

As shown in the screenshot above, the ChatGLM-6B tokenizer has a vocabulary of 130k+ entries and 5 special tokens.
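
A quick way to double-check those numbers in the notebook; this assumes the mindnlp tokenizer follows the usual Hugging Face tokenizer interface (vocab_size, all_special_tokens), which is an assumption on my part rather than something shown in the screenshot:

print(tokenizer.vocab_size)          # expected to be 130k+ for ChatGLM-6B
print(tokenizer.all_special_tokens)  # the 5 special tokens mentioned above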

The model print output follows.

ChatGLM-6B Model Structure Breakdown

1. Top-level class: ChatGLMForConditionalGeneration, a generative Transformer model for conditional text generation.

2. Second level: the ChatGLMModel layer, a Transformer stack that forms the core of the model.

3. Second level: lm_head, a Dense (fully connected) layer with dim[in, out] = [4096, 130528].

4. The third-level components under ChatGLMModel come in three parts:

        >> word_embeddings, the embedding layer: dim[in, out] = [130528, 4096], i.e. a vocabulary of 130528 entries, each mapped to a 4096-dimensional vector.

        >> layers, the network body: the main part of the Transformer, a stack of 28 GLMBlocks.

        >> final_layernorm, the final layer normalization.

5. Structure of a GLMBlock (a parameter-count sketch follows this list):

        >> 1. input_layernorm: layer normalization applied to the input before the attention mechanism.

        >> 2. attention (SelfAttention): self-attention, which computes attention weights across the positions of the input sequence. It has three parts: RotaryEmbedding, rotary position embeddings that strengthen the handling of positional information; Dense (query_key_value), which produces the query (Q), key (K), and value (V) vectors; and Dense (dense), the output projection of the attention block.

        >> 3. post_attention_layernorm: layer normalization applied after self-attention.

        >> 4. mlp: a multi-layer perceptron that applies a further non-linear transformation to the attention output. The MLP here is a GLU (gated linear unit) block: dense_h_to_4h expands the input dimension by a factor of 4, and dense_4h_to_h projects it back down.
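
To make the printed shapes concrete, here is a rough per-block parameter count computed only from the dimensions visible in the printout (hidden size 4096, QKV output 12288, MLP inner size 16384); it ignores anything not shown, such as possible weight sharing between lm_head and the embedding:

# Per-GLMBlock parameter arithmetic using the shapes from print(model).
hidden, qkv_out, ffn = 4096, 12288, 16384

per_block = (
    2 * hidden                    # input_layernorm: weight + bias
    + hidden * qkv_out + qkv_out  # query_key_value Dense: weight + bias
    + hidden * hidden + hidden    # attention output Dense: weight + bias
    + 2 * hidden                  # post_attention_layernorm: weight + bias
    + hidden * ffn + ffn          # dense_h_to_4h: weight + bias
    + ffn * hidden + hidden       # dense_4h_to_h: weight + bias
)
print(f"params per GLMBlock: {per_block:,}")                     # about 201 million
print(f"28 GLMBlocks: {28 * per_block / 1e9:.2f} B parameters")  # about 5.64 B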

The chatglm-6b model print output is shown below.

print(model)

ChatGLMForConditionalGeneration<
  (transformer): ChatGLMModel<
    (word_embeddings): Embedding<vocab_size=130528, embedding_size=4096, use_one_hot=False, weight=Parameter (Tensor(shape=[130528, 4096], dtype=Float16, value=[...], name=transformer.word_embeddings.weight), requires_grad=True), dtype=Float16, padding_idx=None>
    (layers): CellList<
      (0): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.0.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.0.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.0.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.0.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (1): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.1.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.1.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.1.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.1.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (2): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.2.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.2.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.2.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.2.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (3): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.3.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.3.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.3.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.3.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (4): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.4.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.4.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.4.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.4.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (5): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.5.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.5.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.5.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.5.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (6): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.6.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.6.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.6.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.6.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (7): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.7.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.7.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.7.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.7.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (8): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.8.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.8.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.8.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.8.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (9): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.9.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.9.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.9.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.9.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (10): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.10.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.10.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.10.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.10.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (11): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.11.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.11.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.11.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.11.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (12): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.12.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.12.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.12.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.12.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (13): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.13.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.13.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.13.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.13.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (14): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.14.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.14.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.14.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.14.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (15): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.15.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.15.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.15.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.15.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (16): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.16.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.16.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.16.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.16.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (17): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.17.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.17.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.17.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.17.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (18): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.18.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.18.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.18.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.18.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (19): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.19.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.19.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.19.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.19.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (20): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.20.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.20.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.20.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.20.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (21): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.21.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.21.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.21.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.21.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (22): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.22.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.22.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.22.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.22.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (23): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.23.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.23.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.23.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.23.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (24): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.24.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.24.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.24.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.24.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (25): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.25.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.25.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.25.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.25.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (26): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.26.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.26.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.26.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.26.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (27): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.27.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.27.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.27.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.27.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      >
    (final_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.final_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.final_layernorm.bias), requires_grad=True)>
    >
  (lm_head): Dense<input_channels=4096, output_channels=130528>
  >

ChatGLM2-6B Model Structure Breakdown

The chatglm2-6b model print output is shown below. Compared with version 1, the module structure is unchanged in this printout; only the vocab_size has grown to 150k+ (150528 vs. 130528).
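
The vocabulary gap between the two printouts also lines up with the "20k+" note above; this is simple arithmetic on the printed shapes, nothing more:

# Extra embedding parameters implied by the larger vocabulary (hidden size 4096).
extra_tokens = 150528 - 130528
print(extra_tokens)                                                      # 20000 additional vocabulary entries
print(f"{extra_tokens * 4096 / 1e6:.1f} M extra embedding parameters")  # about 81.9 M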

print(model)


ChatGLMForConditionalGeneration<
  (transformer): ChatGLMModel<
    (word_embeddings): Embedding<vocab_size=150528, embedding_size=4096, use_one_hot=False, weight=Parameter (Tensor(shape=[150528, 4096], dtype=Float16, value=[...], name=transformer.word_embeddings.weight), requires_grad=True), dtype=Float16, padding_idx=None>
    (layers): CellList<
      (0): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.0.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.0.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.0.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.0.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (1): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.1.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.1.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.1.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.1.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (2): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.2.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.2.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.2.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.2.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (3): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.3.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.3.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.3.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.3.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (4): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.4.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.4.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.4.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.4.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (5): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.5.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.5.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.5.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.5.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (6): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.6.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.6.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.6.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.6.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (7): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.7.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.7.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.7.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.7.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (8): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.8.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.8.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.8.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.8.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (9): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.9.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.9.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.9.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.9.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (10): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.10.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.10.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.10.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.10.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (11): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.11.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.11.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.11.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.11.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (12): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.12.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.12.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.12.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.12.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (13): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.13.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.13.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.13.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.13.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (14): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.14.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.14.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.14.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.14.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (15): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.15.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.15.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.15.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.15.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (16): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.16.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.16.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.16.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.16.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (17): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.17.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.17.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.17.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.17.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (18): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.18.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.18.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.18.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.18.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (19): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.19.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.19.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.19.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.19.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (20): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.20.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.20.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.20.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.20.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (21): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.21.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.21.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.21.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.21.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (22): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.22.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.22.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.22.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.22.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (23): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.23.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.23.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.23.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.23.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (24): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.24.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.24.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.24.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.24.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (25): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.25.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.25.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.25.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.25.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (26): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.26.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.26.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.26.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.26.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      (27): GLMBlock<
        (input_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.27.input_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.27.input_layernorm.bias), requires_grad=True)>
        (attention): SelfAttention<
          (rotary_emb): RotaryEmbedding<>
          (query_key_value): Dense<input_channels=4096, output_channels=12288, has_bias=True>
          (dense): Dense<input_channels=4096, output_channels=4096, has_bias=True>
          >
        (post_attention_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.27.post_attention_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.layers.27.post_attention_layernorm.bias), requires_grad=True)>
        (mlp): GLU<
          (dense_h_to_4h): Dense<input_channels=4096, output_channels=16384, has_bias=True>
          (dense_4h_to_h): Dense<input_channels=16384, output_channels=4096, has_bias=True>
          >
        >
      >
    (final_layernorm): LayerNorm<normalized_shape=[4096], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.final_layernorm.weight), requires_grad=True), bias=Parameter (Tensor(shape=[4096], dtype=Float16, value=[...], name=transformer.final_layernorm.bias), requires_grad=True)>
    >
  (lm_head): Dense<input_channels=4096, output_channels=130528>
  >
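
打印结果到此结束,可以看到 28 个 GLMBlock 的结构完全一致。结合本文标题中的 StreamChat,下面补充一个流式对话的示例草稿:这里假设 mindnlp 版 ChatGLM-6B 与官方实现一致,提供 stream_chat 生成器接口,逐步返回 (response, history);max_length、top_p、temperature 等参数名均为参照官方用法的假设,实际以所安装的 mindnlp 版本接口为准。

### 流式对话示例(草稿):复用前文已加载的 model 和 tokenizer
prompt = '你好'
history = []
printed_len = 0  # 记录已打印的字符数

# 假设 stream_chat 是生成器,每次迭代返回截至当前的完整回复
for response, history in model.stream_chat(
        tokenizer, prompt, history=history,
        max_length=2048, top_p=0.7, temperature=0.95):
    # 只输出相对上一次新增的部分,模拟打字机效果
    print(response[printed_len:], end='', flush=True)
    printed_len = len(response)

print()  # 回复结束后换行

如果要接入 gradio 的聊天界面,可以把循环中每次得到的 response 直接传给前端组件刷新显示,即可实现逐字输出的效果。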
