LLaVA: A Multimodal GPT-4-Level Assistant That Fuses Language and Vision to Meet Complex User Needs

Published: 2024/10/6 4:45:53

LLaVA is an assistant built toward multimodal GPT-4-level capability. It combines natural language processing and computer vision to provide powerful multimodal interaction and understanding. LLaVA aims to understand and process language and visual information more deeply, enabling more complex tasks and conversations. The project represents a direction for the next generation of intelligent assistants: ones that better understand and respond to user needs.

  • Demo showcase

Demo link: https://llava.hliu.cc/

"What should I be careful about when I travel here?"

"When visiting the pier on this calm lake, there are a few things you should be careful about. First, pay attention to the weather, since the image shows a cloudy sky. This could indicate a chance of sudden changes such as rain or strong winds, which could make the pier slippery or unsafe. Second, be mindful of the water depth and the possibility of sudden changes in water level, since the pier extends into the lake. Finally, watch out for any wildlife or insects in the area, as they could pose a risk to your safety or health. Staying alert and respecting the environment are essential for a safe and enjoyable experience."

  • Comparison with mainstream large models

1. Installation

  1. Clone this repository and navigate to the LLaVA folder
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
  2. Install the package
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
  3. Install additional packages for training cases
pip install ninja
pip install flash-attn --no-build-isolation

1.1 Upgrade to the latest codebase

git pull
pip uninstall transformers
pip install -e .

2. LLaVA Weights

Please check out our Model Zoo for all public LLaVA checkpoints, and the instructions of how to use the weights.

2.1 Demo

To run our demo, you need to prepare LLaVA checkpoints locally. Please follow the instructions here to download the checkpoints.

2.2 Gradio Web UI

To launch a Gradio demo locally, run the following commands one by one. If you plan to launch multiple model workers to compare different checkpoints, you only need to launch the controller and the web server once.

  • Launch a controller
python -m llava.serve.controller --host 0.0.0.0 --port 10000
  • Launch a Gradio web server.
python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload

You have just launched the Gradio web interface. You can now open it in your browser using the URL printed on screen. You may notice that the model list is empty; don't worry, we have not launched any model worker yet. The list will update automatically when you launch one.

  • Launch a model worker

This is the actual worker that performs the inference on the GPU. Each worker is responsible for a single model specified in --model-path.

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b

Wait until the process finishes loading the model and you see “Uvicorn running on …”. Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.

You can launch as many workers as you want, and compare between different model checkpoints in the same Gradio interface. Please keep the --controller the same, and modify the --port and --worker to a different port number for each worker.

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port <different from 40000, say 40001> --worker http://localhost:<change accordingly, i.e. 40001> --model-path <ckpt2>
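To avoid port collisions when comparing several checkpoints, a small helper (not part of the LLaVA codebase, just an illustrative sketch) can generate one launch command per checkpoint, sharing the controller but giving each worker a distinct port:

```python
# Illustrative helper: build model_worker launch commands for several checkpoints.
# Each worker gets its own port; the controller address is shared by all of them.
def worker_commands(checkpoints, controller="http://localhost:10000", first_port=40000):
    cmds = []
    for i, ckpt in enumerate(checkpoints):
        port = first_port + i
        cmds.append(
            "python -m llava.serve.model_worker "
            f"--host 0.0.0.0 --controller {controller} "
            f"--port {port} --worker http://localhost:{port} "
            f"--model-path {ckpt}"
        )
    return cmds

for cmd in worker_commands(["liuhaotian/llava-v1.5-7b", "liuhaotian/llava-v1.5-13b"]):
    print(cmd)
```

Run each printed command in its own terminal; the Gradio UI will then list both checkpoints side by side.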

If you are using an Apple device with an M1 or M2 chip, you can specify the mps device by using the --device flag: --device mps.

  • Launch a model worker (Multiple GPUs, when GPU VRAM <= 24GB)

If your GPU has less than 24GB of VRAM (e.g., RTX 3090, RTX 4090), you can try running it on multiple GPUs. If you have more than one GPU, our latest codebase will automatically try to use them. You can specify which GPUs to use with CUDA_VISIBLE_DEVICES. Below is an example of running with the first two GPUs.

CUDA_VISIBLE_DEVICES=0,1 python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b
  • Launch a model worker (4-bit, 8-bit inference, quantized)

You can launch the model worker with quantized bits (4-bit or 8-bit), which lets you run inference with a reduced GPU memory footprint, potentially on a GPU with as little as 12GB of VRAM. Note that inference with quantized bits may be less accurate than with the full-precision model. Simply append --load-4bit or --load-8bit to the model worker command you are executing. Below is an example of running with 4-bit quantization.

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b --load-4bit
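For intuition about why quantization shrinks the memory footprint, here is a toy uniform quantizer. This is purely illustrative: the actual 4-bit/8-bit schemes used by the flags above (via the bitsandbytes library) are more sophisticated than this.

```python
# Toy uniform quantization: map each float weight to one of 2**bits integer
# levels, then reconstruct. Storage drops from 16/32 bits per weight to `bits`,
# at the cost of rounding error.
def quantize(weights, bits=4):
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = [round((w - lo) / scale) for w in weights]   # integers in 0..levels
    dq = [lo + v * scale for v in q]                 # reconstructed floats
    return q, dq

q, dq = quantize([-1.0, -0.3, 0.0, 0.7, 1.0])
```

The reconstruction error `dq[i] - weights[i]` is what makes quantized inference slightly less accurate than full precision.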
  • Launch a model worker (LoRA weights, unmerged)

You can launch the model worker with LoRA weights without merging them into the base checkpoint, to save disk space. There is additional loading time, but inference speed is the same as with merged checkpoints. Unmerged LoRA checkpoints do not have "lora-merge" in the model name and are usually much smaller (less than 1GB) than merged checkpoints (13GB for 7B, 25GB for 13B).

To load unmerged LoRA weights, you only need to pass one additional argument, --model-base, which is the base LLM used to train the LoRA weights. You can check the base LLM of each LoRA weight in the Model Zoo.

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1-0719-336px-lora-vicuna-13b-v1.3 --model-base lmsys/vicuna-13b-v1.3

3. CLI Inference

Chat about images with LLaVA without using the Gradio interface. Multi-GPU, 4-bit, and 8-bit quantized inference are supported. With 4-bit quantization, our LLaVA-1.5-7B uses less than 8GB of VRAM on a single GPU.

python -m llava.serve.cli \
    --model-path liuhaotian/llava-v1.5-7b \
    --image-file "https://llava-vl.github.io/static/images/view.jpg" \
    --load-4bit

4. Model Training

Below is the latest training configuration for LLaVA v1.5. For legacy models, please refer to the README of that version. We will add them to a separate document later.

LLaVA training consists of two stages: (1) a feature alignment stage, which uses a 558K subset of the LAION-CC-SBU dataset to connect a frozen pretrained vision encoder to a frozen LLM; and (2) a visual instruction tuning stage, which uses 150K GPT-generated multimodal instruction-following examples, plus around 515K VQA examples from academic tasks, to teach the model to follow multimodal instructions.

LLaVA is trained on 8 A100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the per_device_train_batch_size and increase the gradient_accumulation_steps accordingly. Always keep the global batch size the same: per_device_train_batch_size x gradient_accumulation_steps x num_gpus.
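The batch-size rule above is simple arithmetic, and a tiny helper makes it concrete (the numbers in the example are the LLaVA-v1.5 finetuning global batch size of 128; the helper itself is just a sketch, not part of the training scripts):

```python
# Keep the global batch size fixed when changing GPU count, following
# global = per_device_train_batch_size * gradient_accumulation_steps * num_gpus.
def grad_accum_steps(global_batch, per_device, num_gpus):
    assert global_batch % (per_device * num_gpus) == 0, "not evenly divisible"
    return global_batch // (per_device * num_gpus)

# E.g. with a global batch of 128 on 4 GPUs and per-device batch 16,
# you need gradient accumulation of 2.
print(grad_accum_steps(128, 16, 4))  # -> 2
```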

4.1 Hyperparameters

We use a similar set of hyperparameters as Vicuna in finetuning. Both hyperparameters used in pretraining and finetuning are provided below.

  1. Pretraining

| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
|---|---|---|---|---|---|
| LLaVA-v1.5-13B | 256 | 1e-3 | 1 | 2048 | 0 |

  2. Finetuning

| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
|---|---|---|---|---|---|
| LLaVA-v1.5-13B | 128 | 2e-5 | 1 | 2048 | 0 |

4.2 Download Vicuna checkpoints (automatically)

Our base model, Vicuna v1.5, is an instruction-tuned chatbot and will be downloaded automatically when you run our provided training scripts. No action is needed.

4.3 Pretraining (feature alignment)

Please download the 558K subset of the LAION-CC-SBU dataset with BLIP captions that we use in the paper here.

Pretraining takes around 5.5 hours for LLaVA-v1.5-13B on 8x A100 (80G), due to the increased resolution of 336px. It takes around 3.5 hours for LLaVA-v1.5-7B.

Training script with DeepSpeed ZeRO-2: pretrain.sh.

  • --mm_projector_type mlp2x_gelu: the two-layer MLP vision-language connector.
  • --vision_tower openai/clip-vit-large-patch14-336: CLIP ViT-L/14 336px.
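The mlp2x_gelu connector named above is just Linear → GELU → Linear, projecting a vision feature into the LLM embedding space. The following is a minimal pure-Python sketch of that structure (the real connector is a PyTorch module with learned weights; the toy dimensions and weight values here are made up for illustration):

```python
import math

# GELU activation via the exact erf formulation.
def gelu(x):
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

# A dense layer: out[j] = sum_i w[j][i] * vec[i] + b[j].
def linear(vec, weights, bias):
    return [sum(w * x for w, x in zip(row, vec)) + b for row, b in zip(weights, bias)]

# mlp2x_gelu: two linear layers with a GELU in between.
def mlp2x_gelu(vision_feat, w1, b1, w2, b2):
    hidden = [gelu(h) for h in linear(vision_feat, w1, b1)]
    return linear(hidden, w2, b2)

# Toy dimensions: 3-dim vision feature -> 2-dim hidden -> 2-dim "LLM embedding".
out = mlp2x_gelu([1.0, 0.0, -1.0],
                 [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], [0.0, 0.0],
                 [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
```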

4.4 Visual Instruction Tuning

  1. Prepare data

Please download the annotation of the final mixture of our instruction tuning data, llava_v1_5_mix665k.json, and download the images from the constituent datasets:

  • COCO: train2017
  • GQA: images
  • OCR-VQA: download script
  • TextVQA: train_val_images
  • VisualGenome: part1, part2

After downloading all of them, organize the data as follows in ./playground/data,

├── coco
│   └── train2017
├── gqa
│   └── images
├── ocr_vqa
│   └── images
├── textvqa
│   └── train_images
└── vg
    ├── VG_100K
    └── VG_100K_2
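As a convenience (this checker is not part of the repo), the layout above can be verified programmatically before training starts:

```python
import os

# The directory layout expected under ./playground/data, per the tree above.
EXPECTED = [
    "coco/train2017", "gqa/images", "ocr_vqa/images",
    "textvqa/train_images", "vg/VG_100K", "vg/VG_100K_2",
]

# Return the sub-directories that are missing; an empty list means the layout matches.
def missing_dirs(root="./playground/data"):
    return [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]
```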
  2. Start training!

You may download our pretrained projectors in Model Zoo. It is not recommended to use legacy projectors, as they may be trained with a different version of the codebase, and if any option is off, the model will not function/train as we expected.

Visual instruction tuning takes around 20 hours for LLaVA-v1.5-13B on 8x A100 (80G), due to the increased resolution to 336px. It takes around 10 hours for LLaVA-v1.5-7B on 8x A100 (40G).

Training script with DeepSpeed ZeRO-3: finetune.sh.

New options to note:

  • --mm_projector_type mlp2x_gelu: the two-layer MLP vision-language connector.
  • --vision_tower openai/clip-vit-large-patch14-336: CLIP ViT-L/14 336px.
  • --image_aspect_ratio pad: this pads the non-square images to square, instead of cropping them; it slightly reduces hallucination.
  • --group_by_modality_length True: this should only be used when your instruction tuning dataset contains both language (e.g. ShareGPT) and multimodal (e.g. LLaVA-Instruct). It makes the training sampler only sample a single modality (either image or language) during training, which we observe to speed up training by ~25%, and does not affect the final outcome.
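Geometrically, --image_aspect_ratio pad expands a non-square image onto a square canvas and centers the original, instead of cropping it. The real code operates on PIL images; this sketch (an assumption-light illustration, not the repo's implementation) only computes the canvas size and paste offset:

```python
# Pad-to-square geometry: the canvas side is the longer edge, and the original
# image is centered, so nothing is cropped away.
def pad_to_square(width, height):
    side = max(width, height)
    off_x = (side - width) // 2
    off_y = (side - height) // 2
    return side, (off_x, off_y)

side, offset = pad_to_square(640, 480)  # -> 640, (0, 80)
```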

5. Model Evaluation

In LLaVA-1.5, we evaluate models on a diverse set of 12 benchmarks. To ensure reproducibility, we evaluate the models with greedy decoding. We do not evaluate using beam search, to keep the inference process consistent with the real-time chat demo.

See Evaluation.md.

5.1 GPT-assisted Evaluation

Our GPT-assisted evaluation pipeline for multimodal models provides a comprehensive understanding of the capabilities of vision-language models. Please see our paper for details.

  1. Generate LLaVA responses
python model_vqa.py \
    --model-path ./checkpoints/LLaVA-13B-v0 \
    --question-file \
    playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \
    --image-folder \
    /path/to/coco2014_val \
    --answers-file \
    /path/to/answer-file-our.jsonl
  2. Evaluate the generated responses. In our case, answer-file-ref.jsonl is the response generated by text-only GPT-4 (0314), with the context captions/boxes provided.
OPENAI_API_KEY="sk-***********************************" python llava/eval/eval_gpt_review_visual.py \
    --question playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \
    --context llava/eval/table/caps_boxes_coco2014_val_80.jsonl \
    --answer-list \
    /path/to/answer-file-ref.jsonl \
    /path/to/answer-file-our.jsonl \
    --rule llava/eval/table/rule.json \
    --output /path/to/review.json
  3. Summarize the evaluation results
python summarize_gpt_review.py
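The answer and review files in the steps above are .jsonl: one JSON object per line. A minimal read/write sketch follows; the field names (question_id, text) are assumptions for illustration, not a guaranteed schema:

```python
import json

# Write one JSON object per line.
def write_jsonl(path, records):
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

# Read the file back into a list of dicts, skipping blank lines.
def read_jsonl(path):
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]
```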

6. Model Zoo

To use LLaVA-1.5 checkpoints, your llava package version must be newer than 1.1.0. See the instructions above on how to upgrade to the latest codebase.

If you are interested in including any other details in the Model Zoo, please open an issue :)

The model weights below are merged weights. You do not need to apply delta. The usage of LLaVA checkpoints should comply with the base LLM's model license: Llama 2.

LLaVA-v1.5

| Version | Size | Schedule | Checkpoint | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MME | MM-Bench | MM-Bench-CN | SEED | LLaVA-Bench-Wild | MM-Vet |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LLaVA-1.5 | 7B | full_ft-1e | liuhaotian/llava-v1.5-7b | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 85.9 | 1510.7 | 64.3 | 58.3 | 58.6 | 65.4 | 31.1 |
| LLaVA-1.5 | 13B | full_ft-1e | liuhaotian/llava-v1.5-13b | 80.0 | 63.3 | 53.6 | 71.6 | 61.3 | 85.9 | 1531.3 | 67.7 | 63.6 | 61.6 | 72.5 | 36.1 |
| LLaVA-1.5 | 7B | lora-1e | coming soon | | | | | | | | | | | | |
| LLaVA-1.5 | 13B | lora-1e | coming soon | | | | | | | | | | | | |

LLaVA-v1

Note: We recommend using the most capable LLaVA-v1.5 series above for the best performance.

| Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | LLaVA-Bench-Conv | LLaVA-Bench-Detail | LLaVA-Bench-Complex | LLaVA-Bench-Overall | Download |
|---|---|---|---|---|---|---|---|---|---|---|
| Vicuna-13B-v1.3 | CLIP-L-336px | LCS-558K | 1e | LLaVA-Instruct-80K | proj-1e, lora-1e | 64.3 | 55.9 | 81.7 | 70.1 | LoRA LoRA-Merged |
| LLaMA-2-13B-Chat | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | full_ft-1e | 56.7 | 58.6 | 80.0 | 67.9 | ckpt |
| LLaMA-2-7B-Chat | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | lora-1e | 51.2 | 58.9 | 71.6 | 62.8 | LoRA |

Projector weights

These are projector weights we have pretrained. You can use them for visual instruction tuning. They are only pretrained on image-text pairs and are NOT instruction tuned, which means they do NOT follow instructions as well as our official models and can produce repetitive, lengthy, or garbled outputs. If you want to have nice conversations with LLaVA, use the checkpoints above (LLaVA v1.5).

NOTE: These projector weights are only compatible with the llava>=1.0.0, please check out the latest code base if your local code version is below v1.0.0.

NOTE: When you use our pretrained projector for visual instruction tuning, it is very important to use the same base LLM and vision encoder as the one we used for pretraining the projector. Otherwise, the performance will be very bad.

When using these projector weights to instruction tune your LMM, please make sure that these options are correctly set as follows,

--mm_use_im_start_end False
--mm_use_im_patch_token False
| Base LLM | Vision Encoder | Projection | Pretrain Data | Pretraining schedule | Download |
|---|---|---|---|---|---|
| Vicuna-13B-v1.5 | CLIP-L-336px | MLP-2x | LCS-558K | 1e | projector |
| Vicuna-7B-v1.5 | CLIP-L-336px | MLP-2x | LCS-558K | 1e | projector |
| LLaMA-2-13B-Chat | CLIP-L-336px | Linear | LCS-558K | 1e | projector |
| LLaMA-2-7B-Chat | CLIP-L-336px | Linear | LCS-558K | 1e | projector |
| LLaMA-2-13B-Chat | CLIP-L | Linear | LCS-558K | 1e | projector |
| LLaMA-2-7B-Chat | CLIP-L | Linear | LCS-558K | 1e | projector |
| Vicuna-13B-v1.3 | CLIP-L-336px | Linear | LCS-558K | 1e | projector |
| Vicuna-7B-v1.3 | CLIP-L-336px | Linear | LCS-558K | 1e | projector |
| Vicuna-13B-v1.3 | CLIP-L | Linear | LCS-558K | 1e | projector |
| Vicuna-7B-v1.3 | CLIP-L | Linear | LCS-558K | 1e | projector |

Science QA Checkpoints

| Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | Download |
|---|---|---|---|---|---|---|
| Vicuna-13B-v1.3 | CLIP-L | LCS-558K | 1e | ScienceQA | full_ft-12e | ckpt |

Legacy Models (merged weights)

The model weights below are merged weights. You do not need to apply delta. The usage of LLaVA checkpoints should comply with the base LLM’s model license.

| Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | Download |
|---|---|---|---|---|---|---|
| MPT-7B-Chat | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | full_ft-1e | preview |

Legacy Models (delta weights)

The model weights below are delta weights. The usage of LLaVA checkpoints should comply with the base LLM’s model license: LLaMA.

You can add our delta to the original LLaMA weights to obtain the LLaVA weights.

Instructions:

  1. Get the original LLaMA weights in the huggingface format by following the instructions here.
  2. Use the following scripts to get LLaVA weights by applying our delta. It will automatically download delta weights from our Hugging Face account. In the script below, we use the delta weights of liuhaotian/LLaVA-7b-delta-v0 as an example. It can be adapted for other delta weights by changing the --delta argument (and base/target accordingly).
python3 -m llava.model.apply_delta \
    --base /path/to/llama-7b \
    --target /output/path/to/LLaVA-7B-v0 \
    --delta liuhaotian/LLaVA-7b-delta-v0
| Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | Download |
|---|---|---|---|---|---|---|
| Vicuna-13B-v1.1 | CLIP-L | CC-595K | 1e | LLaVA-Instruct-158K | full_ft-3e | delta-weights |
| Vicuna-7B-v1.1 | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | full_ft-1e | delta-weights |
| Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | LLaVA-Instruct-158K | full_ft-3e | delta-weights |
| Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | ScienceQA | full_ft-12e | delta-weights |
| Vicuna-7B-v0 | CLIP-L | CC-595K | 1e | LLaVA-Instruct-158K | full_ft-3e | delta-weights |
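Conceptually, applying a delta is element-wise addition of parameter tensors: target = base + delta. The real apply_delta script operates on Hugging Face state dicts of tensors; in this sketch, plain Python lists stand in for tensors:

```python
# Toy illustration of delta application: add base and delta parameter-wise.
def apply_delta(base, delta):
    assert base.keys() == delta.keys(), "checkpoints must share parameter names"
    return {name: [b + d for b, d in zip(base[name], delta[name])]
            for name in base}

llava = apply_delta({"w": [1.0, 2.0]}, {"w": [0.5, -0.5]})  # -> {"w": [1.5, 1.5]}
```

This is also why the delta weights alone are useless without the original LLaMA weights, which keeps the release compliant with the base model's license.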

Legacy Projector weights

The following projector weights are deprecated, and the support for them may be removed in the future. They do not support zero-shot inference. Please use the projector weights in the table above if possible.

NOTE: When you use our pretrained projector for visual instruction tuning, it is very important to use the same base LLM and vision encoder as the one we used for pretraining the projector. Otherwise, the performance will be very bad.

When using these projector weights to instruction tune your LMM, please make sure that these options are correctly set as follows,

--mm_use_im_start_end True
--mm_use_im_patch_token False
| Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Download |
|---|---|---|---|---|
| Vicuna-7B-v1.1 | CLIP-L | LCS-558K | 1e | projector |
| Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | projector |
| Vicuna-7B-v0 | CLIP-L | CC-595K | 1e | projector |

When using these projector weights to instruction tune your LMM, please make sure that these options are correctly set as follows,

--mm_use_im_start_end False
--mm_use_im_patch_token False
| Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Download |
|---|---|---|---|---|
| Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | projector |

7. Datasets

| Data file name | Size |
|---|---|
| llava_instruct_150k.json | 229 MB |
| llava_instruct_80k.json | 229 MB |
| conversation_58k.json | 126 MB |
| detail_23k.json | 20.5 MB |
| complex_reasoning_77k.json | 79.6 MB |

7.1 Pretraining Dataset

The pretraining dataset used in this release is a subset of CC-3M dataset, filtered with a more balanced concept coverage distribution. Please see here for a detailed description of the dataset structure and how to download the images.

If you already have CC-3M dataset on your disk, the image names follow this format: GCC_train_000000000.jpg. You may edit the image field correspondingly if necessary.
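The naming convention above is a zero-padded 9-digit index, which can be generated as follows (a small illustrative helper, not part of the repo):

```python
# CC-3M image file names: "GCC_train_" + 9-digit zero-padded index + ".jpg".
def cc3m_name(index):
    return f"GCC_train_{index:09d}.jpg"

print(cc3m_name(0))  # -> GCC_train_000000000.jpg
```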

| Data | Chat File | Meta Data | Size |
|---|---|---|---|
| CC-3M Concept-balanced 595K | chat.json | metadata.json | 211 MB |
| LAION/CC/SBU BLIP-Caption Concept-balanced 558K | blip_laion_cc_sbu_558k.json | metadata.json | 181 MB |

Important notice: Upon the request from the community, as ~15% images of the original CC-3M dataset are no longer accessible, we upload images.zip for better reproducing our work in research community. It must not be used for any other purposes. The use of these images must comply with the CC-3M license. This may be taken down at any time when requested by the original CC-3M dataset owner or owners of the referenced images.

7.2 GPT-4 Prompts

We provide our prompts and few-shot samples for GPT-4 queries, to better facilitate research in this domain. Please check out the prompts folder for three kinds of questions: conversation, detail description, and complex reasoning.

They are organized in the format of system_message.txt for the system message, abc_caps.txt for the few-shot sample user input, and abc_conv.txt for the few-shot sample reference output.

Note that their formats may differ. For example, conversation is in jsonl, and detail description is answer-only. The format we selected in preliminary experiments works slightly better than the limited set of alternatives we tried: json, a more natural format, and answer-only. If interested, you may try other variants or conduct a more careful study on this. Contributions are welcome!

