MindSpore (昇思) Study Notes 2-04 LLM Principles and Practice -- Text Decoding Principles, with MindNLP as an Example


Abstract:

Notes on how the MindSpore AI framework generates text by selecting high-probability words with greedy search and beam search, and on the various techniques used to mitigate problems such as repetition.

I could barely follow this section at first; my guess is that it is about how to build sentences from a constrained pool of candidate words.

1. Concepts

Autoregressive language model

The probability distribution of a text sequence

is decomposed into a product of conditional probabilities, one for each word given its preceding context:

        P(w_{1:T} | W_0) = ∏_{t=1}^{T} P(w_t | w_{1:t-1}, W_0), with w_{1:0} = ∅

        W_0: the initial context word sequence

        T: the time step; the sequence length is determined as generation proceeds

        Generation stops when the EOS token is produced.
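As a minimal illustration of this factorization (the probabilities are invented, matching the greedy-search example below), the probability of a whole sequence is just the product of its per-step conditionals:

# P("nice","woman" | W_0 = "The") = P("nice" | "The") * P("woman" | "The","nice")
step_probs = [0.5, 0.4]   # invented per-step conditional probabilities

seq_prob = 1.0
for p in step_probs:
    seq_prob *= p

print(seq_prob)  # 0.2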

Text generation methods provided by MindNLP/Hugging Face Transformers

Greedy search

At each time step t, output the word with the highest probability:

w_t = argmax_w P(w | w_{1:t-1})

The conditional probability of the greedy-search output sequence ("The", "nice", "woman") is 0.5 * 0.4 = 0.2.

Drawback: greedy search easily misses high-probability words hidden behind a lower-probability step.

For example: P("dog" | "The") = 0.4 loses to "nice" (0.5), even though P("has" | "The", "dog") = 0.9 would make ("The", "dog", "has") the more probable sequence overall.
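A minimal sketch of greedy decoding (hand-written toy probabilities, not MindNLP's implementation):

# next_word_probs[context][word] plays the role of P(word | context).
next_word_probs = {
    ("The",): {"nice": 0.5, "dog": 0.4, "car": 0.1},
    ("The", "nice"): {"woman": 0.4, "house": 0.3, "guy": 0.3},
    ("The", "dog"): {"has": 0.9, "runs": 0.05, "and": 0.05},
}

context = ("The",)
for _ in range(2):
    dist = next_word_probs[context]
    best = max(dist, key=dist.get)   # w_t = argmax_w P(w | w_{1:t-1})
    context += (best,)

print(context)  # ('The', 'nice', 'woman'): greedy never explores "dog" -> "has" (0.4 * 0.9)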

2. Environment Setup

%%capture captured_output
# The experiment environment has mindspore==2.2.14 preinstalled; to switch MindSpore versions, change the version number below
!pip uninstall mindspore -y
!pip install -i https://pypi.mirrors.ustc.edu.cn/simple mindspore==2.2.14
!pip uninstall mindvision -y
!pip uninstall mindinsight -y

Output:

Found existing installation: mindvision 0.1.0
Uninstalling mindvision-0.1.0:
  Successfully uninstalled mindvision-0.1.0
WARNING: Skipping mindinsight as it is not installed.

# This example was validated against mindnlp 0.3.1; if it fails to run, pin the version with `!pip install mindnlp==0.3.1`
!pip install mindnlp

Output:

Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: mindnlp in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (0.3.1)
Requirement already satisfied: mindspore in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (2.2.14)
Requirement already satisfied: tqdm in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (4.66.4)
Requirement already satisfied: requests in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (2.32.3)
Requirement already satisfied: datasets in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (2.20.0)
Requirement already satisfied: evaluate in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (0.4.2)
Requirement already satisfied: tokenizers in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (0.19.1)
Requirement already satisfied: safetensors in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (0.4.3)
Requirement already satisfied: sentencepiece in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (0.2.0)
Requirement already satisfied: regex in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (2024.5.15)
Requirement already satisfied: addict in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (2.4.0)
Requirement already satisfied: ml-dtypes in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (0.4.0)
Requirement already satisfied: pyctcdecode in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (0.5.0)
Requirement already satisfied: jieba in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (0.42.1)
Requirement already satisfied: pytest==7.2.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (7.2.0)
Requirement already satisfied: attrs>=19.2.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp) (23.2.0)
Requirement already satisfied: iniconfig in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp) (2.0.0)
Requirement already satisfied: packaging in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp) (23.2)
Requirement already satisfied: pluggy<2.0,>=0.12 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp) (1.5.0)
Requirement already satisfied: exceptiongroup>=1.0.0rc8 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp) (1.2.0)
Requirement already satisfied: tomli>=1.0.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp) (2.0.1)
Requirement already satisfied: filelock in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (3.15.3)
Requirement already satisfied: numpy>=1.17 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (1.26.4)
Requirement already satisfied: pyarrow>=15.0.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (16.1.0)
Requirement already satisfied: pyarrow-hotfix in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (0.6)
Requirement already satisfied: dill<0.3.9,>=0.3.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (0.3.8)
Requirement already satisfied: pandas in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (2.2.2)
Requirement already satisfied: xxhash in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (3.4.1)
Requirement already satisfied: multiprocess in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (0.70.16)
Requirement already satisfied: fsspec<=2024.5.0,>=2023.1.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from fsspec[http]<=2024.5.0,>=2023.1.0->datasets->mindnlp) (2024.5.0)
Requirement already satisfied: aiohttp in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (3.9.5)
Requirement already satisfied: huggingface-hub>=0.21.2 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (0.23.4)
Requirement already satisfied: pyyaml>=5.1 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (6.0.1)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from requests->mindnlp) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from requests->mindnlp) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from requests->mindnlp) (2.2.2)
Requirement already satisfied: certifi>=2017.4.17 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from requests->mindnlp) (2024.6.2)
Requirement already satisfied: protobuf>=3.13.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindspore->mindnlp) (5.27.1)
Requirement already satisfied: asttokens>=2.0.4 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindspore->mindnlp) (2.0.5)
Requirement already satisfied: pillow>=6.2.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindspore->mindnlp) (10.3.0)
Requirement already satisfied: scipy>=1.5.4 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindspore->mindnlp) (1.13.1)
Requirement already satisfied: psutil>=5.6.1 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindspore->mindnlp) (5.9.0)
Requirement already satisfied: astunparse>=1.6.3 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindspore->mindnlp) (1.6.3)
Requirement already satisfied: pygtrie<3.0,>=2.1 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pyctcdecode->mindnlp) (2.5.0)
Requirement already satisfied: hypothesis<7,>=6.14 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pyctcdecode->mindnlp) (6.104.2)
Requirement already satisfied: six in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from asttokens>=2.0.4->mindspore->mindnlp) (1.16.0)
Requirement already satisfied: wheel<1.0,>=0.23.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from astunparse>=1.6.3->mindspore->mindnlp) (0.43.0)
Requirement already satisfied: aiosignal>=1.1.2 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from aiohttp->datasets->mindnlp) (1.3.1)
Requirement already satisfied: frozenlist>=1.1.1 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from aiohttp->datasets->mindnlp) (1.4.1)
Requirement already satisfied: multidict<7.0,>=4.5 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from aiohttp->datasets->mindnlp) (6.0.5)
Requirement already satisfied: yarl<2.0,>=1.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from aiohttp->datasets->mindnlp) (1.9.4)
Requirement already satisfied: async-timeout<5.0,>=4.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from aiohttp->datasets->mindnlp) (4.0.3)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from huggingface-hub>=0.21.2->datasets->mindnlp) (4.11.0)
Requirement already satisfied: sortedcontainers<3.0.0,>=2.1.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from hypothesis<7,>=6.14->pyctcdecode->mindnlp) (2.4.0)
Requirement already satisfied: python-dateutil>=2.8.2 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pandas->datasets->mindnlp) (2.9.0.post0)
Requirement already satisfied: pytz>=2020.1 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pandas->datasets->mindnlp) (2024.1)
Requirement already satisfied: tzdata>=2022.7 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pandas->datasets->mindnlp) (2024.1)

[notice] A new release of pip is available: 24.1 -> 24.1.1
[notice] To update, run: python -m pip install --upgrade pip

3. Greedy Search

#greedy_search

from mindnlp.transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("iiBcai/gpt2", mirror='modelscope')

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("iiBcai/gpt2", pad_token_id=tokenizer.eos_token_id, mirror='modelscope')

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='ms')

# generate text until the output length (which includes the context length) reaches 50
greedy_output = model.generate(input_ids, max_length=50)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(greedy_output[0], skip_special_tokens=True))

Output:

Building prefix dict from the default dictionary ...
Dumping model to file cache /tmp/jieba.cache
Loading model cost 1.006 seconds.
Prefix dict has been built successfully.
100%---------------------------------------- 26.0/26.0 [00:00<00:00, 1.63kB/s]
100%---------------------------------------- 0.99M/0.99M [00:00<00:00, 18.9MB/s]
100%---------------------------------------- 446k/446k [00:00<00:00, 7.49MB/s]
100%---------------------------------------- 1.29M/1.29M [00:00<00:00, 18.0MB/s]
100%---------------------------------------- 665/665 [00:00<00:00, 49.7kB/s]
100%---------------------------------------- 523M/523M [00:42<00:00, 17.2MB/s]
Output:
-------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with my dog. I'm not sure if I'll ever be able to walk with my dog.

I'm not sure if I'll

4. Beam Search

At each time step, keep the num_beams most likely candidates and ultimately select the sequence with the highest overall probability.

For example, with num_beams=2:

("The","dog" ,"has"  ) : 0.4 * 0.9 = 0.36

("The","nice","woman") : 0.5 * 0.4 = 0.20

Advantage: it preserves the most likely paths to some extent (see the sketch after this list).

Drawbacks:

        1. It does not solve the repetition problem;

        2. It performs poorly on open-ended generation.
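A toy sketch of beam scoring with num_beams=2, reusing the hypothetical probability table from the greedy example above (an illustration of the idea only, not MindNLP's implementation):

# Toy beam search with num_beams=2 over hand-written probabilities.
next_word_probs = {
    ("The",): {"nice": 0.5, "dog": 0.4, "car": 0.1},
    ("The", "nice"): {"woman": 0.4, "house": 0.3, "guy": 0.3},
    ("The", "dog"): {"has": 0.9, "runs": 0.05, "and": 0.05},
}

num_beams = 2
beams = [(("The",), 1.0)]  # (sequence, cumulative probability)
for _ in range(2):
    candidates = []
    for seq, score in beams:
        for word, p in next_word_probs.get(seq, {}).items():
            candidates.append((seq + (word,), score * p))
    # keep only the num_beams highest-scoring sequences
    beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:num_beams]

for seq, score in beams:
    print(seq, round(score, 2))
# ('The', 'dog', 'has') 0.36  <- beam search recovers the path greedy missed
# ('The', 'nice', 'woman') 0.2

The 0.4 * 0.9 = 0.36 path survives because "dog" stays on the beam at step 1.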

from mindnlp.transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("iiBcai/gpt2", mirror='modelscope')

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("iiBcai/gpt2", pad_token_id=tokenizer.eos_token_id, mirror='modelscope')

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='ms')

# activate beam search and early_stopping
beam_output = model.generate(
    input_ids,
    max_length=50,
    num_beams=5,
    early_stopping=True
)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
print(100 * '-')

# set no_repeat_ngram_size to 2
beam_output = model.generate(
    input_ids,
    max_length=50,
    num_beams=5,
    no_repeat_ngram_size=2,
    early_stopping=True
)

print("Beam search with ngram, Output:\n" + 100 * '-')
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
print(100 * '-')

# set num_return_sequences > 1
beam_outputs = model.generate(
    input_ids,
    max_length=50,
    num_beams=5,
    no_repeat_ngram_size=2,
    num_return_sequences=5,
    early_stopping=True
)

# now we have 5 output sequences
print("return_num_sequences, Output:\n" + 100 * '-')
for i, beam_output in enumerate(beam_outputs):
    print("{}: {}".format(i, tokenizer.decode(beam_output, skip_special_tokens=True)))
print(100 * '-')

Output:

-------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog, but I don't think I'll ever be able to walk with her again."

"I don't think I'll ever be able to walk with her again."

"I don't think I
-------------------------------------------------------------------------------------------------
Beam search with ngram, Output:
-------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog, but I don't think I'll ever be able to walk with her again."

"I'm not sure what to say to that," she said. "I mean, it's not like I'm
-------------------------------------------------------------------------------------------------
return_num_sequences, Output:
-------------------------------------------------------------------------------------------------
0: I enjoy walking with my cute dog, but I don't think I'll ever be able to walk with her again."

"I'm not sure what to say to that," she said. "I mean, it's not like I'm
1: I enjoy walking with my cute dog, but I don't think I'll ever be able to walk with her again."

"I'm not sure what to say to that," she said. "I mean, it's not like she's
2: I enjoy walking with my cute dog, but I don't think I'll ever be able to walk with her again."

"I'm not sure what to say to that," she said. "I mean, it's not like we're
3: I enjoy walking with my cute dog, but I don't think I'll ever be able to walk with her again."

"I'm not sure what to say to that," she said. "I mean, it's not like I've
4: I enjoy walking with my cute dog, but I don't think I'll ever be able to walk with her again."

"I'm not sure what to say to that," she said. "I mean, it's not like I can
-------------------------------------------------------------------------------------------------

5. Problems with Beam Search

Repetition

n-gram penalty:

set the probability of any candidate word that would repeat an already-seen n-gram to 0.

With no_repeat_ngram_size=2, no 2-gram ever appears twice in the output.

Note: real text generation does require some repetition (a name, for example, may legitimately appear more than once), so the penalty must be applied with care.
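A rough sketch of how a 2-gram ban can be implemented (a hypothetical helper, not MindNLP's internal code): before each step, find every word that would complete a bigram already present in the generated text and set its probability to 0.

# Sketch of an n-gram penalty in the spirit of no_repeat_ngram_size=2.
def banned_next_words(generated, n=2):
    """Words that would complete an n-gram already seen in `generated`."""
    banned = set()
    prefix = tuple(generated[-(n - 1):])              # last n-1 generated words
    for i in range(len(generated) - n + 1):
        if tuple(generated[i:i + n - 1]) == prefix:
            banned.add(generated[i + n - 1])
    return banned

generated = ["I", "am", "happy", "and", "I", "am"]
probs = {"happy": 0.6, "tired": 0.3, "fine": 0.1}     # invented next-word probabilities
for w in banned_next_words(generated):
    probs[w] = 0.0                                    # "am happy" already occurred
print(probs)  # {'happy': 0.0, 'tired': 0.3, 'fine': 0.1}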

Sampling

Randomly select the next word according to the current conditional probability distribution:

        "car" ~ P(w | "The"),  then  "drives" ~ P(w | "The", "car")

Advantage: high diversity in the generated text

Drawback: the generated text can be incoherent
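A minimal sketch of pure sampling with Python's standard library (the distribution is invented): instead of taking the argmax, draw the next word in proportion to its conditional probability.

import random

dist = {"nice": 0.5, "dog": 0.4, "car": 0.1}   # plays the role of P(w | "The")
word = random.choices(list(dist), weights=dist.values(), k=1)[0]
print("The", word)  # each word is drawn in proportion to its probability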

import mindspore
from mindnlp.transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("iiBcai/gpt2", mirror='modelscope')

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("iiBcai/gpt2", pad_token_id=tokenizer.eos_token_id, mirror='modelscope')

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='ms')

mindspore.set_seed(0)
# activate sampling and deactivate top_k by setting top_k sampling to 0
sample_output = model.generate(
    input_ids,
    do_sample=True,
    max_length=50,
    top_k=0
)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))

Output:

-------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog Neddy as much as I'd like. Keep up the good work Neddy!"

I realized what Neddy meant when he first launched the website. "Thank you so much for joining."

I

Temperature

Lowering the softmax temperature makes the distribution P(w | w_{1:t-1}) sharper:

it increases the likelihood of high-probability words

and decreases the likelihood of low-probability words.
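A small self-contained sketch of temperature scaling, softmax(logits / T), on invented logits (not tied to GPT-2's real values); lowering T below 1 moves probability mass toward the most likely words:

import math

def softmax(logits, temperature=1.0):
    z = [l / temperature for l in logits]
    m = max(z)                                  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                        # invented scores for three words
print([round(p, 2) for p in softmax(logits, temperature=1.0)])  # [0.63, 0.23, 0.14]
print([round(p, 2) for p in softmax(logits, temperature=0.7)])  # [0.74, 0.18, 0.09]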

import mindspore
from mindnlp.transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("iiBcai/gpt2", mirror='modelscope')

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("iiBcai/gpt2", pad_token_id=tokenizer.eos_token_id, mirror='modelscope')

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='ms')

mindspore.set_seed(1234)
# sample with temperature 0.7 from the full distribution (top_k=0 deactivates top_k)
sample_output = model.generate(
    input_ids,
    do_sample=True,
    max_length=50,
    top_k=0,
    temperature=0.7
)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))

Output:

-------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog and have never had a problem with her until now.

A large dog named Chucky managed to get a few long stretches of grass on her back and ran around with it for about 5 minutes, ran around

Top-K sampling

Select the K most probable words, renormalize their probabilities, and then sample from those K words.

Restricting the sampling pool to a fixed size K can:

  • produce gibberish when the distribution is sharp
  • limit the model's creativity when the distribution is flat
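A toy sketch of the Top-K filtering step itself (invented probabilities, not MindNLP's internal code): keep the K most probable words, renormalize, and sample only from them.

import random

def top_k_filter(dist, k):
    kept = dict(sorted(dist.items(), key=lambda kv: kv[1], reverse=True)[:k])
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}   # renormalize over the K words

dist = {"dog": 0.35, "cat": 0.25, "car": 0.2, "tree": 0.15, "sky": 0.05}
pool = top_k_filter(dist, k=3)
print(pool)  # {'dog': 0.4375, 'cat': 0.3125, 'car': 0.25}
word = random.choices(list(pool), weights=pool.values(), k=1)[0]

The MindNLP call below does the same filtering inside generate() via top_k=50.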
import mindspore
from mindnlp.transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("iiBcai/gpt2", mirror='modelscope')

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("iiBcai/gpt2", pad_token_id=tokenizer.eos_token_id, mirror='modelscope')

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='ms')

mindspore.set_seed(0)
# activate sampling and restrict the sampling pool to the 50 most likely words
sample_output = model.generate(
    input_ids,
    do_sample=True,
    max_length=50,
    top_k=50
)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))

Output:

-------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog.

She's always up for some action, so I have seen her do some stuff with it.

Then there's the two of us.

The two of us I'm talking about were

Top-p (nucleus) sampling

Sample from the smallest set of words whose cumulative probability exceeds p, after renormalizing within that set.

The sampling pool can thus grow or shrink dynamically with the shape of the next word's probability distribution.
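A toy sketch of the nucleus filtering step (invented probabilities, not MindNLP's internal code); unlike Top-K, the number of kept words adapts to the shape of the distribution:

import random

def top_p_filter(dist, p):
    kept, cum = {}, 0.0
    for w, prob in sorted(dist.items(), key=lambda kv: kv[1], reverse=True):
        kept[w] = prob
        cum += prob
        if cum >= p:                    # smallest set whose cumulative probability exceeds p
            break
    total = sum(kept.values())
    return {w: q / total for w, q in kept.items()}   # renormalize within the set

dist = {"dog": 0.35, "cat": 0.25, "car": 0.2, "tree": 0.15, "sky": 0.05}
pool = top_p_filter(dist, p=0.7)
print(pool)  # {'dog': 0.4375, 'cat': 0.3125, 'car': 0.25} -- 3 words here, fewer if the distribution is sharper
word = random.choices(list(pool), weights=pool.values(), k=1)[0]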

import mindspore
from mindnlp.transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("iiBcai/gpt2", mirror='modelscope')

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("iiBcai/gpt2", pad_token_id=tokenizer.eos_token_id, mirror='modelscope')

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='ms')

mindspore.set_seed(0)

# deactivate top_k sampling and sample only from 92% most likely words
sample_output = model.generate(
    input_ids,
    do_sample=True,
    max_length=50,
    top_p=0.92,
    top_k=0
)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))

Output:

-------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog Neddy as much as I'd like. Keep up the good work Neddy!"

I realized what Neddy meant when he first launched the website. "Thank you so much for joining."

I

Combining top_k and top_p

Both filters can be applied together in one generate() call; sampling then draws only from words that survive both the Top-K and the Top-p cut.

import mindspore
from mindnlp.transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("iiBcai/gpt2", mirror='modelscope')

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("iiBcai/gpt2", pad_token_id=tokenizer.eos_token_id, mirror='modelscope')

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='ms')

mindspore.set_seed(0)
# set top_k = 5, top_p = 0.95, and num_return_sequences = 3
sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=50,
    top_k=5,
    top_p=0.95,
    num_return_sequences=3
)

print("Output:\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
    print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))

Output:

-------------------------------------------------------------------------------------------------
0: I enjoy walking with my cute dog.

"My dog loves the smell of the dog. I'm so happy that she's happy with me.

"I love to walk with my dog. I'm so happy that she's happy
1: I enjoy walking with my cute dog. I'm a big fan of my cat and her dog, but I don't have the same enthusiasm for her. It's hard not to like her because it is my dog.

My husband, who
2: I enjoy walking with my cute dog, but I'm also not sure I would want my dog to walk alone with me."

She also told The Daily Beast that the dog is very protective.

"I think she's very protective of
