Llama2 Chinese Model: A Quick Test Drive

2024/11/17 11:40:09

Table of Contents

  • Llama2 Chinese Model: A Quick Test Drive
    • Preface
    • Updating the Libraries
    • Imports
    • Loading the Model
    • Chat Q&A 1
    • Chat Q&A 2
    • Chat Q&A 3
    • Chat Q&A 4
    • Chat Q&A 5

Llama2 Chinese Model: A Quick Test Drive

Preface

  • Meta's open-source Llama has performed well since its very first release, and many open-source LLMs, such as Vicuna and Alpaca, were trained on top of it

  • In July 2023, Meta open-sourced the second version, Llama2, in Pretrained and Chat variants, each available as 7B, 13B, and 70B models

    | Category   | Model           | Download     | Notes            |
    | ---------- | --------------- | ------------ | ---------------- |
    | Pretrained | Llama2-7B       | Hugging Face | base model       |
    | Pretrained | Llama2-13B      | Hugging Face | base model       |
    | Pretrained | Llama2-70B      | Hugging Face | base model       |
    | Chat       | Llama2-7B-Chat  | Hugging Face | tuned for chat   |
    | Chat       | Llama2-13B-Chat | Hugging Face | tuned for chat   |
    | Chat       | Llama2-70B-Chat | Hugging Face | tuned for chat   |
  • Many teams have since built on Llama2, upgrading it or developing new models with excellent results, for example:

    • Microsoft's WizardLM-70B
    • NVIDIA's Llama2-70B-SteerLM-Chat
    • LMSYS's Vicuna-13B
  • In Chinese-language scenarios, the original Llama2 saw very little Chinese pretraining data (only about 0.13% of the corpus), so its Chinese ability is weak. To build Chinese Q&A applications, we usually need to strengthen it first, generally via one of two approaches:

    • Continued pretraining: better results, but requires large-scale, high-quality Chinese data and large-scale compute
    • Fine-tuning: fast and resource-light, with decent results
  • For individuals and small companies, large-scale pretraining is usually unaffordable, so fine-tuning is the common choice.

  • That said, there is a domestic open-source Chinese model, Atom, built on Llama2 with continued pretraining and strengthened for Chinese in several ways. It performs well and is open-source, free, and commercially usable. We can take this pretrained Chinese model and fine-tune it further on industry-specific data to build a Chinese LLM for a commercial domain.

  • Combined with LangChain (which the project integrates officially), it becomes even easier to build document retrieval, Q&A bots, and agent applications on top of Llama2

  • Below, let's take the model FlagAlpha/Atom-7B for a quick spin

  • Note: to get up and running quickly without fiddling with environment setup, I used Kaggle

    • Public notebook: https://www.kaggle.com/code/alionsss/flagalpha-atom-7b/notebook

Updating the Libraries

  • On Kaggle, the accelerate library needs to be upgraded, and the latest bitsandbytes library must be installed (it is used to quantize the model, which is required at load time)
  • Note: if you are not on Kaggle, install the dependencies from the official requirements.txt instead
pip install --upgrade accelerate
pip install bitsandbytes
  • Sample output
Requirement already satisfied: accelerate in /opt/conda/lib/python3.10/site-packages (0.25.0)
Collecting accelerate
  Obtaining dependency information for accelerate from https://files.pythonhosted.org/packages/a6/b9/44623bdb05595481107153182e7f4b9f2ef9d3b674938ad13842054dcbd8/accelerate-0.26.1-py3-none-any.whl.metadata
  Downloading accelerate-0.26.1-py3-none-any.whl.metadata (18 kB)
Requirement already satisfied: numpy>=1.17 in /opt/conda/lib/python3.10/site-packages (from accelerate) (1.24.3)
Requirement already satisfied: packaging>=20.0 in /opt/conda/lib/python3.10/site-packages (from accelerate) (21.3)
Requirement already satisfied: psutil in /opt/conda/lib/python3.10/site-packages (from accelerate) (5.9.3)
Requirement already satisfied: pyyaml in /opt/conda/lib/python3.10/site-packages (from accelerate) (6.0.1)
Requirement already satisfied: torch>=1.10.0 in /opt/conda/lib/python3.10/site-packages (from accelerate) (2.0.0)
Requirement already satisfied: huggingface-hub in /opt/conda/lib/python3.10/site-packages (from accelerate) (0.20.2)
Requirement already satisfied: safetensors>=0.3.1 in /opt/conda/lib/python3.10/site-packages (from accelerate) (0.4.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /opt/conda/lib/python3.10/site-packages (from packaging>=20.0->accelerate) (3.0.9)
Requirement already satisfied: filelock in /opt/conda/lib/python3.10/site-packages (from torch>=1.10.0->accelerate) (3.12.2)
Requirement already satisfied: typing-extensions in /opt/conda/lib/python3.10/site-packages (from torch>=1.10.0->accelerate) (4.5.0)
Requirement already satisfied: sympy in /opt/conda/lib/python3.10/site-packages (from torch>=1.10.0->accelerate) (1.12)
Requirement already satisfied: networkx in /opt/conda/lib/python3.10/site-packages (from torch>=1.10.0->accelerate) (3.1)
Requirement already satisfied: jinja2 in /opt/conda/lib/python3.10/site-packages (from torch>=1.10.0->accelerate) (3.1.2)
Requirement already satisfied: fsspec>=2023.5.0 in /opt/conda/lib/python3.10/site-packages (from huggingface-hub->accelerate) (2023.12.2)
Requirement already satisfied: requests in /opt/conda/lib/python3.10/site-packages (from huggingface-hub->accelerate) (2.31.0)
Requirement already satisfied: tqdm>=4.42.1 in /opt/conda/lib/python3.10/site-packages (from huggingface-hub->accelerate) (4.66.1)
Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/lib/python3.10/site-packages (from jinja2->torch>=1.10.0->accelerate) (2.1.3)
Requirement already satisfied: charset-normalizer<4,>=2 in /opt/conda/lib/python3.10/site-packages (from requests->huggingface-hub->accelerate) (3.2.0)
Requirement already satisfied: idna<4,>=2.5 in /opt/conda/lib/python3.10/site-packages (from requests->huggingface-hub->accelerate) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/conda/lib/python3.10/site-packages (from requests->huggingface-hub->accelerate) (1.26.15)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.10/site-packages (from requests->huggingface-hub->accelerate) (2023.11.17)
Requirement already satisfied: mpmath>=0.19 in /opt/conda/lib/python3.10/site-packages (from sympy->torch>=1.10.0->accelerate) (1.3.0)
Downloading accelerate-0.26.1-py3-none-any.whl (270 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 270.9/270.9 kB 9.6 MB/s eta 0:00:00
Installing collected packages: accelerate
  Attempting uninstall: accelerate
    Found existing installation: accelerate 0.25.0
    Uninstalling accelerate-0.25.0:
      Successfully uninstalled accelerate-0.25.0
Successfully installed accelerate-0.26.1
Collecting bitsandbytes
  Obtaining dependency information for bitsandbytes from https://files.pythonhosted.org/packages/9b/63/489ef9cd7a33c1f08f1b2be51d1b511883c5e34591aaa9873b30021cd679/bitsandbytes-0.42.0-py3-none-any.whl.metadata
  Downloading bitsandbytes-0.42.0-py3-none-any.whl.metadata (9.9 kB)
Requirement already satisfied: scipy in /opt/conda/lib/python3.10/site-packages (from bitsandbytes) (1.11.4)
Requirement already satisfied: numpy<1.28.0,>=1.21.6 in /opt/conda/lib/python3.10/site-packages (from scipy->bitsandbytes) (1.24.3)
Downloading bitsandbytes-0.42.0-py3-none-any.whl (105.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 105.0/105.0 MB 11.9 MB/s eta 0:00:0000:0100:01
Installing collected packages: bitsandbytes
Successfully installed bitsandbytes-0.42.0
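Before moving on, it can be worth confirming that both packages now resolve in the current environment. A small optional check (it only prints versions, nothing more):

```python
from importlib import metadata

# Report the installed version of each package this notebook depends on.
for pkg in ("accelerate", "bitsandbytes"):
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "is not installed")
```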

Imports

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

Loading the Model

  • Running this code downloads the model automatically and then loads it. The model files are large, so expect a wait.
  • Note: the official example loads the model with use_flash_attention_2=True, which causes problems on Kaggle; if you have an RTX 30-series (or newer) NVIDIA GPU, you can enable use_flash_attention_2=True
# Official version (enables FlashAttention-2; problematic on Kaggle's GPUs):
# model = AutoModelForCausalLM.from_pretrained('FlagAlpha/Atom-7B', device_map='auto', torch_dtype=torch.float16, load_in_8bit=True, trust_remote_code=True, use_flash_attention_2=True)
model = AutoModelForCausalLM.from_pretrained('FlagAlpha/Atom-7B', device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
model = model.eval()
tokenizer = AutoTokenizer.from_pretrained('FlagAlpha/Atom-7B', use_fast=False)
tokenizer.pad_token = tokenizer.eos_token
  • Sample output
    (screenshot of the model loading)

Chat Q&A 1

prompt = "介绍一下中国"
input_ids = tokenizer([f"<s>Human: {prompt}\n</s><s>Assistant: "], return_tensors="pt",add_special_tokens=False).input_ids.to('cuda')        
generate_input = {
    "input_ids":input_ids,
    "max_new_tokens":512,
    "do_sample":True,
    "top_k":50,
    "top_p":0.95,
    "temperature":0.3,
    "repetition_penalty":1.3,
    "eos_token_id":tokenizer.eos_token_id,
    "bos_token_id":tokenizer.bos_token_id,
    "pad_token_id":tokenizer.pad_token_id
}
generate_ids  = model.generate(**generate_input)
text = tokenizer.decode(generate_ids[0])
print(text)
  • Sample output
<s> Human: 介绍一下中国
</s><s> Assistant: 以下是对中国的简要概述:

1. 中国是世界上人口最多的国家,也是世界上最大的发展中国家。根据2019年的统计数据显示,中国人口约为14亿左右(不包括港澳台地区);其中汉族占绝大多数比例,少数民族约占总人口的比例为5%-6%之间。此外还有大量的海外华人和华裔人士分布在世界各地。

2. 在经济方面,中国的GDP总量位居全球第二,仅次于美国。近年来随着经济的快速发展以及科技创新能力的不断提升,中国已经成为一个重要的国际贸易中心、金融市场和技术研发基地之一。同时由于劳动力成本低廉等因素的影响,许多外国企业纷纷在中国投资建厂或设立分公司等机构以获取更大的利润空间和市场份额。

3. 作为亚洲大国之一的中国一直致力于维护本地区和平稳定与发展繁荣的局面。在对外交往中奉行“不结盟”政策并积极参与联合国等国际组织的各项事务当中来履行自己的责任与义务。在国际舞台上发挥着越来越大的作用和重要影响!

总而言之,作为当今世界的重要一员和中国人民共同奋斗并取得了举世瞩目的成就值得我们骄傲自豪的同时也提醒着我们要更加努力工作学习以便更好地为国家做出贡献并为自己创造更美好的未来而努力拼搏奋进下去吧!加油~
</s>
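The generate_input dictionary above is reused unchanged in every example below, so its sampling knobs are worth unpacking. As a rough pure-Python sketch (illustrative only; the real transformers implementation operates on logit tensors): temperature rescales the logits before the softmax, top_k caps the number of candidate tokens, and top_p keeps only the smallest high-probability set whose cumulative mass reaches p:

```python
import math

def sample_filter(logits, temperature=0.3, top_k=50, top_p=0.95):
    """Return the token probabilities that survive temperature scaling,
    top-k truncation, and top-p (nucleus) truncation."""
    # Temperature: divide the logits, then apply a (numerically stable) softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Sort token indices by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)

    kept, cumulative = {}, 0.0
    for rank, i in enumerate(order):
        if rank >= top_k:          # top-k: keep at most k candidates
            break
        kept[i] = probs[i]
        cumulative += probs[i]
        if cumulative >= top_p:    # top-p: stop once probability mass p is covered
            break

    # Renormalize the surviving probabilities before sampling from them.
    z = sum(kept.values())
    return {i: p / z for i, p in kept.items()}

# A low temperature (0.3, as in this post) sharpens the distribution,
# so fewer tokens survive the top-p cutoff and outputs become more focused.
print(sample_filter([2.0, 1.0, 0.5, -1.0], temperature=0.3, top_k=50, top_p=0.95))
```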

Chat Q&A 2

prompt = "写一个Python脚本,获取百度网站上的图片"
input_ids = tokenizer([f"<s>Human: {prompt}\n</s><s>Assistant: "], return_tensors="pt",add_special_tokens=False).input_ids.to('cuda')        
generate_input = {
    "input_ids":input_ids,
    "max_new_tokens":512,
    "do_sample":True,
    "top_k":50,
    "top_p":0.95,
    "temperature":0.3,
    "repetition_penalty":1.3,
    "eos_token_id":tokenizer.eos_token_id,
    "bos_token_id":tokenizer.bos_token_id,
    "pad_token_id":tokenizer.pad_token_id
}
generate_ids  = model.generate(**generate_input)
text = tokenizer.decode(generate_ids[0])
print(text)
  • Sample output
	<s> Human: 写一个Python脚本,获取百度网站上的图片
	</s><s> Assistant: 以下是可以实现该功能的代码:
	```python
	import requests,os.path as path #导入相关模块。
	url = "https://www.baiduimg/api?image=1082763594"#设置请求的URL地址和图片ID参数值。
	response_data =requests .get( url ) #调用request()函数进行网络访问并返回数据结果。
	with open("./testpic",'wb')as f:#打开文件写入流对象f用于保存下载后的图像信息。
	    for i in response_data:"".join([i])+'\n':pass #循环遍历响应数据的字符串内容并将其添加到文件中。
	print (len(open('./testpic').read()))#打印出当前文件的大小以供参考。
	```
	运行上述程序后,在命令行窗口中输入如下指令即可完成对指定网址下的一张图进行截图操作:
	```bash
	$ python getImageByUrlAndSaveAsPicFile.py https://picsum.photos/landscape-picture-hd
	```
	以上就是使用 Python 语言从网页上抓取图片并进行存储的方法介绍!希望对你有所帮助~
	</s>
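Note that the script the model produced above would not actually run: the URL is not a real endpoint, and the for-loop line is not even valid Python syntax. For contrast, a working version of what the script appears to be attempting, i.e. downloading an image to a local file, might look like this (a standard-library sketch; the commented example URL and filename are placeholders of my own, not anything from the model's answer):

```python
import urllib.request
from pathlib import Path

def save_image(url: str, dest: str) -> int:
    """Download the resource at `url`, write it to `dest`,
    and return its size in bytes."""
    with urllib.request.urlopen(url) as response:
        data = response.read()
    Path(dest).write_bytes(data)
    return len(data)

# Example usage (placeholder URL; substitute the image address you want):
#   size = save_image("https://picsum.photos/200", "./testpic.jpg")
```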

Chat Q&A 3

prompt = "我希望你担任一位专业的影评人,写一个电影《星际穿越》的影评"
input_ids = tokenizer([f"<s>Human: {prompt}\n</s><s>Assistant: "], return_tensors="pt",add_special_tokens=False).input_ids.to('cuda')        
generate_input = {
    "input_ids":input_ids,
    "max_new_tokens":512,
    "do_sample":True,
    "top_k":50,
    "top_p":0.95,
    "temperature":0.3,
    "repetition_penalty":1.3,
    "eos_token_id":tokenizer.eos_token_id,
    "bos_token_id":tokenizer.bos_token_id,
    "pad_token_id":tokenizer.pad_token_id
}
generate_ids  = model.generate(**generate_input)
text = tokenizer.decode(generate_ids[0])
print(text)
  • Sample output
<s> Human: 我希望你担任一位专业的影评人,写一个电影《星际穿越》的影评
</s><s> Assistant: 很高兴为您提供这篇关于科幻片《星际穿越》的电影评论。这部影片讲述了一群科学家为了拯救地球而前往遥远的星系寻找生命存在的证据的故事。在影片中可以看到许多令人印象深刻的画面和角色塑造:从太空中的壮观景象到充满情感的对话、动作戏等细节都让人感到震撼不已;同时导演也成功地将科学与幻想结合在一起,使得整个故事更加生动有趣且富有想象力。此外,电影中还有很多感人的情节以及一些引人深思的问题值得我们思考。总之这是一部非常优秀的作品!如果你喜欢这类题材的话不妨去观看一下哦~谢谢您的支持和理解!
</s>

Chat Q&A 4

prompt = "请帮我解释这段话的意思:道可道,非常道。"
input_ids = tokenizer([f"<s>Human: {prompt}\n</s><s>Assistant: "], return_tensors="pt",add_special_tokens=False).input_ids.to('cuda')        
generate_input = {
    "input_ids":input_ids,
    "max_new_tokens":512,
    "do_sample":True,
    "top_k":50,
    "top_p":0.95,
    "temperature":0.3,
    "repetition_penalty":1.3,
    "eos_token_id":tokenizer.eos_token_id,
    "bos_token_id":tokenizer.bos_token_id,
    "pad_token_id":tokenizer.pad_token_id
}
generate_ids  = model.generate(**generate_input)
text = tokenizer.decode(generate_ids[0])
print(text)
  • Sample output
<s> Human: 请帮我解释这段话的意思:道可道,非常道。
</s><s> Assistant: 意思为“可以说出来的道理就是真理”或者“可以表达的道理不是真正的道理"。这句话出自《道德经》第一章的第一句和第二句,意思是说世间的道理是可以用语言来表达的,但是真正能够达到的境界却是不可说的、无法描述的。这种境界超越了语言文字的限制而达到了一种更高的精神层面.因此,这句名言告诉我们要超越世俗的观念来追求更高层次的智慧与觉悟。同时它也提醒我们不要拘泥于表面的东西而不能去探索更深层的意义和价值所在。
</s>

Chat Q&A 5

prompt = "谷歌和百度有什么差异"
input_ids = tokenizer([f"<s>Human: {prompt}\n</s><s>Assistant: "], return_tensors="pt",add_special_tokens=False).input_ids.to('cuda')        
generate_input = {
    "input_ids":input_ids,
    "max_new_tokens":512,
    "do_sample":True,
    "top_k":50,
    "top_p":0.95,
    "temperature":0.3,
    "repetition_penalty":1.3,
    "eos_token_id":tokenizer.eos_token_id,
    "bos_token_id":tokenizer.bos_token_id,
    "pad_token_id":tokenizer.pad_token_id
}
generate_ids  = model.generate(**generate_input)
text = tokenizer.decode(generate_ids[0])
print(text)
  • Sample output
<s> Human: 谷歌和百度有什么差异
</s><s> Assistant: 1. Google是搜索引擎,而百度是一个综合性的互联网平台。2、Google专注于英文市场,而百度则面向全球用户提供服务;3、Google拥有自己的操作系统Android以及应用商店Play Store,而百度没有这些产品或类似功能;4、在广告方面,由于法律限制的原因,Google在中国大陆地区的业务受到一定的影响,但百度却可以自由地投放各种类型的广告。5、从技术层面来说,虽然两者都使用人工智能来优化算法和服务质量等关键领域,但是Google更侧重于自然语言处理(NLP)方面的研究,而百度则在图像识别等方面有更深入的研究成果和应用场景。6、最后需要注意的是:尽管目前Google已经退出中国市场的部分业务,但其仍然保持着与国内企业的合作关系并积极开拓新的商业机会。因此可以说,无论是在国内还是国外市场中,二者之间的竞争都非常激烈!
</s>
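All five examples above build their input with the same single-turn template, `<s>Human: {prompt}\n</s><s>Assistant: `. For a multi-turn conversation, it is natural to concatenate past turns in the same format; the helper below is a sketch extrapolated from that single-turn pattern, so treat the exact multi-turn layout as an assumption and check it against the official Llama2-Chinese examples before relying on it:

```python
def build_prompt(history, new_question):
    """Assemble an Atom-style prompt string from a (question, answer) history
    plus a new question. The multi-turn layout is an assumption extrapolated
    from the single-turn template used throughout this post."""
    parts = []
    for question, answer in history:
        parts.append(f"<s>Human: {question}\n</s><s>Assistant: {answer}\n</s>")
    # The final turn leaves the Assistant slot open for the model to complete.
    parts.append(f"<s>Human: {new_question}\n</s><s>Assistant: ")
    return "".join(parts)

# With an empty history this reproduces the template used in the examples above.
print(build_prompt([], "介绍一下中国"))
```

The returned string can be fed to `tokenizer(...)` exactly as in the examples above.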
