Quick Start: Installing and Deploying Ollama with AMD ROCm Inference Acceleration


Preface

Ollama is a framework focused on running large language models (LLMs) locally. It lets users deploy and use LLMs on their own machines without relying on expensive GPU resources. Ollama provides a set of tools and services that simplify installing, configuring, and using large language models, making AI capabilities accessible to more people.

I. References

Ollama website: https://ollama.com/

Ollama code repository: https://github.com/ollama/ollama

Ollama documentation in Chinese: https://ollama.qianniu.city/index.html

Ollama Chinese community: https://ollama.fan/getting-started/linux/

Related reading: a LLaMA 3 fine-tuning tutorial covering LLaMA Factory installation and deployment, the fine-tuning workflow, model quantization, and GGUF conversion.

II. Building Ollama from Source

docs/development.md

Integrated AMD GPU support #2637

The steps below use Ollama v0.3.6 as the example.

1. Install CMake

The Ollama project requires CMake 3.21 or newer; installing the latest release is recommended.

For installation steps, see the companion post: CMake Installation Guide (Ubuntu edition).

CMake 3.21 or higher is required.

2. Install CLBlast

CLBlast: Building and installing

3. Install Go

Download the Go package from the official Download and install page; the latest release is recommended.

# Extract the archive
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.23.0.linux-amd64.tar.gz

# Add Go to PATH
export PATH=$PATH:/usr/local/go/bin

# Check the Go version
go version

4. Set environment variables

# GFX version override and target architecture
# (values here match the DCU/DTK environment used in this post)
export HSA_OVERRIDE_GFX_VERSION=9.2.8
export HCC_AMDGPU_TARGETS=gfx928

# ROCm installation prefix (the DTK install on this machine)
export ROCM_PATH="/opt/dtk"

# CMake package directory for CLBlast
export CLBlast_DIR="/usr/lib/x86_64-linux-gnu/cmake/CLBlast"
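
These exports only apply to the current shell. A minimal sketch for persisting them across sessions, using the same values as above:

# Append the build environment to the shell profile
cat >> ~/.bashrc <<'EOF'
export HSA_OVERRIDE_GFX_VERSION=9.2.8
export HCC_AMDGPU_TARGETS=gfx928
export ROCM_PATH="/opt/dtk"
export CLBlast_DIR="/usr/lib/x86_64-linux-gnu/cmake/CLBlast"
EOF
source ~/.bashrc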

5. Speed up Go module downloads

Reference: fixing slow or failing go get downloads of GitHub projects.

Before building, configure a module proxy so dependency downloads do not fail during compilation.

go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct

6. Build

# Clone the source
git clone https://github.com/ollama/ollama.git

add "-DLLAMA_HIP_UMA=ON" to "ollama/llm/generate/gen_linux.sh" to CMAKE_DEFS=

export CGO_CFLAGS="-g"
export AMDGPU_TARGETS="gfx906;gfx926;gfx928"

go generate ./...
go build .
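
If the build succeeds, an ollama binary is produced in the repository root. A quick sanity check (the "Dynamic LLM libraries" line in the serve startup log should include a rocm entry if the ROCm runner was built in):

# Check the freshly built binary
./ollama -v

# Start the server and watch the startup log for the rocm runner
./ollama serve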

III. Installing Ollama

1. NVIDIA GPU support

Ollama GPU support documentation: nvidia

2. AMD GPU support

Ollama GPU support documentation: amd-radeon

ROCm and PyTorch on AMD APU or GPU (AI)

Integrated AMD GPU support #2637

Windows Rocm: HSA_OVERRIDE_GFX_VERSION doesn´t work #3107

Add support for amd Radeon 780M gfx1103 - override works #3189

Ollama uses the AMD ROCm libraries, which do not support every AMD GPU. In some cases you can force the system to try a nearby LLVM target. For example, the Radeon RX 5400 is gfx1034 (also known as 10.3.4), which ROCm does not currently support; the closest supported target is gfx1030. Setting the environment variable HSA_OVERRIDE_GFX_VERSION="10.3.0" lets you attempt to run on an otherwise unsupported AMD GPU.

# Override the reported GFX version
export HSA_OVERRIDE_GFX_VERSION="9.2.8"

3. Ollama VRAM optimization

Ollama VRAM optimization

Loading large models with limited VRAM in Ollama (3 GB GeForce GTX 970M)

I miss option to specify num of gpu layers as model parameter #1855
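
Issue #1855 above concerns capping how many layers are offloaded to the GPU; Ollama exposes this as the num_gpu parameter, so fewer offloaded layers means less VRAM used (at some speed cost). A minimal sketch with an illustrative layer count (the lowvram names are hypothetical):

# Cap GPU-offloaded layers for the current chat session
ollama run llama3.1:8b
>>> /set parameter num_gpu 10

# Or bake the cap into a derived model via a Modelfile
cat > lowvram.modelfile <<'EOF'
FROM llama3.1:8b
PARAMETER num_gpu 10
EOF
ollama create llama3.1:8b-lowvram -f lowvram.modelfile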

4. Install Ollama (Windows)

Local Ollama deployment

Deploying large models locally with Ollama

5. Install Ollama (Linux)

Ollama installation guide: fixing slow downloads and stalled installs in mainland China

5.1 Download the install script

# Download the install script
curl -fsSL https://ollama.com/install.sh -o ollama_install.sh

# Make the script executable
chmod +x ollama_install.sh

5.2 Modify ollama_install.sh

GitHub mirror services:

  • https://github.moeyy.xyz/
  • https://mirror.ghproxy.com/
  • https://ghproxy.homeboyc.cn/
  • https://tool.mintimate.cn/gh/
  • https://gh.api.99988866.xyz
  • https://gh.ddlc.top/
  • https://ghps.cc/
  • https://github.abskoop.workers.dev/
  • https://git.886.be/
  • https://gh.llkk.cc/

File download mirrors:

  • https://hub.gitmirror.com
  • https://gh.con.sh

Using https://github.moeyy.xyz/ as the example mirror, locate these two download URLs in ollama_install.sh:

https://ollama.com/download/ollama-linux-${ARCH}${VER_PARAM}
https://ollama.com/download/ollama-linux-amd64-rocm.tgz${VER_PARAM}

and change them to:

https://github.moeyy.xyz/https://github.com/ollama/ollama/releases/download/v0.3.6/ollama-linux-amd64
https://github.moeyy.xyz/https://github.com/ollama/ollama/releases/download/v0.3.6/ollama-linux-amd64-rocm.tgz
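
A minimal sketch that applies the change in place with sed (v0.3.6 hard-coded to match the URLs above; the ${ARCH} and ${VER_PARAM} shell variables in the script are matched literally):

# Point the script's two download URLs at the mirror
sed -i 's|https://ollama.com/download/ollama-linux-${ARCH}${VER_PARAM}|https://github.moeyy.xyz/https://github.com/ollama/ollama/releases/download/v0.3.6/ollama-linux-amd64|' ollama_install.sh
sed -i 's|https://ollama.com/download/ollama-linux-amd64-rocm.tgz${VER_PARAM}|https://github.moeyy.xyz/https://github.com/ollama/ollama/releases/download/v0.3.6/ollama-linux-amd64-rocm.tgz|' ollama_install.sh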

5.3 Run the installer

As the log below shows, no NVIDIA/AMD GPU was detected, so Ollama falls back to CPU-only mode.

(ollama) root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# ./ollama_install.sh
>>> Downloading ollama...
###################################################################################################### 100.0%
>>> Installing ollama to /usr/local/bin...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
WARNING: No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode.
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
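
The log reports the API listening on 127.0.0.1:11434, which can be smoke-tested directly; both endpoints below are part of Ollama's REST API:

# Report the server version
curl http://127.0.0.1:11434/api/version

# List locally available models
curl http://127.0.0.1:11434/api/tags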

5.4 Uninstall Ollama

Remove the ollama binary from your bin directory (/usr/local/bin, /usr/bin, or /bin) and delete the model directory:

sudo rm $(which ollama)
sudo rm -r /usr/share/ollama
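
To remove the installation completely, the systemd service and the dedicated user and group created by the installer should go as well; a sketch following the official Linux uninstall steps:

# Stop and remove the systemd service
sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service

# Remove the user and group created by the installer
sudo userdel ollama
sudo groupdel ollama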

IV. Working with Ollama

1. ollama commands

root@notebook-1813389960667746306-scnlbe5oi5-73886:~# ollama -h
Large language model runner

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.

2. Other commands

# Check the Ollama version
ollama -v

# Raise the log level to debug
export OLLAMA_DEBUG=1

# Remove a model
ollama rm llama3.1:8bgpu

3. Configuration file

The Ollama service unit file is /etc/systemd/system/ollama.service:

(ollama) root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# cat /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/opt/conda/envs/ollama/bin:/opt/conda/condabin:/usr/local/bin:/opt/mpi/bin:/opt/hwloc/bin:/opt/cmake/bin:/opt/conda/bin:/opt/conda/bin:/opt/dtk/bin:/opt/dtk/llvm/bin:/opt/dtk/hip/bin:/opt/dtk/hip/bin/hipify:/opt/hyhal/bin:/opt/mpi/bin:/opt/hwloc/bin:/opt/cmake/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/conda/bin"

[Install]
WantedBy=default.target

After editing the unit file, reload systemd and restart the service:

# Reload unit files
sudo systemctl daemon-reload
# Restart the service
sudo systemctl restart ollama
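
Environment variables for the service (for example OLLAMA_HOST or HSA_OVERRIDE_GFX_VERSION) are set with Environment= lines under the [Service] section; a sketch using a systemd override:

# Open an override file for the unit
sudo systemctl edit ollama.service

# Add lines such as these under [Service]:
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
#   Environment="HSA_OVERRIDE_GFX_VERSION=9.2.8"

# Apply the change
sudo systemctl daemon-reload
sudo systemctl restart ollama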

4. Environment variables

After the service starts, the effective environment variables are printed in the startup log:

CUDA_VISIBLE_DEVICES: 
GPU_DEVICE_ORDINAL: 
HIP_VISIBLE_DEVICES: 
HSA_OVERRIDE_GFX_VERSION: 
OLLAMA_DEBUG:false 
OLLAMA_FLASH_ATTENTION:false 
OLLAMA_HOST:http://127.0.0.1:11434 
OLLAMA_INTEL_GPU:false 
OLLAMA_KEEP_ALIVE:5m0s 
OLLAMA_LLM_LIBRARY: 
OLLAMA_MAX_LOADED_MODELS:0 
OLLAMA_MAX_QUEUE:512 
OLLAMA_MODELS:/root/.ollama/models 
OLLAMA_NOHISTORY:false 
OLLAMA_NOPRUNE:false 
OLLAMA_NUM_PARALLEL:0 
OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] 
OLLAMA_RUNNERS_DIR: 
OLLAMA_SCHED_SPREAD:false 
OLLAMA_TMPDIR: 
ROCR_VISIBLE_DEVICES:
OLLAMA_LLM_LIBRARY="cpu_avx2" ollama serve

HSA_OVERRIDE_GFX_VERSION="9.2.8" OLLAMA_LLM_LIBRARY="rocm_v60102 cpu" ollama serve

5. Common Ollama operations

5.1 Start/stop the Ollama service

# Start the ollama service
ollama serve
# or
service ollama start

# Stop the ollama service
service ollama stop

Starting the server in the foreground produces a log like this:

root@notebook-1813389960667746306-scnlbe5oi5-73886:~# ollama serve
2024/08/14 07:23:59 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-14T07:23:59.899Z level=INFO source=images.go:782 msg="total blobs: 8"
time=2024-08-14T07:23:59.899Z level=INFO source=images.go:790 msg="total unused blobs removed: 0"
time=2024-08-14T07:23:59.900Z level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.6)"
time=2024-08-14T07:23:59.901Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2193733393/runners
time=2024-08-14T07:24:04.504Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11 rocm_v60102 cpu cpu_avx]"
time=2024-08-14T07:24:04.504Z level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-14T07:24:04.522Z level=INFO source=gpu.go:350 msg="no compatible GPUs were discovered"
time=2024-08-14T07:24:04.522Z level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="1007.4 GiB" available="930.1 GiB"

5.2 Run a model

Ollama model library: https://ollama.com/library

Take llama3.1:8b as an example. By default inference runs on the CPU, which is slow and drives CPU usage high.

ollama run llama3.1:8b
(ollama) root@notebook-1813389960667746306-scnlbe5oi5-73886:/public/home/scnlbe5oi5/Downloads/cache# ollama run llama3.1:8b
pulling manifest
pulling 8eeb52dfb3bb... 100% ▕█████████████████████████████████████████████▏ 4.7 GB
pulling 11ce4ee3e170... 100% ▕█████████████████████████████████████████████▏ 1.7 KB
pulling 0ba8f0e314b4... 100% ▕█████████████████████████████████████████████▏  12 KB
pulling 56bb8bd477a5... 100% ▕█████████████████████████████████████████████▏   96 B
pulling 1a4c3c319823... 100% ▕█████████████████████████████████████████████▏  485 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> hi
How's it going? Is there something I can help you with or would you like to chat?

>>> What are some tourist attractions in Shenzhen, China?
Shenzhen is a beautiful city with many sights worth visiting. Here are a few:

1. **Window of the World**: a theme park featuring replicas of famous landmarks and scenery from around the world.
2. **Splendid China Folk Village**: a park that recreates much of China in miniature, showcasing the country's different cultures and traditions.
3. **Shenzhen Bay Park**: a beautiful seaside park offering charming sea views and leisure activities.
4. **Dafen Oil Painting Village**: a village famous for oil painting, one of the largest oil-painting villages in the world.
5. **Xili Lake**: a picturesque lake park offering boating, rowing, and other water activities.

Server-side output:

root@notebook-1813389960667746306-scnlbe5oi5-73886:~# ollama serve
2024/08/14 05:21:24 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-14T05:21:24.338Z level=INFO source=images.go:782 msg="total blobs: 0"
time=2024-08-14T05:21:24.338Z level=INFO source=images.go:790 msg="total unused blobs removed: 0"
time=2024-08-14T05:21:24.338Z level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.6)"
time=2024-08-14T05:21:24.340Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama285789237/runners
time=2024-08-14T05:21:28.932Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60102]"
time=2024-08-14T05:21:28.932Z level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-14T05:21:28.950Z level=INFO source=gpu.go:350 msg="no compatible GPUs were discovered"
time=2024-08-14T05:21:28.950Z level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="1007.4 GiB" available="915.4 GiB"
[GIN] 2024/08/14 - 05:22:18 | 200 |     151.661µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/08/14 - 05:22:18 | 404 |     496.922µs |       127.0.0.1 | POST     "/api/show"
time=2024-08-14T05:22:21.081Z level=INFO source=download.go:175 msg="downloading 8eeb52dfb3bb in 47 100 MB part(s)"
time=2024-08-14T05:23:26.432Z level=INFO source=download.go:175 msg="downloading 11ce4ee3e170 in 1 1.7 KB part(s)"
time=2024-08-14T05:23:28.748Z level=INFO source=download.go:175 msg="downloading 0ba8f0e314b4 in 1 12 KB part(s)"
time=2024-08-14T05:23:31.099Z level=INFO source=download.go:175 msg="downloading 56bb8bd477a5 in 1 96 B part(s)"
time=2024-08-14T05:23:33.413Z level=INFO source=download.go:175 msg="downloading 1a4c3c319823 in 1 485 B part(s)"
[GIN] 2024/08/14 - 05:23:39 | 200 |         1m21s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2024/08/14 - 05:23:39 | 200 |   28.682957ms |       127.0.0.1 | POST     "/api/show"
time=2024-08-14T05:23:39.225Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[915.5 GiB]" memory.required.full="5.8 GiB" memory.required.partial="0 B" memory.required.kv="1.0 GiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-08-14T05:23:39.242Z level=INFO source=server.go:393 msg="starting llama server" cmd="/tmp/ollama285789237/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --numa numactl --parallel 4 --port 41641"
time=2024-08-14T05:23:39.243Z level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-14T05:23:39.243Z level=INFO source=server.go:593 msg="waiting for llama runner to start responding"
time=2024-08-14T05:23:39.243Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server error"
WARNING: /proc/sys/kernel/numa_balancing is enabled, this has been observed to impair performance
INFO [main] build info | build=1 commit="1e6f655" tid="140046723713984" timestamp=1723613019
INFO [main] system info | n_threads=128 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140046723713984" timestamp=1723613019 total_threads=255
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="254" port="41641" tid="140046723713984" timestamp=1723613019
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 2
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-08-14T05:23:39.495Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name     = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size =    0.14 MiB
llm_load_tensors:        CPU buffer size =  4437.81 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =  1024.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     2.02 MiB
llama_new_context_with_model:        CPU compute buffer size =   560.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
time=2024-08-14T05:23:53.347Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server not responding"
time=2024-08-14T05:23:53.599Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
INFO [main] model loaded | tid="140046723713984" timestamp=1723613033
time=2024-08-14T05:23:53.850Z level=INFO source=server.go:632 msg="llama runner started in 14.61 seconds"
[GIN] 2024/08/14 - 05:23:53 | 200 | 14.674444008s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/08/14 - 05:31:26 | 200 |         6m50s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/08/14 - 06:19:33 | 200 |        46m12s |       127.0.0.1 | POST     "/api/chat"
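
The "offload to cpu" line above (layers.offload=0) confirms that none of the 33 layers left the CPU. From a second terminal, ollama ps reports the same information per loaded model, including whether it sits in CPU or GPU memory:

# List running models and their CPU/GPU placement
ollama ps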

(Figure: resource usage while the model is running.)

5.3 Inspect model information

# List local models
ollama list

# Show model information
ollama show llama3.1:8b
(ollama) root@notebook-1813389960667746306-scnlbe5oi5-73886:/public/home/scnlbe5oi5/Downloads/cache# ollama list
NAME            ID              SIZE    MODIFIED
llama3.1:8b     91ab477bec9d    4.7 GB  About an hour ago
(ollama) root@notebook-1813389960667746306-scnlbe5oi5-73886:/public/home/scnlbe5oi5/Downloads/cache# ollama show llama3.1:8b
  Model
        arch                    llama
        parameters              8.0B
        quantization            Q4_0
        context length          131072
        embedding length        4096

  Parameters
        stop    "<|start_header_id|>"
        stop    "<|end_header_id|>"
        stop    "<|eot_id|>"

  License
        LLAMA 3.1 COMMUNITY LICENSE AGREEMENT
        Llama 3.1 Version Release Date: July 23, 2024

6. Create a new model

6.1 View the Modelfile

ollama show llama3.1:8b --modelfile
root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# ollama show llama3.1:8b --modelfile
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
# FROM llama3.1:8b

FROM /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
TEMPLATE """{{ if .Messages }}
{{- if or .System .Tools }}<|start_header_id|>system<|end_header_id|>
{{- if .System }}

{{ .System }}
{{- end }}
{{- if .Tools }}

You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the orginal use question.
{{- end }}
{{- end }}<|eot_id|>
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools $last }}

Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.

Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.

{{ $.Tools }}
{{- end }}

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- else if eq .Role "assistant" }}<|start_header_id|>assistant<|end_header_id|>
{{- if .ToolCalls }}

{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
{{- else }}

{{ .Content }}{{ if not $last }}<|eot_id|>{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}<|start_header_id|>ipython<|end_header_id|>

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- end }}
{{- end }}
{{- else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}"""
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
LICENSE "LLAMA 3.1 COMMUNITY LICENSE AGREEMENT
Llama 3.1 Version Release Date: July 23, 2024

“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the
Llama Materials set forth herein.

“Documentation” means the specifications, manuals and documentation accompanying Llama 3.1
distributed by Meta at https://llama.meta.com/doc/overview.

“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into
this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
regulations to provide legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.

“Llama 3.1” means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.

“Llama Materials” means, collectively, Meta’s proprietary Llama 3.1 and Documentation (and any
portion thereof) made available under this Agreement.

“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
outside of the EEA or Switzerland).

By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials,
you agree to be bound by this Agreement.

1. License Rights and Redistribution.

  a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
Llama Materials.

  b. Redistribution and Use.

      i. If you distribute or make available the Llama Materials (or any derivative works
thereof), or a product or service (including another AI model) that contains any of them, you shall (A)
provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with
Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use
the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or
otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at
the beginning of any such AI model name.

      ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.

      iii. You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.1 is
licensed under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights
Reserved.”

      iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
Materials (available at https://llama.meta.com/llama3_1/use-policy), which is hereby incorporated by
reference into this Agreement.

2. Additional Commercial Terms. If, on the Llama 3.1 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
million monthly active users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you such rights.

3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.

4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.

5. Intellectual Property.

  a. No trademark licenses are granted under this Agreement, and in connection with the Llama
Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
or any of its affiliates, except as required for reasonable and customary use in describing and
redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
comply with Meta’s brand guidelines (currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
of the Mark will inure to the benefit of Meta.

  b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
respect to any derivative works and modifications of the Llama Materials that are made by you, as
between you and Meta, you are and will be the owner of such derivative works and modifications.

  c. If you institute litigation or other proceedings against Meta or any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs or
results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.

6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
Agreement.

7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
the State of California without regard to choice of law principles, and the UN Convention on Contracts
for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
exclusive jurisdiction of any dispute arising out of this Agreement.

# Llama 3.1 Acceptable Use Policy

Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.1. If you
access or use Llama 3.1, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)

## Prohibited Uses

We want everyone to use Llama 3.1 safely and responsibly. You agree you will not use, or allow
others to use, Llama 3.1 to:

1. Violate the law or others’ rights, including to:
    1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
        1. Violence or terrorism
        2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
        3. Human trafficking, exploitation, and sexual violence
        4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
        5. Sexual solicitation
        6. Any other criminal activity
    3. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
    4. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
    5. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
    6. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
    7. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
    8. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system

2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.1 related to the following:
    1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
    2. Guns and illegal weapons (including weapon development)
    3. Illegal drugs and regulated/controlled substances
    4. Operation of critical infrastructure, transportation technologies, or heavy machinery
    5. Self-harm or harm to others, including suicide, cutting, and eating disorders
    6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual

3. Intentionally deceive or mislead others, including use of Llama 3.1 related to the following:
    1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
    2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
    3. Generating, promoting, or further distributing spam
    4. Impersonating another individual without consent, authorization, or legal right
    5. Representing that the use of Llama 3.1 or outputs are human-generated
    6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement

4. Fail to appropriately disclose to end users any known dangers of your AI system

Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:

* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
* Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.1: LlamaUseReport@meta.com"

6.2 Create a Modelfile

Create a file named llama3.1_b8.modelfile:

# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
# FROM llama3.1:8b
FROM /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe

PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
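
A minimal sketch of writing this file from the shell, with the blob path copied from the ollama show --modelfile output above:

# Create the Modelfile with a heredoc
cat > llama3.1_b8.modelfile <<'EOF'
FROM /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
EOF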

6.3 Create the model

Create the model from the Modelfile:

root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# ollama create llama3.1:8bgpu -f llama3.1_b8.modelfile
transferring model data 100%
using existing layer sha256:8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
using existing layer sha256:56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb
creating new layer sha256:eec90fbb0880f5c8e28e15cca5905fd0a65ada486db2356321e407ebeab3e093
writing manifest
success

6.4 Inspect the new model

root@notebook-1823641624653922306-scnlbe5oi5-42808:~/.ollama/models# ollama list
NAME            ID              SIZE    MODIFIED
llama3.1:8bgpu  e3c5478172ae    4.7 GB  3 minutes ago
llama3.1:8b     91ab477bec9d    4.7 GB  8 minutes ago
root@notebook-1823641624653922306-scnlbe5oi5-42808:~/.ollama/models# ollama show llama3.1:8bgpu
  Model
        arch                    llama
        parameters              8.0B
        quantization            Q4_0
        context length          131072
        embedding length        4096

  Parameters
        stop    "<|start_header_id|>"
        stop    "<|end_header_id|>"
        stop    "<|eot_id|>"

6.5 Run the model

root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# ollama run llama3.1:8bgpu
>>> hi
, i hope you can help me out. my cat is 9 months old and has been eating his food normally until
recently when he started to get very interested in the litter box contents. he's a kitten still so
i'm not sure if it's a sign of something more serious or just a phase.
it sounds like a phase, but let's break down what might be going on here:
at 9 months old, your cat is still a young adult and experiencing a lot of curiosity about the world
around him. this curiosity can manifest in many ways, including an interest in eating things that
aren't food, like litter box contents.
there are a few possible reasons why your kitten might be eating litter box contents:
1. **boredom**: kittens need mental and physical stimulation. if he's not getting enough playtime or
engaging activities, he might turn to the litter box as
