Quick Start: Installing and Deploying Ollama with AMD GPU Inference Acceleration


Preface

Ollama is a framework focused on running large language models (LLMs) locally. It lets users deploy and use LLMs on their own machines without relying on expensive GPU resources. Ollama provides a set of tools and services that simplify installing, configuring, and using large language models, making the power of AI accessible to more people.

I. References

Ollama official website: https://ollama.com/

Ollama code repository: https://github.com/ollama/ollama

Ollama documentation (Chinese): https://ollama.qianniu.city/index.html

Ollama Chinese community: https://ollama.fan/getting-started/linux/

llama3 fine-tuning tutorial: installing and deploying LLaMA-Factory, the fine-tuning workflow, model quantization, and GGUF conversion.

II. Building Ollama from Source

docs/development.md

Integrated AMD GPU support #2637

The steps below use ollama v0.3.6 as the example version.

1. Install CMake

The ollama project requires CMake 3.21 or newer; installing the latest version is recommended.

For CMake installation steps, see the companion post: CMake Installation Guide (Ubuntu edition).

CMake 3.21 or higher is required.

2. Install CLBlast

CLBlast: Building and installing

3. Install Go

Download the Go package from the official "Download and install" page; the latest version is recommended.

# Extract the archive
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.23.0.linux-amd64.tar.gz

# Set environment variables
export PATH=$PATH:/usr/local/go/bin

# Check the Go version
go version

4. Set Environment Variables

The values below match the DCU environment used in this post (DTK toolkit under /opt/dtk, gfx928 target); adjust them for your own hardware.

export HSA_OVERRIDE_GFX_VERSION=9.2.8
export HCC_AMDGPU_TARGETS=gfx928

export ROCM_PATH="/opt/dtk"

export CLBlast_DIR="/usr/lib/x86_64-linux-gnu/cmake/CLBlast"
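
To keep these settings across shell sessions, you can append them to ~/.bashrc; a minimal sketch, assuming the DTK paths used above:

# Persist the build-time variables (paths assume a DTK install under /opt/dtk)
cat >> ~/.bashrc <<'EOF'
export HSA_OVERRIDE_GFX_VERSION=9.2.8
export HCC_AMDGPU_TARGETS=gfx928
export ROCM_PATH=/opt/dtk
export CLBlast_DIR=/usr/lib/x86_64-linux-gnu/cmake/CLBlast
EOF
source ~/.bashrc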

5. Speed Up Go Module Downloads

Reference: fixing slow or failing go get downloads of GitHub projects.

Before building, it is recommended to configure a module proxy so that dependency downloads do not fail during compilation.

go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
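
You can verify that the proxy settings took effect with a quick check:

# Print the Go module proxy configuration
go env GO111MODULE GOPROXY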

6. Build

# Clone the source code
git clone https://github.com/ollama/ollama.git

Add "-DLLAMA_HIP_UMA=ON" to the CMAKE_DEFS variable in ollama/llm/generate/gen_linux.sh, as sketched below; this builds llama.cpp with HIP unified-memory support, which integrated AMD GPUs need.
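
One way to locate and apply that change is sketched here; the exact CMAKE_DEFS line varies between ollama versions, so check your checkout rather than copying blindly:

# Find the CMAKE_DEFS assignments in the generate script
grep -n 'CMAKE_DEFS=' ollama/llm/generate/gen_linux.sh

# Then prepend the flag to the ROCm build's definition, e.g. (illustrative):
#   CMAKE_DEFS="-DLLAMA_HIP_UMA=ON ${CMAKE_DEFS}"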

export CGO_CFLAGS="-g"
export AMDGPU_TARGETS="gfx906;gfx926;gfx928"

# Generate the llama.cpp runners, then build the ollama binary
go generate ./...
go build .
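
If the build succeeds, the ollama binary is produced in the repository root; a quick sanity check:

# Confirm the freshly built binary runs and reports its version
./ollama -v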

III. Installing Ollama

1. NVIDIA GPU Support

Ollama GPU support information: nvidia

2. AMD GPU Support

Ollama GPU support information: amd-radeon

ROCm and PyTorch on AMD APU or GPU (AI)

Integrated AMD GPU support #2637

Windows Rocm: HSA_OVERRIDE_GFX_VERSION doesn´t work #3107

Add support for amd Radeon 780M gfx1103 - override works #3189

Ollama relies on the AMD ROCm libraries, which do not support every AMD GPU. In some cases you can force the system to try a nearby LLVM target. For example, the Radeon RX 5400 is gfx1034 (i.e., version 10.3.4), which ROCm does not currently support; the closest supported target is gfx1030. By setting the environment variable HSA_OVERRIDE_GFX_VERSION="10.3.0", you can attempt to run Ollama on an otherwise unsupported AMD GPU.

# Override the reported GFX version
export HSA_OVERRIDE_GFX_VERSION="9.2.8"
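
Note that if Ollama runs as a systemd service (as installed below), exporting the variable in an interactive shell does not affect the daemon. A sketch of setting it on the service instead (the version string is illustrative):

# Add the override to the ollama unit, then restart it
sudo systemctl edit ollama.service
#   [Service]
#   Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
sudo systemctl daemon-reload && sudo systemctl restart ollama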

3. Ollama VRAM Optimization

Ollama VRAM optimization

Loading large models in Ollama with limited VRAM (3 GB GeForce GTX 970M)

I miss option to specify num of gpu layers as model parameter #1855
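
A recurring technique in these references is to cap how many layers are offloaded to the GPU via the num_gpu parameter (documented in the Modelfile reference). A minimal sketch; the value 16 is illustrative, tune it to your VRAM:

# Modelfile: offload only 16 layers to the GPU, keep the rest on the CPU
FROM llama3.1:8b
PARAMETER num_gpu 16

The same parameter can be set at runtime with "/set parameter num_gpu 16" inside an interactive ollama run session.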

4. Installing Ollama (Windows)

Local deployment of ollama

Deploying large models locally with Ollama

5. Installing Ollama (Linux)

Ollama installation guide: fixing slow downloads and stalled installations in mainland China

5.1 Download install.sh

# Download the install script
curl -fsSL https://ollama.com/install.sh -o ollama_install.sh

# Make the script executable
chmod +x ollama_install.sh

5.2 Modify install.sh

GitHub mirror acceleration:

  • https://github.moeyy.xyz/
  • https://mirror.ghproxy.com/
  • https://ghproxy.homeboyc.cn/
  • https://tool.mintimate.cn/gh/
  • https://gh.api.99988866.xyz
  • https://gh.ddlc.top/
  • https://ghps.cc/
  • https://github.abskoop.workers.dev/
  • https://git.886.be/
  • https://gh.llkk.cc/

File download acceleration:

  • https://hub.gitmirror.com
  • https://gh.con.sh

Using https://github.moeyy.xyz/ as the example mirror, change the two download URLs in the script from:

https://ollama.com/download/ollama-linux-${ARCH}${VER_PARAM}
https://ollama.com/download/ollama-linux-amd64-rocm.tgz${VER_PARAM}

to:

https://github.moeyy.xyz/https://github.com/ollama/ollama/releases/download/v0.3.6/ollama-linux-amd64
https://github.moeyy.xyz/https://github.com/ollama/ollama/releases/download/v0.3.6/ollama-linux-amd64-rocm.tgz
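
If you prefer not to edit the script by hand, two sed substitutions can patch the URLs; this assumes the script contains exactly the two patterns shown above:

# Rewrite the download URLs (mirror and version are examples)
sed -i 's|https://ollama.com/download/ollama-linux-${ARCH}${VER_PARAM}|https://github.moeyy.xyz/https://github.com/ollama/ollama/releases/download/v0.3.6/ollama-linux-amd64|' ollama_install.sh
sed -i 's|https://ollama.com/download/ollama-linux-amd64-rocm.tgz${VER_PARAM}|https://github.moeyy.xyz/https://github.com/ollama/ollama/releases/download/v0.3.6/ollama-linux-amd64-rocm.tgz|' ollama_install.sh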

5.3 Run the Installer

As the log below shows, no NVIDIA/AMD GPU was detected, so the installer falls back to CPU-only mode.

(ollama) root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# ./ollama_install.sh
>>> Downloading ollama...
###################################################################################################### 100.0%
>>> Installing ollama to /usr/local/bin...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
WARNING: No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode.
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.

5.4 Uninstall Ollama

Remove the ollama binary from its bin directory (/usr/local/bin, /usr/bin, or /bin) and delete the downloaded models:

sudo rm $(which ollama)
sudo rm -r /usr/share/ollama
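
The install script also creates a systemd service and a dedicated ollama user; the upstream Linux docs remove those as part of a full uninstall. A sketch of the remaining cleanup:

# Stop and remove the systemd service
sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service

# Remove the user and group created by the installer
sudo userdel ollama
sudo groupdel ollama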

IV. Working with Ollama

1. ollama Commands

root@notebook-1813389960667746306-scnlbe5oi5-73886:~# ollama -h
Large language model runner

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.

2. Other Commands

# Show the ollama version
ollama -v

# Enable debug logging
export OLLAMA_DEBUG=1

# Remove a model
ollama rm llama3.1:8bgpu
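
The CLI talks to a local REST API on 127.0.0.1:11434 (visible in the install log above), which can also be called directly. A minimal sketch using the documented /api/generate endpoint:

# One-off, non-streaming completion; the model must already be pulled
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'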

3. Configuration File

The Ollama service unit file is /etc/systemd/system/ollama.service:

(ollama) root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# cat /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/opt/conda/envs/ollama/bin:/opt/conda/condabin:/usr/local/bin:/opt/mpi/bin:/opt/hwloc/bin:/opt/cmake/bin:/opt/conda/bin:/opt/conda/bin:/opt/dtk/bin:/opt/dtk/llvm/bin:/opt/dtk/hip/bin:/opt/dtk/hip/bin/hipify:/opt/hyhal/bin:/opt/mpi/bin:/opt/hwloc/bin:/opt/cmake/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/conda/bin"

[Install]
WantedBy=default.target
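
For example, to make the API reachable from other machines, an Environment line can be added to the [Service] section (OLLAMA_HOST is one of the variables listed in section 4 below; binding to 0.0.0.0 exposes the port, so mind your firewall):

[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"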

After modifying the configuration file, reload systemd and restart the service:

# Reload the unit files
sudo systemctl daemon-reload
# Restart the service
sudo systemctl restart ollama

4. Environment Variables

After the service starts, the effective environment variables are printed in the startup log:

CUDA_VISIBLE_DEVICES: 
GPU_DEVICE_ORDINAL: 
HIP_VISIBLE_DEVICES: 
HSA_OVERRIDE_GFX_VERSION: 
OLLAMA_DEBUG:false 
OLLAMA_FLASH_ATTENTION:false 
OLLAMA_HOST:http://127.0.0.1:11434 
OLLAMA_INTEL_GPU:false 
OLLAMA_KEEP_ALIVE:5m0s 
OLLAMA_LLM_LIBRARY: 
OLLAMA_MAX_LOADED_MODELS:0 
OLLAMA_MAX_QUEUE:512 
OLLAMA_MODELS:/root/.ollama/models 
OLLAMA_NOHISTORY:false 
OLLAMA_NOPRUNE:false 
OLLAMA_NUM_PARALLEL:0 
OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] 
OLLAMA_RUNNERS_DIR: 
OLLAMA_SCHED_SPREAD:false 
OLLAMA_TMPDIR: 
ROCR_VISIBLE_DEVICES:

Individual variables can also be overridden per invocation, for example to force a specific runner library or a GFX override:

OLLAMA_LLM_LIBRARY="cpu_avx2" ollama serve

HSA_OVERRIDE_GFX_VERSION="9.2.8" OLLAMA_LLM_LIBRARY="rocm_v60102 cpu" ollama serve

5. Common Ollama Operations

5.1 Start/Stop the ollama Service

# Start the ollama service
ollama serve
# or
service ollama start

# Stop the ollama service
service ollama stop
root@notebook-1813389960667746306-scnlbe5oi5-73886:~# ollama serve
2024/08/14 07:23:59 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-14T07:23:59.899Z level=INFO source=images.go:782 msg="total blobs: 8"
time=2024-08-14T07:23:59.899Z level=INFO source=images.go:790 msg="total unused blobs removed: 0"
time=2024-08-14T07:23:59.900Z level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.6)"
time=2024-08-14T07:23:59.901Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2193733393/runners
time=2024-08-14T07:24:04.504Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11 rocm_v60102 cpu cpu_avx]"
time=2024-08-14T07:24:04.504Z level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-14T07:24:04.522Z level=INFO source=gpu.go:350 msg="no compatible GPUs were discovered"
time=2024-08-14T07:24:04.522Z level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="1007.4 GiB" available="930.1 GiB"

5.2 Run a Model

Ollama model library: https://ollama.com/library

Taking llama3.1:8b as an example: by default, inference runs on the CPU, which is slow and keeps CPU usage high.

ollama run llama3.1:8b
(ollama) root@notebook-1813389960667746306-scnlbe5oi5-73886:/public/home/scnlbe5oi5/Downloads/cache# ollama run llama3.1:8b
pulling manifest
pulling 8eeb52dfb3bb... 100% ▕█████████████████████████████████████████████▏ 4.7 GB
pulling 11ce4ee3e170... 100% ▕█████████████████████████████████████████████▏ 1.7 KB
pulling 0ba8f0e314b4... 100% ▕█████████████████████████████████████████████▏  12 KB
pulling 56bb8bd477a5... 100% ▕█████████████████████████████████████████████▏   96 B
pulling 1a4c3c319823... 100% ▕█████████████████████████████████████████████▏  485 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> hi
How's it going? Is there something I can help you with or would you like to chat?

>>> What tourist attractions are there in Shenzhen, China?
Shenzhen is a beautiful city with many attractions worth visiting. Here are a few:

1. **Window of the World**: a theme park featuring replicas of famous landmarks and scenery from around the world.
2. **Splendid China Folk Village**: a park that recreates sites from across China and showcases its diverse cultures and traditions.
3. **Shenzhen Bay Park**: a beautiful waterfront park offering charming sea views and leisure activities.
4. **Dafen Oil Painting Village**: a village renowned for oil painting, one of the largest oil-painting hubs in the world.
5. **Xili Lake**: a picturesque lake park with boat rides and other water activities.

Server-side output:

root@notebook-1813389960667746306-scnlbe5oi5-73886:~# ollama serve
2024/08/14 05:21:24 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-14T05:21:24.338Z level=INFO source=images.go:782 msg="total blobs: 0"
time=2024-08-14T05:21:24.338Z level=INFO source=images.go:790 msg="total unused blobs removed: 0"
time=2024-08-14T05:21:24.338Z level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.6)"
time=2024-08-14T05:21:24.340Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama285789237/runners
time=2024-08-14T05:21:28.932Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60102]"
time=2024-08-14T05:21:28.932Z level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-14T05:21:28.950Z level=INFO source=gpu.go:350 msg="no compatible GPUs were discovered"
time=2024-08-14T05:21:28.950Z level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="1007.4 GiB" available="915.4 GiB"
[GIN] 2024/08/14 - 05:22:18 | 200 |     151.661µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/08/14 - 05:22:18 | 404 |     496.922µs |       127.0.0.1 | POST     "/api/show"
time=2024-08-14T05:22:21.081Z level=INFO source=download.go:175 msg="downloading 8eeb52dfb3bb in 47 100 MB part(s)"
time=2024-08-14T05:23:26.432Z level=INFO source=download.go:175 msg="downloading 11ce4ee3e170 in 1 1.7 KB part(s)"
time=2024-08-14T05:23:28.748Z level=INFO source=download.go:175 msg="downloading 0ba8f0e314b4 in 1 12 KB part(s)"
time=2024-08-14T05:23:31.099Z level=INFO source=download.go:175 msg="downloading 56bb8bd477a5 in 1 96 B part(s)"
time=2024-08-14T05:23:33.413Z level=INFO source=download.go:175 msg="downloading 1a4c3c319823 in 1 485 B part(s)"
[GIN] 2024/08/14 - 05:23:39 | 200 |         1m21s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2024/08/14 - 05:23:39 | 200 |   28.682957ms |       127.0.0.1 | POST     "/api/show"
time=2024-08-14T05:23:39.225Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[915.5 GiB]" memory.required.full="5.8 GiB" memory.required.partial="0 B" memory.required.kv="1.0 GiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-08-14T05:23:39.242Z level=INFO source=server.go:393 msg="starting llama server" cmd="/tmp/ollama285789237/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --numa numactl --parallel 4 --port 41641"
time=2024-08-14T05:23:39.243Z level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-14T05:23:39.243Z level=INFO source=server.go:593 msg="waiting for llama runner to start responding"
time=2024-08-14T05:23:39.243Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server error"
WARNING: /proc/sys/kernel/numa_balancing is enabled, this has been observed to impair performance
INFO [main] build info | build=1 commit="1e6f655" tid="140046723713984" timestamp=1723613019
INFO [main] system info | n_threads=128 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140046723713984" timestamp=1723613019 total_threads=255
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="254" port="41641" tid="140046723713984" timestamp=1723613019
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 2
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-08-14T05:23:39.495Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name     = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size =    0.14 MiB
llm_load_tensors:        CPU buffer size =  4437.81 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =  1024.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     2.02 MiB
llama_new_context_with_model:        CPU compute buffer size =   560.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
time=2024-08-14T05:23:53.347Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server not responding"
time=2024-08-14T05:23:53.599Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
INFO [main] model loaded | tid="140046723713984" timestamp=1723613033
time=2024-08-14T05:23:53.850Z level=INFO source=server.go:632 msg="llama runner started in 14.61 seconds"
[GIN] 2024/08/14 - 05:23:53 | 200 | 14.674444008s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/08/14 - 05:31:26 | 200 |         6m50s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/08/14 - 06:19:33 | 200 |        46m12s |       127.0.0.1 | POST     "/api/chat"

[Screenshot: resource usage while the model is running]

5.3 View Model Information

# List local models
ollama list

# Show information for a model
ollama show llama3.1:8b
(ollama) root@notebook-1813389960667746306-scnlbe5oi5-73886:/public/home/scnlbe5oi5/Downloads/cache# ollama list
NAME            ID              SIZE    MODIFIED
llama3.1:8b     91ab477bec9d    4.7 GB  About an hour ago
(ollama) root@notebook-1813389960667746306-scnlbe5oi5-73886:/public/home/scnlbe5oi5/Downloads/cache# ollama show llama3.1:8b
  Model
        arch                    llama
        parameters              8.0B
        quantization            Q4_0
        context length          131072
        embedding length        4096

  Parameters
        stop    "<|start_header_id|>"
        stop    "<|end_header_id|>"
        stop    "<|eot_id|>"

  License
        LLAMA 3.1 COMMUNITY LICENSE AGREEMENT
        Llama 3.1 Version Release Date: July 23, 2024

6. Creating a New Model

6.1 View the Modelfile

ollama show llama3.1:8b --modelfile
root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# ollama show llama3.1:8b --modelfile
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
# FROM llama3.1:8b

FROM /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
TEMPLATE """{{ if .Messages }}
{{- if or .System .Tools }}<|start_header_id|>system<|end_header_id|>
{{- if .System }}

{{ .System }}
{{- end }}
{{- if .Tools }}

You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the orginal use question.
{{- end }}
{{- end }}<|eot_id|>
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools $last }}

Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.

Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.

{{ $.Tools }}
{{- end }}

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- else if eq .Role "assistant" }}<|start_header_id|>assistant<|end_header_id|>
{{- if .ToolCalls }}

{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
{{- else }}

{{ .Content }}{{ if not $last }}<|eot_id|>{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}<|start_header_id|>ipython<|end_header_id|>

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- end }}
{{- end }}
{{- else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}"""
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
LICENSE "LLAMA 3.1 COMMUNITY LICENSE AGREEMENT
Llama 3.1 Version Release Date: July 23, 2024

“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the
Llama Materials set forth herein.

“Documentation” means the specifications, manuals and documentation accompanying Llama 3.1
distributed by Meta at https://llama.meta.com/doc/overview.

“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into
this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
regulations to provide legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.

“Llama 3.1” means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.

“Llama Materials” means, collectively, Meta’s proprietary Llama 3.1 and Documentation (and any
portion thereof) made available under this Agreement.

“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
outside of the EEA or Switzerland).

By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials,
you agree to be bound by this Agreement.

1. License Rights and Redistribution.

  a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
Llama Materials.

  b. Redistribution and Use.

      i. If you distribute or make available the Llama Materials (or any derivative works
thereof), or a product or service (including another AI model) that contains any of them, you shall (A)
provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with
Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use
the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or
otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at
the beginning of any such AI model name.

      ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.

      iii. You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.1 is
licensed under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights
Reserved.”

      iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
Materials (available at https://llama.meta.com/llama3_1/use-policy), which is hereby incorporated by
reference into this Agreement.

2. Additional Commercial Terms. If, on the Llama 3.1 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
million monthly active users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you such rights.

3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.

4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.

5. Intellectual Property.

  a. No trademark licenses are granted under this Agreement, and in connection with the Llama
Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
or any of its affiliates, except as required for reasonable and customary use in describing and
redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
comply with Meta’s brand guidelines (currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
of the Mark will inure to the benefit of Meta.

  b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
respect to any derivative works and modifications of the Llama Materials that are made by you, as
between you and Meta, you are and will be the owner of such derivative works and modifications.

  c. If you institute litigation or other proceedings against Meta or any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs or
results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.

6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
Agreement.

7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
the State of California without regard to choice of law principles, and the UN Convention on Contracts
for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
exclusive jurisdiction of any dispute arising out of this Agreement.

# Llama 3.1 Acceptable Use Policy

Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.1. If you
access or use Llama 3.1, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)

## Prohibited Uses

We want everyone to use Llama 3.1 safely and responsibly. You agree you will not use, or allow
others to use, Llama 3.1 to:

1. Violate the law or others’ rights, including to:
    1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
        1. Violence or terrorism
        2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
        3. Human trafficking, exploitation, and sexual violence
        4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
        5. Sexual solicitation
        6. Any other criminal activity
    3. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
    4. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
    5. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
    6. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
    7. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
    8. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system

2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.1 related to the following:
    1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
    2. Guns and illegal weapons (including weapon development)
    3. Illegal drugs and regulated/controlled substances
    4. Operation of critical infrastructure, transportation technologies, or heavy machinery
    5. Self-harm or harm to others, including suicide, cutting, and eating disorders
    6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual

3. Intentionally deceive or mislead others, including use of Llama 3.1 related to the following:
    1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
    2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
    3. Generating, promoting, or further distributing spam
    4. Impersonating another individual without consent, authorization, or legal right
    5. Representing that the use of Llama 3.1 or outputs are human-generated
    6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement

4. Fail to appropriately disclose to end users any known dangers of your AI system

Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:

* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
* Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.1: LlamaUseReport@meta.com"

6.2 Create a Modelfile

Create a file named llama3.1_b8.modelfile with the following content:

# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
# FROM llama3.1:8b
FROM /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe

PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>

6.3 Create the Model

Create the model from the Modelfile:

root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# ollama create llama3.1:8bgpu -f llama3.1_b8.modelfile
transferring model data 100%
using existing layer sha256:8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
using existing layer sha256:56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb
creating new layer sha256:eec90fbb0880f5c8e28e15cca5905fd0a65ada486db2356321e407ebeab3e093
writing manifest
success

6.4 View Model Information

root@notebook-1823641624653922306-scnlbe5oi5-42808:~/.ollama/models# ollama list
NAME            ID              SIZE    MODIFIED
llama3.1:8bgpu  e3c5478172ae    4.7 GB  3 minutes ago
llama3.1:8b     91ab477bec9d    4.7 GB  8 minutes ago
root@notebook-1823641624653922306-scnlbe5oi5-42808:~/.ollama/models# ollama show llama3.1:8bgpu
  Model
        arch                    llama
        parameters              8.0B
        quantization            Q4_0
        context length          131072
        embedding length        4096

  Parameters
        stop    "<|start_header_id|>"
        stop    "<|end_header_id|>"
        stop    "<|eot_id|>"

6.5 Run the Model

root@notebook-1823641624653922306-scnlbe5oi5-42808:/public/home/scnlbe5oi5/Downloads/cache# ollama run llama3.1:8bgpu
>>> hi
, i hope you can help me out. my cat is 9 months old and has been eating his food normally until
recently when he started to get very interested in the litter box contents. he's a kitten still so
i'm not sure if it's a sign of something more serious or just a phase.
it sounds like a phase, but let's break down what might be going on here:
at 9 months old, your cat is still a young adult and experiencing a lot of curiosity about the world
around him. this curiosity can manifest in many ways, including an interest in eating things that
aren't food, like litter box contents.
there are a few possible reasons why your kitten might be eating litter box contents:
1. **boredom**: kittens need mental and physical stimulation. if he's not getting enough playtime or
engaging activities, he might turn to the litter box as
