InternLM Camp - Basic Level - Fine-Tuning a Personal Assistant with XTuner


Fine-Tuning a Personal Assistant with XTuner

  • Environment Setup
  • Previewing the Base Model
  • Fine-Tuning
    • Data Preparation
    • Fine-Tuning Configuration
    • Training
    • Weight Format Conversion
    • Model Merging
    • Web Chat

Environment Setup

# Create a virtual environment
conda create -n xtuner0812 python=3.10 -y

# Activate the virtual environment (note: all subsequent steps must be run inside it)
conda activate xtuner0812

# Install the required libraries
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia -y
# Install other dependencies
pip install transformers==4.39.3
pip install streamlit==1.36.0

# Install XTuner
mkdir -p /root/demo

cd /root/demo

git clone -b v0.1.21  https://github.com/InternLM/XTuner /root/demo/XTuner
cd /root/demo/XTuner

# Run the installation
pip install -e '.[deepspeed]' -i https://mirrors.aliyun.com/pypi/simple/
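
After the install finishes, a quick check (a minimal sketch, assuming the packages above installed without errors) confirms that the expected versions are in place:

# Quick sanity check; run inside the xtuner0812 environment
import torch
import transformers
from importlib.metadata import version

print('torch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('transformers:', transformers.__version__)
print('xtuner:', version('xtuner'))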

Previewing the Base Model

Run the demo once against the original model, so that its default replies can later be compared with the fine-tuned assistant:

python -m streamlit run /root/Tutorial/tools/xtuner_streamlit_demo.py


Fine-Tuning

Data Preparation

Generate the fine-tuning data with the following script:

import json
import os

# Set the user's name
name = '靓仔'
# Number of times to repeat the base conversations
n = 1875

# Initial data
data = [
    {"conversation": [{"input": "请介绍一下你自己", "output": "我是{}的小助手,内在是上海AI实验室书生·浦语的1.8B大模型哦".format(name)}]},
    {"conversation": [{"input": "你是谁", "output": "我是{}的小助手,内在是上海AI实验室书生·浦语的1.8B大模型哦".format(name)}]},
    {"conversation": [{"input": "你在实战营做什么", "output": "我在这里帮助{}完成XTuner微调个人小助手的任务".format(name)}]},
    {"conversation": [{"input": "你可以做什么", "output": "我在这里帮助{}完成XTuner微调个人小助手的任务".format(name)}]}
]

# Repeat the initial conversations so that they make up the bulk of the fine-tuning data
base_records = list(data)
for i in range(n):
    data.extend(base_records)

# Write the data list to 'datas/assistant.json'
os.makedirs('datas', exist_ok=True)
with open('datas/assistant.json', 'w', encoding='utf-8') as f:
    # json.dump serializes the list as JSON
    # ensure_ascii=False keeps the Chinese characters readable
    # indent=4 pretty-prints the file
    json.dump(data, f, ensure_ascii=False, indent=4)
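
A quick check (a minimal sketch, assuming it is run from the same directory as the script above) confirms the file was written and shows the record structure XTuner expects:

import json

# Each record has the form {"conversation": [{"input": ..., "output": ...}]}
with open('datas/assistant.json', 'r', encoding='utf-8') as f:
    records = json.load(f)

print('total records:', len(records))
print('first record:', json.dumps(records[0], ensure_ascii=False, indent=2))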

Fine-Tuning Configuration

# List all built-in config files
xtuner list-cfg 

# Filter the configs related to internlm2
xtuner list-cfg -p internlm2

# Copy the chosen config file to the working directory
xtuner copy-cfg internlm2_chat_1_8b_qlora_alpaca_e3 /root/demo/xtuner_demo/

The modified config file is shown below; the lines marked "# modified" are the changes made to the copied template:

# Copyright (c) OpenMMLab. All rights reserved.
import torch
from datasets import load_dataset
from mmengine.dataset import DefaultSampler
from mmengine.hooks import (CheckpointHook, DistSamplerSeedHook, IterTimerHook,
                            LoggerHook, ParamSchedulerHook)
from mmengine.optim import AmpOptimWrapper, CosineAnnealingLR, LinearLR
from peft import LoraConfig
from torch.optim import AdamW
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig)

from xtuner.dataset import process_hf_dataset
from xtuner.dataset.collate_fns import default_collate_fn
from xtuner.dataset.map_fns import alpaca_map_fn, template_map_fn_factory
from xtuner.engine.hooks import (DatasetInfoHook, EvaluateChatHook,
                                 VarlenAttnArgsToMessageHubHook)
from xtuner.engine.runner import TrainLoop
from xtuner.model import SupervisedFinetune
from xtuner.parallel.sequence import SequenceParallelSampler
from xtuner.utils import PROMPT_TEMPLATE, SYSTEM_TEMPLATE

#######################################################################
#                          PART 1  Settings                           #
#######################################################################
# Model
pretrained_model_name_or_path = '/root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b'  # modified: base model path
use_varlen_attn = False

# Data
alpaca_en_path = '/root/demo/xtuner_demo/datas/assistant.json'  # modified: path to the generated data
prompt_template = PROMPT_TEMPLATE.internlm2_chat
max_length = 2048
pack_to_max_length = True

# parallel
sequence_parallel_size = 1

# Scheduler & Optimizer
batch_size = 1  # per_device
accumulative_counts = 16
accumulative_counts *= sequence_parallel_size
dataloader_num_workers = 0
max_epochs = 3
optim_type = AdamW
lr = 2e-4
betas = (0.9, 0.999)
weight_decay = 0
max_norm = 1  # grad clip
warmup_ratio = 0.03

# Save
save_steps = 500
save_total_limit = 2  # Maximum checkpoints to keep (-1 means unlimited)

# Evaluate the generation performance during the training
evaluation_freq = 500
SYSTEM = SYSTEM_TEMPLATE.alpaca
evaluation_inputs = [
    '请介绍一下你自己', 'Please introduce yourself'
]  # modified: evaluation prompts

#######################################################################
#                      PART 2  Model & Tokenizer                      #
#######################################################################
tokenizer = dict(
    type=AutoTokenizer.from_pretrained,
    pretrained_model_name_or_path=pretrained_model_name_or_path,
    trust_remote_code=True,
    padding_side='right')

model = dict(
    type=SupervisedFinetune,
    use_varlen_attn=use_varlen_attn,
    llm=dict(
        type=AutoModelForCausalLM.from_pretrained,
        pretrained_model_name_or_path=pretrained_model_name_or_path,
        trust_remote_code=True,
        torch_dtype=torch.float16,
        quantization_config=dict(
            type=BitsAndBytesConfig,
            load_in_4bit=True,
            load_in_8bit=False,
            llm_int8_threshold=6.0,
            llm_int8_has_fp16_weight=False,
            bnb_4bit_compute_dtype=torch.float16,
            bnb_4bit_use_double_quant=True,
            bnb_4bit_quant_type='nf4')),
    lora=dict(
        type=LoraConfig,
        r=64,
        lora_alpha=16,
        lora_dropout=0.1,
        bias='none',
        task_type='CAUSAL_LM'))

#######################################################################
#                      PART 3  Dataset & Dataloader                   #
#######################################################################
alpaca_en = dict(
    type=process_hf_dataset,
    dataset=dict(type=load_dataset, path='json', data_files=dict(train=alpaca_en_path)),  # modified: load the local JSON file
    tokenizer=tokenizer,
    max_length=max_length,
    dataset_map_fn=None,  # modified: the data is already in conversation format, so no map fn is needed
    template_map_fn=dict(
        type=template_map_fn_factory, template=prompt_template),
    remove_unused_columns=True,
    shuffle_before_pack=True,
    pack_to_max_length=pack_to_max_length,
    use_varlen_attn=use_varlen_attn)

sampler = SequenceParallelSampler \
    if sequence_parallel_size > 1 else DefaultSampler
train_dataloader = dict(
    batch_size=batch_size,
    num_workers=dataloader_num_workers,
    dataset=alpaca_en,
    sampler=dict(type=sampler, shuffle=True),
    collate_fn=dict(type=default_collate_fn, use_varlen_attn=use_varlen_attn))

#######################################################################
#                    PART 4  Scheduler & Optimizer                    #
#######################################################################
# optimizer
optim_wrapper = dict(
    type=AmpOptimWrapper,
    optimizer=dict(
        type=optim_type, lr=lr, betas=betas, weight_decay=weight_decay),
    clip_grad=dict(max_norm=max_norm, error_if_nonfinite=False),
    accumulative_counts=accumulative_counts,
    loss_scale='dynamic',
    dtype='float16')

# learning policy
# More information: https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/param_scheduler.md  # noqa: E501
param_scheduler = [
    dict(
        type=LinearLR,
        start_factor=1e-5,
        by_epoch=True,
        begin=0,
        end=warmup_ratio * max_epochs,
        convert_to_iter_based=True),
    dict(
        type=CosineAnnealingLR,
        eta_min=0.0,
        by_epoch=True,
        begin=warmup_ratio * max_epochs,
        end=max_epochs,
        convert_to_iter_based=True)
]

# train, val, test setting
train_cfg = dict(type=TrainLoop, max_epochs=max_epochs)

#######################################################################
#                           PART 5  Runtime                           #
#######################################################################
# Log the dialogue periodically during the training process, optional
custom_hooks = [
    dict(type=DatasetInfoHook, tokenizer=tokenizer),
    dict(
        type=EvaluateChatHook,
        tokenizer=tokenizer,
        every_n_iters=evaluation_freq,
        evaluation_inputs=evaluation_inputs,
        system=SYSTEM,
        prompt_template=prompt_template)
]

if use_varlen_attn:
    custom_hooks += [dict(type=VarlenAttnArgsToMessageHubHook)]

# configure default hooks
default_hooks = dict(
    # record the time of every iteration.
    timer=dict(type=IterTimerHook),
    # print log every 10 iterations.
    logger=dict(type=LoggerHook, log_metric_by_epoch=False, interval=10),
    # enable the parameter scheduler.
    param_scheduler=dict(type=ParamSchedulerHook),
    # save checkpoint per `save_steps`.
    checkpoint=dict(
        type=CheckpointHook,
        by_epoch=False,
        interval=save_steps,
        max_keep_ckpts=save_total_limit),
    # set sampler seed in distributed environment.
    sampler_seed=dict(type=DistSamplerSeedHook),
)

# configure environment
env_cfg = dict(
    # whether to enable cudnn benchmark
    cudnn_benchmark=False,
    # set multi process parameters
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    # set distributed parameters
    dist_cfg=dict(backend='nccl'),
)

# set visualizer
visualizer = None

# set log level
log_level = 'INFO'

# load from which checkpoint
load_from = None

# whether to resume training from the loaded checkpoint
resume = False

# Defaults to use random seed and disable `deterministic`
randomness = dict(seed=None, deterministic=False)

# set log processor
log_processor = dict(by_epoch=False)

Training

# Start training
xtuner train ./internlm2_chat_1_8b_qlora_alpaca_e3_copy.py

After training finishes, a work_dirs directory appears under the working directory; it contains the training logs and the trained weights.
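
The conversion step below needs the path of the final .pth checkpoint. A small sketch for locating the newest one (assuming the work_dirs layout produced by the command above):

import glob
import os

# Pick the most recently written checkpoint from the training run
ckpts = glob.glob('./work_dirs/internlm2_chat_1_8b_qlora_alpaca_e3_copy/*.pth')
latest = max(ckpts, key=os.path.getmtime)
print('latest checkpoint:', latest)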

Weight Format Conversion

Convert the training checkpoint (.pth) into a HuggingFace-format LoRA adapter:

export MKL_SERVICE_FORCE_INTEL=1
export MKL_THREADING_LAYER=GNU
xtuner convert pth_to_hf ./internlm2_chat_1_8b_qlora_alpaca_e3_copy.py /root/demo/xtuner_demo/work_dirs/internlm2_chat_1_8b_qlora_alpaca_e3_copy/iter_192.pth ./hf
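
After conversion, ./hf holds a HuggingFace/PEFT-format adapter rather than a full model. A quick check (a hedged sketch; exact file names can vary between versions) shows its contents:

import json
import os

# The adapter directory typically contains adapter_config.json plus the adapter weights
print(sorted(os.listdir('./hf')))

with open('./hf/adapter_config.json', 'r', encoding='utf-8') as f:
    adapter_cfg = json.load(f)
print('peft_type:', adapter_cfg.get('peft_type'), '| r:', adapter_cfg.get('r'))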

Model Merging

A model fine-tuned with LoRA or QLoRA is not a complete model by itself; it is an extra set of layers (an adapter). After training, this adapter must be merged with the original model before it can be used normally.

A fully fine-tuned (full) model does not need this merge step, because full fine-tuning modifies the weights of the original model directly rather than training a separate adapter.

export MKL_SERVICE_FORCE_INTEL=1
export MKL_THREADING_LAYER=GNU
xtuner convert merge /root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b ./hf ./merged --max-shard-size 2GB
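
Conceptually, the merge folds the LoRA weights back into the base model's weights. A minimal sketch of the same idea using peft directly (an illustration only, not what xtuner convert merge actually runs; ./merged_peft is a hypothetical output directory chosen to avoid clashing with ./merged):

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = '/root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b'

# Load the base model, attach the trained adapter, then fold the adapter into the weights
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(model, './hf')
merged = model.merge_and_unload()

merged.save_pretrained('./merged_peft', max_shard_size='2GB')
AutoTokenizer.from_pretrained(base, trust_remote_code=True).save_pretrained('./merged_peft')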

Web Chat

The following Streamlit script (with model_name_or_path pointing at the merged model) provides a chat page for the fine-tuned assistant:

import copy
import warnings
from dataclasses import asdict, dataclass
from typing import Callable, List, Optional

import streamlit as st
import torch
from torch import nn
from transformers.generation.utils import (LogitsProcessorList,
                                           StoppingCriteriaList)
from transformers.utils import logging

from transformers import AutoTokenizer, AutoModelForCausalLM  # isort: skip

logger = logging.get_logger(__name__)


model_name_or_path = "/root/demo/xtuner_demo/merged"

@dataclass
class GenerationConfig:
    # this config is used for chat to provide more diversity
    max_length: int = 2048
    top_p: float = 0.75
    temperature: float = 0.1
    do_sample: bool = True
    repetition_penalty: float = 1.000


@torch.inference_mode()
def generate_interactive(
    model,
    tokenizer,
    prompt,
    generation_config: Optional[GenerationConfig] = None,
    logits_processor: Optional[LogitsProcessorList] = None,
    stopping_criteria: Optional[StoppingCriteriaList] = None,
    prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor],
                                                List[int]]] = None,
    additional_eos_token_id: Optional[int] = None,
    **kwargs,
):
    inputs = tokenizer([prompt], padding=True, return_tensors='pt')
    input_length = len(inputs['input_ids'][0])
    for k, v in inputs.items():
        inputs[k] = v.cuda()
    input_ids = inputs['input_ids']
    _, input_ids_seq_length = input_ids.shape[0], input_ids.shape[-1]
    if generation_config is None:
        generation_config = model.generation_config
    generation_config = copy.deepcopy(generation_config)
    
    model_kwargs = generation_config.update(**kwargs)
    
    bos_token_id, eos_token_id = (  # noqa: F841  # pylint: disable=W0612
        generation_config.bos_token_id,
        generation_config.eos_token_id,
    )
    if isinstance(eos_token_id, int):
        eos_token_id = [eos_token_id]
    if additional_eos_token_id is not None:
        eos_token_id.append(additional_eos_token_id)
    has_default_max_length = kwargs.get(
        'max_length') is None and generation_config.max_length is not None
    if has_default_max_length and generation_config.max_new_tokens is None:
        warnings.warn(
            f"Using 'max_length''s default ({repr(generation_config.max_length)}) \
                to control the generation length. "
            'This behaviour is deprecated and will be removed from the \
                config in v5 of Transformers -- we'
            ' recommend using `max_new_tokens` to control the maximum \
                length of the generation.',
            UserWarning,
        )
    elif generation_config.max_new_tokens is not None:
        generation_config.max_length = generation_config.max_new_tokens + \
            input_ids_seq_length
        if not has_default_max_length:
            logger.warn(  # pylint: disable=W4902
                f"Both 'max_new_tokens' (={generation_config.max_new_tokens}) "
                f"and 'max_length'(={generation_config.max_length}) seem to "
                "have been set. 'max_new_tokens' will take precedence. "
                'Please refer to the documentation for more information. '
                '(https://huggingface.co/docs/transformers/main/'
                'en/main_classes/text_generation)',
                UserWarning,
            )

    if input_ids_seq_length >= generation_config.max_length:
        input_ids_string = 'input_ids'
        logger.warning(
            f"Input length of {input_ids_string} is {input_ids_seq_length}, "
            f"but 'max_length' is set to {generation_config.max_length}. "
            'This can lead to unexpected behavior. You should consider'
            " increasing 'max_new_tokens'.")

    # 2. Set generation parameters if not already defined
    logits_processor = logits_processor if logits_processor is not None \
        else LogitsProcessorList()
    stopping_criteria = stopping_criteria if stopping_criteria is not None \
        else StoppingCriteriaList()

    logits_processor = model._get_logits_processor(
        generation_config=generation_config,
        input_ids_seq_length=input_ids_seq_length,
        encoder_input_ids=input_ids,
        prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
        logits_processor=logits_processor,
    )

    stopping_criteria = model._get_stopping_criteria(
        generation_config=generation_config,
        stopping_criteria=stopping_criteria)
    logits_warper = model._get_logits_warper(generation_config)

    unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
    scores = None
    while True:
        model_inputs = model.prepare_inputs_for_generation(
            input_ids, **model_kwargs)
        # forward pass to get next token
        outputs = model(
            **model_inputs,
            return_dict=True,
            output_attentions=False,
            output_hidden_states=False,
        )

        next_token_logits = outputs.logits[:, -1, :]

        # pre-process distribution
        next_token_scores = logits_processor(input_ids, next_token_logits)
        next_token_scores = logits_warper(input_ids, next_token_scores)

        # sample
        probs = nn.functional.softmax(next_token_scores, dim=-1)
        if generation_config.do_sample:
            next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
        else:
            next_tokens = torch.argmax(probs, dim=-1)

        # update generated ids, model inputs, and length for next step
        input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
        model_kwargs = model._update_model_kwargs_for_generation(
            outputs, model_kwargs, is_encoder_decoder=False)
        unfinished_sequences = unfinished_sequences.mul(
            (min(next_tokens != i for i in eos_token_id)).long())

        output_token_ids = input_ids[0].cpu().tolist()
        output_token_ids = output_token_ids[input_length:]
        for each_eos_token_id in eos_token_id:
            if output_token_ids[-1] == each_eos_token_id:
                output_token_ids = output_token_ids[:-1]
        response = tokenizer.decode(output_token_ids)

        yield response
        # stop when each sentence is finished
        # or if we exceed the maximum length
        if unfinished_sequences.max() == 0 or stopping_criteria(
                input_ids, scores):
            break


def on_btn_click():
    del st.session_state.messages


@st.cache_resource
def load_model():
    model = (AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                                  trust_remote_code=True).to(
                                                      torch.bfloat16).cuda())
    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path,
                                              trust_remote_code=True)
    return model, tokenizer


def prepare_generation_config():
    with st.sidebar:
        max_length = st.slider('Max Length',
                               min_value=8,
                               max_value=32768,
                               value=2048)
        top_p = st.slider('Top P', 0.0, 1.0, 0.75, step=0.01)
        temperature = st.slider('Temperature', 0.0, 1.0, 0.1, step=0.01)
        st.button('Clear Chat History', on_click=on_btn_click)

    generation_config = GenerationConfig(max_length=max_length,
                                         top_p=top_p,
                                         temperature=temperature)

    return generation_config


user_prompt = '<|im_start|>user\n{user}<|im_end|>\n'
robot_prompt = '<|im_start|>assistant\n{robot}<|im_end|>\n'
cur_query_prompt = '<|im_start|>user\n{user}<|im_end|>\n\
    <|im_start|>assistant\n'


def combine_history(prompt):
    messages = st.session_state.messages
    meta_instruction = ('')
    total_prompt = f"<s><|im_start|>system\n{meta_instruction}<|im_end|>\n"
    for message in messages:
        cur_content = message['content']
        if message['role'] == 'user':
            cur_prompt = user_prompt.format(user=cur_content)
        elif message['role'] == 'robot':
            cur_prompt = robot_prompt.format(robot=cur_content)
        else:
            raise RuntimeError
        total_prompt += cur_prompt
    total_prompt = total_prompt + cur_query_prompt.format(user=prompt)
    return total_prompt


def main():
    # torch.cuda.empty_cache()
    print('load model begin.')
    model, tokenizer = load_model()
    print('load model end.')


    st.title('InternLM2-Chat-1.8B')

    generation_config = prepare_generation_config()

    # Initialize chat history
    if 'messages' not in st.session_state:
        st.session_state.messages = []

    # Display chat messages from history on app rerun
    for message in st.session_state.messages:
        with st.chat_message(message['role'], avatar=message.get('avatar')):
            st.markdown(message['content'])

    # Accept user input
    if prompt := st.chat_input('What is up?'):
        # Display user message in chat message container
        with st.chat_message('user'):
            st.markdown(prompt)
        real_prompt = combine_history(prompt)
        # Add user message to chat history
        st.session_state.messages.append({
            'role': 'user',
            'content': prompt,
        })

        with st.chat_message('robot'):
            message_placeholder = st.empty()
            for cur_response in generate_interactive(
                    model=model,
                    tokenizer=tokenizer,
                    prompt=real_prompt,
                    additional_eos_token_id=92542,
                    **asdict(generation_config),
            ):
                # Display robot response in chat message container
                message_placeholder.markdown(cur_response + '▌')
            message_placeholder.markdown(cur_response)
        # Add robot response to chat history
        st.session_state.messages.append({
            'role': 'robot',
            'content': cur_response,  # pylint: disable=undefined-loop-variable
        })
        torch.cuda.empty_cache()


if __name__ == '__main__':
    main()
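
Before launching the page, a quick command-line check of the merged model (a hedged sketch; the prompt format simply mirrors the template strings used in the script above) can confirm that the new self-introduction took effect:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = '/root/demo/xtuner_demo/merged'

tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True,
                                             torch_dtype=torch.bfloat16).cuda().eval()

# Same chat template as the Streamlit script above; 92542 is the <|im_end|> token id it uses
prompt = ('<s><|im_start|>system\n<|im_end|>\n'
          '<|im_start|>user\n请介绍一下你自己<|im_end|>\n'
          '<|im_start|>assistant\n')
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=False,
                     eos_token_id=[tokenizer.eos_token_id, 92542])
print(tokenizer.decode(out[0][inputs['input_ids'].shape[-1]:], skip_special_tokens=True))

To open the web page itself, save the script above (for example as xtuner_streamlit_demo.py, a name assumed here) and launch it with python -m streamlit run, the same way as in the preview step.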
