Practical Guide | Build an AI Front-End Demo App Quickly with Streamlit


----------------------- 🎈 Related API guides 🎈--------------------------

🚀 Gradio: Practical Guide | Everything you need to know about quickly building AI model demo interfaces with Gradio - CSDN Blog

🚀 Streamlit: Practical Guide | Build an AI front-end demo app quickly with Streamlit - CSDN Blog

🚀 Flask: Practical Guide | Learn to write Flask APIs for AI in one article (templates included) - CSDN Blog

Streamlit is a Python framework for machine learning and data-visualization apps: a polished online app can be built in just a few lines of code. Compared with Gradio, it offers a wider range of display features.

Table of Contents

1. Installing Streamlit

2. Streamlit Syntax

2.1. Basic Syntax

2.2. Intermediate Syntax

2.2.1. Images, Audio, and Video

2.2.2. Progress Indicators

2.3. Advanced Syntax

2.3.1. @st.cache_data

2.3.2. st.cache_resource

3. Building a Simple App

Reading data and plotting it live

4. Streamlit Examples for Deep-Learning Projects

4.1. Example 1: Text Generation

4.1.1. Interacting with ChatGLM

4.1.2. Interacting with OpenAI

4.2. Image Tasks

4.2.1. Image Classification

4.2.2. Image Generation

4.3. Audio Tasks

4.3.1. Speech Synthesis (TTS)

4.3.2. Speech-to-Text (STT)


Official documentation: Get started - Streamlit Docs

1. Installing Streamlit

# Installation
pip install streamlit
pip install streamlit-chat


# Test the installation
streamlit hello

This launches a local demo app in your browser with several example pages.

2. Streamlit Syntax

2.1. Basic Syntax

import streamlit as st

The most commonly used elements (a runnable sketch follows the list):

  • Title st.title(): st.title("Title")
  • Write st.write(): st.write("Hello world")
  • Text st.text(): a single line of fixed-width text
  • Multi-line text area st.text_area(): st.text_area("Text box", value='', key=None)
  • Slider st.slider(): st.slider("Slider", 0, 100)
  • Button st.button(): st.button("Button")
  • Text input st.text_input(): st.text_input("Ask the user for input")
  • Radio buttons st.radio()
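Putting these together, a minimal sketch (the labels and variable names are illustrative, not part of the API):

import streamlit as st

st.title("Widget demo")
st.write("Hello world")
st.text("A single line of fixed-width text")

# Each widget returns its current value every time the script reruns.
name = st.text_input("Please enter your name")
bio = st.text_area("Text box", value="", key="bio")
level = st.slider("Pick a level", 0, 100, 50)
color = st.radio("Pick a color", ["red", "green", "blue"])

if st.button("Button"):
    st.write(f"{name} picked {color} at level {level}")

Save this as app.py and launch it with `streamlit run app.py`.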

2.2. Intermediate Syntax

2.2.1. Images, Audio, and Video

All three accept array values, raw bytes, file objects, or file paths/URLs; see the sketch after the list.

  • st.image()
  • st.audio()
  • st.video()
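A minimal sketch of all three (the file names below are placeholders; point them at media that exists on your machine):

import numpy as np
import streamlit as st

# From a file path or URL
st.image("photo.png", caption="From a local file (placeholder path)")
st.video("https://example.com/clip.mp4")  # placeholder URL

# From raw bytes
with open("speech.wav", "rb") as f:      # placeholder path
    st.audio(f.read(), format="audio/wav")

# From a NumPy array: recent Streamlit versions accept a 1-D array
# together with an explicit sample rate
sr = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1 s of a 440 Hz tone
st.audio(tone, sample_rate=sr)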

2.2.2. Progress Indicators

  • st.progress() displays a progress bar
  • st.spinner() displays a status message while a block of code runs
  • st.error() displays an error message
  • st.warning() displays a warning message (a sketch of all four follows the list)
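A minimal sketch of these elements together:

import time
import streamlit as st

bar = st.progress(0)  # progress bar from 0 to 100
with st.spinner("Processing..."):
    for pct in range(100):
        time.sleep(0.01)       # stand-in for real work
        bar.progress(pct + 1)

st.warning("This run used demo settings.")        # non-fatal notice
st.error("Shown here only to demonstrate st.error().")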

2.3. Advanced Syntax

2.3.1. @st.cache_data

When you mark a function with Streamlit's caching decorator, Streamlit checks two things every time the function is called:

  • the input parameters of the call
  • the code inside the function

If it has seen that combination before, it returns the cached result instead of re-running the function; a sketch follows.
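A minimal sketch (the dataset URL reuses the Uber CSV from section 3):

import pandas as pd
import streamlit as st

DATA_URL = ('https://s3-us-west-2.amazonaws.com/'
            'streamlit-demo-data/uber-raw-data-sep14.csv.gz')

@st.cache_data
def load_data(nrows: int) -> pd.DataFrame:
    # Slow download: executed only on the first call with this argument.
    return pd.read_csv(DATA_URL, nrows=nrows)

df = load_data(1000)   # cache miss: downloads the file
df = load_data(1000)   # cache hit: returns the stored DataFrame instantly
st.write(df.head())

Changing either the argument (load_data(2000)) or the function body invalidates the cached entry.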

2.3.2. st.cache_resource

A decorator for caching functions that return global resources (e.g., database connections, ML models).

Cached objects are shared across all users, sessions, and reruns. They must be thread-safe, because they can be accessed from multiple threads at the same time. If thread safety is a concern, consider using st.session_state to store per-session resources instead.

By default, all parameters of a cache_resource function must be hashable. Any parameter whose name starts with _ is not hashed. A sketch follows.
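A minimal sketch, assuming the Hugging Face transformers package is installed (the model name is one public checkpoint, used here only as an example):

import streamlit as st
from transformers import pipeline

@st.cache_resource
def get_classifier(model_name: str):
    # Loaded once per server process; shared by all users, sessions, and
    # reruns, so the returned object must tolerate concurrent calls.
    return pipeline("sentiment-analysis", model=model_name)

clf = get_classifier("distilbert-base-uncased-finetuned-sst-2-english")

text = st.text_input("Text to classify")
if text:
    st.write(clf(text))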

3. Building a Simple App

Reading data and plotting it live

import streamlit as st
import pandas as pd
import numpy as np

st.title('Uber pickups in NYC')

DATE_COLUMN = 'date/time'
DATA_URL = ('https://s3-us-west-2.amazonaws.com/'
            'streamlit-demo-data/uber-raw-data-sep14.csv.gz')

# Cache the download so reruns are instant
@st.cache_data
def load_data(nrows):
    # Read the CSV file
    data = pd.read_csv(DATA_URL, nrows=nrows)
    # Lower-case all column names
    lowercase = lambda x: str(x).lower()
    data.rename(lowercase, axis='columns', inplace=True)
    # Parse the date/time column into pandas datetimes
    data[DATE_COLUMN] = pd.to_datetime(data[DATE_COLUMN])
    # Return the prepared data
    return data

# Placeholder text shown while the data loads
data_load_state = st.text('Loading data...')
# Load 10,000 rows
data = load_data(10000)
# Replace the placeholder once loading finishes
data_load_state.text("Done! (using st.cache_data)")

# Inspect the raw data
if st.checkbox('Show raw data'):
    st.subheader('Raw data')
    st.write(data)

# Draw a histogram
# Add a subheader
st.subheader('Number of pickups by hour')

# Bucket the pickups by hour with numpy
hist_values = np.histogram(data[DATE_COLUMN].dt.hour, bins=24, range=(0, 24))[0]
# Plot the histogram with Streamlit's st.bar_chart()
st.bar_chart(hist_values)

# Filter the results with a slider
hour_to_filter = st.slider('hour', 0, 23, 17)
# Recomputed live on every slider change
filtered_data = data[data[DATE_COLUMN].dt.hour == hour_to_filter]

# Add a subheader for the map
st.subheader('Map of all pickups at %s:00' % hour_to_filter)
# Plot the filtered pickups with st.map()
st.map(filtered_data)

Run it with:

streamlit run demo.py

4. Streamlit Examples for Deep-Learning Projects

4.1. Example 1: Text Generation

4.1.1. Interacting with ChatGLM

from transformers import AutoModel, AutoTokenizer
import streamlit as st
from streamlit_chat import message


st.set_page_config(
    page_title="ChatGLM-6B Demo",
    page_icon=":robot:"
)


@st.cache_resource
def get_model():
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
    model = model.eval()
    return tokenizer, model


MAX_TURNS = 20
MAX_BOXES = MAX_TURNS * 2


def predict(input, max_length, top_p, temperature, history=None):
    tokenizer, model = get_model()
    if history is None:
        history = []

    with container:
        if len(history) > 0:
            if len(history)>MAX_BOXES:
                history = history[-MAX_TURNS:]
            for i, (query, response) in enumerate(history):
                message(query, avatar_style="big-smile", key=str(i) + "_user")
                message(response, avatar_style="bottts", key=str(i))

        message(input, avatar_style="big-smile", key=str(len(history)) + "_user")
        st.write("AI正在回复:")
        with st.empty():
            for response, history in model.stream_chat(tokenizer, input, history, max_length=max_length, top_p=top_p,
                                               temperature=temperature):
                query, response = history[-1]
                st.write(response)

    return history


container = st.container()

# Prompt text area for the user's input
prompt_text = st.text_area(label="User input",
            height=100,
            placeholder="Type your command here")

max_length = st.sidebar.slider(
    'max_length', 0, 4096, 2048, step=1
)
top_p = st.sidebar.slider(
    'top_p', 0.0, 1.0, 0.6, step=0.01
)
temperature = st.sidebar.slider(
    'temperature', 0.0, 1.0, 0.95, step=0.01
)

if 'state' not in st.session_state:
    st.session_state['state'] = []

if st.button("发送", key="predict"):
    with st.spinner("AI正在思考,请稍等........"):
        # text generation
        st.session_state["state"] = predict(prompt_text, max_length, top_p, temperature, st.session_state["state"])

4.1.2. Interacting with OpenAI

from openai import OpenAI
import streamlit as st

with st.sidebar:
    openai_api_key = st.text_input("OpenAI API Key", key="chatbot_api_key", type="password")
    "[Get an OpenAI API key](https://platform.openai.com/account/api-keys)"
    "[View the source code](https://github.com/streamlit/llm-examples/blob/main/Chatbot.py)"
    "[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/streamlit/llm-examples?quickstart=1)"

st.title("💬 Chatbot")
st.caption("🚀 A streamlit chatbot powered by OpenAI LLM")
if "messages" not in st.session_state:
    st.session_state["messages"] = [{"role": "assistant", "content": "How can I help you?"}]

for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input():
    if not openai_api_key:
        st.info("Please add your OpenAI API key to continue.")
        st.stop()

    client = OpenAI(api_key=openai_api_key)
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=st.session_state.messages)
    msg = response.choices[0].message.content
    st.session_state.messages.append({"role": "assistant", "content": msg})
    st.chat_message("assistant").write(msg)

4.2. Image Tasks

4.2.1. Image Classification

import base64

import cv2
import numpy as np
import streamlit as st
from PIL import Image
from tensorflow.keras.applications.vgg19 import preprocess_input
from tensorflow.keras.optimizers import Adam

# NOTE: `model` below is the author's own builder function that returns a
# fine-tuned VGG19; its definition is not shown in this post.
# from my_vgg import model

st.markdown('<h1 style="color:black;">Vgg 19 Image classification model</h1>', unsafe_allow_html=True)
st.markdown('<h2 style="color:gray;">The image classification model classifies images into the following categories:</h2>', unsafe_allow_html=True)
st.markdown('<h3 style="color:gray;"> street, buildings, forest, sea, mountain, glacier</h3>', unsafe_allow_html=True)

# Category order must match the training labels (assumed alphabetical here)
classes = ['buildings', 'forest', 'glacier', 'mountain', 'sea', 'street']

# Set a background image for the Streamlit page

@st.cache_data  # st.cache(allow_output_mutation=True) is deprecated
# Read a file and encode it as base64
def get_base64_of_bin_file(bin_file):
    with open(bin_file, 'rb') as f:
        data = f.read()
    return base64.b64encode(data).decode()

# Inject CSS that uses the encoded image as the page background
def set_png_as_page_bg(png_file):
    bin_str = get_base64_of_bin_file(png_file)
    page_bg_img = '''
    <style>
    .stApp {
    background-image: url("data:image/png;base64,%s");
    background-size: cover;
    background-repeat: no-repeat;
    background-attachment: scroll;
    }
    </style>
    ''' % bin_str

    st.markdown(page_bg_img, unsafe_allow_html=True)
    return

set_png_as_page_bg('/content/background.webp')

# Upload a png/jpg image
upload = st.file_uploader('Insert image for classification', type=['png', 'jpg'])
c1, c2 = st.columns(2)
if upload is not None:
    im = Image.open(upload)
    img = np.asarray(im)
    image = cv2.resize(img, (224, 224))
    img = preprocess_input(image)
    img = np.expand_dims(img, 0)
    c1.header('Input Image')
    c1.image(im)
    c1.write(img.shape)

    # Build the pre-trained model
    # Input size
    input_shape = (224, 224, 3)
    # Optimizer
    optim_1 = Adam(learning_rate=0.0001)
    # Number of classes
    n_classes = 6
    # `model` is the user-defined fine-tuning builder (see note above)
    vgg_model = model(input_shape, n_classes, optim_1, fine_tune=2)
    # Load the fine-tuned weights
    vgg_model.load_weights('/content/drive/MyDrive/vgg/tune_model19.weights.best.hdf5')

    # Predict
    vgg_preds = vgg_model.predict(img)
    vgg_pred_classes = np.argmax(vgg_preds, axis=1)
    c2.header('Output')
    c2.subheader('Predicted class :')
    c2.write(classes[vgg_pred_classes[0]])

4.2.2. Image Generation

import streamlit as st 
from dotenv import load_dotenv
import os 
import openai
from diffusers import StableDiffusionPipeline
import torch

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

# Function to generate images with OpenAI DALL-E
# (uses the legacy openai<1.0 Image API)
def generate_images_using_openai(text):
    response = openai.Image.create(prompt=text, n=1, size="512x512")
    image_url = response['data'][0]['url']
    return image_url


#function to generate AI based images using Huggingface Diffusers
def generate_images_using_huggingface_diffusers(text):
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
    pipe = pipe.to("cuda")
    prompt = text
    image = pipe(prompt).images[0] 
    return image

#Streamlit Code
choice = st.sidebar.selectbox("Select your choice", ["Home", "DALL-E", "Huggingface Diffusers"])

if choice == "Home":
    st.title("AI Image Generation App")
    with st.expander("About the App"):
        st.write("This is a simple image generation app that uses AI to generates images from text prompt.")

elif choice == "DALL-E":
    st.subheader("Image generation using Open AI's DALL-E")
    input_prompt = st.text_input("Enter your text prompt")
    if input_prompt:  # st.text_input returns "", so check for non-empty input
        if st.button("Generate Image"):
            image_url = generate_images_using_openai(input_prompt)
            st.image(image_url, caption="Generated by DALL-E")

elif choice == "Huggingface Diffusers":
    st.subheader("Image generation using Huggingface Diffusers")
    input_prompt = st.text_input("Enter your text prompt")
    if input_prompt:  # check for non-empty input
        if st.button("Generate Image"):
            st.info("Generating image.....")
            image_output = generate_images_using_huggingface_diffusers(input_prompt)
            st.success("Image Generated Successfully")
            st.image(image_output, caption="Generated by Huggingface Diffusers")

4.3. Audio Tasks

4.3.1. Speech Synthesis (TTS)

import torch
import streamlit as st
# Uses coqui-tts; install with: pip install tts
from TTS.api import TTS
import os


device = "cuda" if torch.cuda.is_available() else "cpu"
# Choose a model
model_name = 'tts_models/en/jenny/jenny'
tts = TTS(model_name).to(device)

st.title('Coqui TTS')

# Input text
text_to_speak = st.text_area('Entire article text here:', '')

# Synthesize and play on click
if st.button('Listen'):
    if text_to_speak:

        # Temporary path for the synthesized audio
        temp_audio_path = './temp_audio.wav'
        # Synthesize speech to a file with TTS.tts_to_file()
        tts.tts_to_file(text=text_to_speak, file_path=temp_audio_path)

        # Play the audio
        st.audio(temp_audio_path, format='audio/wav')

        # st.audio reads the file during the call, so it is safe to delete now
        os.unlink(temp_audio_path)


4.3.2. Speech-to-Text (STT)

import logging
import logging.handlers
import queue
import threading
import time
import urllib.request
import os
from collections import deque
from pathlib import Path
from typing import List

import av
import numpy as np
import pydub
import streamlit as st
from twilio.rest import Client

from streamlit_webrtc import WebRtcMode, webrtc_streamer

HERE = Path(__file__).parent

logger = logging.getLogger(__name__)


# This code is based on https://github.com/streamlit/demo-self-driving/blob/230245391f2dda0cb464008195a470751c01770b/streamlit_app.py#L48  # noqa: E501
def download_file(url, download_to: Path, expected_size=None):
    # Don't download the file twice.
    # (If possible, verify the download using the file length.)
    if download_to.exists():
        if expected_size:
            if download_to.stat().st_size == expected_size:
                return
        else:
            st.info(f"{url} is already downloaded.")
            if not st.button("Download again?"):
                return

    download_to.parent.mkdir(parents=True, exist_ok=True)

    # These are handles to two visual elements to animate.
    weights_warning, progress_bar = None, None
    try:
        weights_warning = st.warning("Downloading %s..." % url)
        progress_bar = st.progress(0)
        with open(download_to, "wb") as output_file:
            with urllib.request.urlopen(url) as response:
                length = int(response.info()["Content-Length"])
                counter = 0.0
                MEGABYTES = 2.0 ** 20.0
                while True:
                    data = response.read(8192)
                    if not data:
                        break
                    counter += len(data)
                    output_file.write(data)

                    # We perform animation by overwriting the elements.
                    weights_warning.warning(
                        "Downloading %s... (%6.2f/%6.2f MB)"
                        % (url, counter / MEGABYTES, length / MEGABYTES)
                    )
                    progress_bar.progress(min(counter / length, 1.0))
    # Finally, we remove these visual elements by calling .empty().
    finally:
        if weights_warning is not None:
            weights_warning.empty()
        if progress_bar is not None:
            progress_bar.empty()


# This code is based on https://github.com/whitphx/streamlit-webrtc/blob/c1fe3c783c9e8042ce0c95d789e833233fd82e74/sample_utils/turn.py
@st.cache_data  # type: ignore
def get_ice_servers():
    """Use Twilio's TURN server because Streamlit Community Cloud has changed
    its infrastructure and WebRTC connection cannot be established without TURN server now.  # noqa: E501
    We considered Open Relay Project (https://www.metered.ca/tools/openrelay/) too,
    but it is not stable and hardly works as some people reported like https://github.com/aiortc/aiortc/issues/832#issuecomment-1482420656  # noqa: E501
    See https://github.com/whitphx/streamlit-webrtc/issues/1213
    """

    # Ref: https://www.twilio.com/docs/stun-turn/api
    try:
        account_sid = os.environ["TWILIO_ACCOUNT_SID"]
        auth_token = os.environ["TWILIO_AUTH_TOKEN"]
    except KeyError:
        logger.warning(
            "Twilio credentials are not set. Fallback to a free STUN server from Google."  # noqa: E501
        )
        return [{"urls": ["stun:stun.l.google.com:19302"]}]

    client = Client(account_sid, auth_token)

    token = client.tokens.create()

    return token.ice_servers



def main():
    st.header("Real Time Speech-to-Text")
    st.markdown(
        """
This demo app is using [DeepSpeech](https://github.com/mozilla/DeepSpeech),
an open speech-to-text engine.

A pre-trained model released with
[v0.9.3](https://github.com/mozilla/DeepSpeech/releases/tag/v0.9.3),
trained on American English is being served.
"""
    )

    # https://github.com/mozilla/DeepSpeech/releases/tag/v0.9.3
    MODEL_URL = "https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-models.pbmm"  # noqa
    LANG_MODEL_URL = "https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-models.scorer"  # noqa
    MODEL_LOCAL_PATH = HERE / "models/deepspeech-0.9.3-models.pbmm"
    LANG_MODEL_LOCAL_PATH = HERE / "models/deepspeech-0.9.3-models.scorer"

    download_file(MODEL_URL, MODEL_LOCAL_PATH, expected_size=188915987)
    download_file(LANG_MODEL_URL, LANG_MODEL_LOCAL_PATH, expected_size=953363776)

    lm_alpha = 0.931289039105002
    lm_beta = 1.1834137581510284
    beam = 100

    sound_only_page = "Sound only (sendonly)"
    with_video_page = "With video (sendrecv)"
    app_mode = st.selectbox("Choose the app mode", [sound_only_page, with_video_page])

    if app_mode == sound_only_page:
        app_sst(
            str(MODEL_LOCAL_PATH), str(LANG_MODEL_LOCAL_PATH), lm_alpha, lm_beta, beam
        )
    elif app_mode == with_video_page:
        app_sst_with_video(
            str(MODEL_LOCAL_PATH), str(LANG_MODEL_LOCAL_PATH), lm_alpha, lm_beta, beam
        )


def app_sst(model_path: str, lm_path: str, lm_alpha: float, lm_beta: float, beam: int):
    webrtc_ctx = webrtc_streamer(
        key="speech-to-text",
        mode=WebRtcMode.SENDONLY,
        audio_receiver_size=1024,
        rtc_configuration={"iceServers": get_ice_servers()},
        media_stream_constraints={"video": False, "audio": True},
    )

    status_indicator = st.empty()

    if not webrtc_ctx.state.playing:
        return

    status_indicator.write("Loading...")
    text_output = st.empty()
    stream = None

    while True:
        if webrtc_ctx.audio_receiver:
            if stream is None:
                from deepspeech import Model

                model = Model(model_path)
                model.enableExternalScorer(lm_path)
                model.setScorerAlphaBeta(lm_alpha, lm_beta)
                model.setBeamWidth(beam)

                stream = model.createStream()

                status_indicator.write("Model loaded.")

            sound_chunk = pydub.AudioSegment.empty()
            try:
                audio_frames = webrtc_ctx.audio_receiver.get_frames(timeout=1)
            except queue.Empty:
                time.sleep(0.1)
                status_indicator.write("No frame arrived.")
                continue

            status_indicator.write("Running. Say something!")

            for audio_frame in audio_frames:
                sound = pydub.AudioSegment(
                    data=audio_frame.to_ndarray().tobytes(),
                    sample_width=audio_frame.format.bytes,
                    frame_rate=audio_frame.sample_rate,
                    channels=len(audio_frame.layout.channels),
                )
                sound_chunk += sound

            if len(sound_chunk) > 0:
                sound_chunk = sound_chunk.set_channels(1).set_frame_rate(
                    model.sampleRate()
                )
                buffer = np.array(sound_chunk.get_array_of_samples())
                stream.feedAudioContent(buffer)
                text = stream.intermediateDecode()
                text_output.markdown(f"**Text:** {text}")
        else:
            status_indicator.write("AudioReciver is not set. Abort.")
            break


def app_sst_with_video(
    model_path: str, lm_path: str, lm_alpha: float, lm_beta: float, beam: int
):
    frames_deque_lock = threading.Lock()
    frames_deque: deque = deque([])

    async def queued_audio_frames_callback(
        frames: List[av.AudioFrame],
    ) -> List[av.AudioFrame]:
        with frames_deque_lock:
            frames_deque.extend(frames)

        # Return empty frames to be silent.
        new_frames = []
        for frame in frames:
            input_array = frame.to_ndarray()
            new_frame = av.AudioFrame.from_ndarray(
                np.zeros(input_array.shape, dtype=input_array.dtype),
                layout=frame.layout.name,
            )
            new_frame.sample_rate = frame.sample_rate
            new_frames.append(new_frame)

        return new_frames

    webrtc_ctx = webrtc_streamer(
        key="speech-to-text-w-video",
        mode=WebRtcMode.SENDRECV,
        queued_audio_frames_callback=queued_audio_frames_callback,
        rtc_configuration={"iceServers": get_ice_servers()},
        media_stream_constraints={"video": True, "audio": True},
    )

    status_indicator = st.empty()

    if not webrtc_ctx.state.playing:
        return

    status_indicator.write("Loading...")
    text_output = st.empty()
    stream = None

    while True:
        if webrtc_ctx.state.playing:
            if stream is None:
                from deepspeech import Model

                model = Model(model_path)
                model.enableExternalScorer(lm_path)
                model.setScorerAlphaBeta(lm_alpha, lm_beta)
                model.setBeamWidth(beam)

                stream = model.createStream()

                status_indicator.write("Model loaded.")

            sound_chunk = pydub.AudioSegment.empty()

            audio_frames = []
            with frames_deque_lock:
                while len(frames_deque) > 0:
                    frame = frames_deque.popleft()
                    audio_frames.append(frame)

            if len(audio_frames) == 0:
                time.sleep(0.1)
                status_indicator.write("No frame arrived.")
                continue

            status_indicator.write("Running. Say something!")

            for audio_frame in audio_frames:
                sound = pydub.AudioSegment(
                    data=audio_frame.to_ndarray().tobytes(),
                    sample_width=audio_frame.format.bytes,
                    frame_rate=audio_frame.sample_rate,
                    channels=len(audio_frame.layout.channels),
                )
                sound_chunk += sound

            if len(sound_chunk) > 0:
                sound_chunk = sound_chunk.set_channels(1).set_frame_rate(
                    model.sampleRate()
                )
                buffer = np.array(sound_chunk.get_array_of_samples())
                stream.feedAudioContent(buffer)
                text = stream.intermediateDecode()
                text_output.markdown(f"**Text:** {text}")
        else:
            status_indicator.write("Stopped.")
            break


if __name__ == "__main__":
    # os is already imported at the top of the module
    DEBUG = os.environ.get("DEBUG", "false").lower() not in ["false", "no", "0"]

    logging.basicConfig(
        format="[%(asctime)s] %(levelname)7s from %(name)s in %(pathname)s:%(lineno)d: "
        "%(message)s",
        force=True,
    )

    logger.setLevel(level=logging.DEBUG if DEBUG else logging.INFO)

    st_webrtc_logger = logging.getLogger("streamlit_webrtc")
    st_webrtc_logger.setLevel(logging.DEBUG)

    fsevents_logger = logging.getLogger("fsevents")
    fsevents_logger.setLevel(logging.WARNING)

    main()

