Working with InfluxDB 2.x from Java


This article does not cover installation; it only covers how to operate the buckets (databases) from Java code once InfluxDB is installed.

  • 1. Create the database
  • 2. Add a Telegraf configuration for the bucket
  • 3. Telegraf configuration parameters explained
  • 4. Configure API tokens for database access
  • 5. Operating the database from Java
    • 5.1 yaml
    • 5.2 pom dependency
    • 5.3 config
    • 5.4 controller
    • 5.5 Query methods and result-set extraction

1. Create the database

InfluxDB 2.x ships with a web management UI. Taking a local install as an example, open http://127.0.0.1:8086 in your browser and log in; you will land on the screen below. Just follow the screenshots in order.

[screenshot]

Here, a bucket corresponds to a database.

[screenshot]

Select (configure) the bucket's data retention policy.

[screenshot]
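The same step can also be done from code instead of the UI. Below is a minimal sketch using the Java client's BucketsApi; the URL, token, org ID, bucket name, and retention period are placeholder assumptions, and the everySeconds parameter type has varied (Integer vs. Long) across client versions, so check the signatures against the version you actually use:

import com.influxdb.client.InfluxDBClient;
import com.influxdb.client.InfluxDBClientFactory;
import com.influxdb.client.domain.Bucket;
import com.influxdb.client.domain.BucketRetentionRules;

public class CreateBucketSketch {
    public static void main(String[] args) {
        // Placeholder connection values -- replace with your own.
        InfluxDBClient client = InfluxDBClientFactory.create(
                "http://127.0.0.1:8086", "my-token".toCharArray(), "org");

        // Retention rule: keep data for 30 days (0 would mean keep forever).
        BucketRetentionRules retention = new BucketRetentionRules();
        retention.setEverySeconds(30 * 24 * 60 * 60);

        // "my-org-id" is an assumed placeholder for the organization ID (not the name).
        Bucket bucket = client.getBucketsApi().createBucket("db2", retention, "my-org-id");
        System.out.println("Created bucket: " + bucket.getName());

        client.close();
    }
}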

2. Add a Telegraf configuration for the bucket

[screenshot]

Select the bucket created in step 1.

[screenshot]

In the Source field, typing sys filters the list automatically; just click the matching entry.

[screenshot]

After clicking it, the Create button in the lower-right corner lights up; click it to continue the configuration.

[screenshot]

After entering a name for the configuration file, click Save and Test; the configuration content is generated automatically, so you do not need to write it yourself.

[screenshot]

When this screen appears after saving, creation is complete. Two pieces of configuration are returned: an export INFLUX_TOKEN command and a telegraf --config command.
Click Finish to close the dialog.

[screenshot]

Clicking the configuration file's name opens its contents.

[screenshot]

The generated configuration looks like this:

# Configuration for telegraf agent
[agent]
  ## Default data collection interval for all inputs
  interval = "10s"
  ## Rounds collection interval to 'interval'
  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
  round_interval = true

  ## Telegraf will send metrics to outputs in batches of at most
  ## metric_batch_size metrics.
  ## This controls the size of writes that Telegraf sends to output plugins.
  metric_batch_size = 1000

  ## Maximum number of unwritten metrics per output.  Increasing this value
  ## allows for longer periods of output downtime without dropping metrics at the
  ## cost of higher maximum memory usage.
  metric_buffer_limit = 10000

  ## Collection jitter is used to jitter the collection by a random amount.
  ## Each plugin will sleep for a random time within jitter before collecting.
  ## This can be used to avoid many plugins querying things like sysfs at the
  ## same time, which can have a measurable effect on the system.
  collection_jitter = "0s"

  ## Default flushing interval for all outputs. Maximum flush_interval will be
  ## flush_interval + flush_jitter
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. This is primarily to avoid
  ## large write spikes for users running a large number of telegraf instances.
  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"

  ## By default or when set to "0s", precision will be set to the same
  ## timestamp order as the collection interval, with the maximum being 1s.
  ##   ie, when interval = "10s", precision will be "1s"
  ##       when interval = "250ms", precision will be "1ms"
  ## Precision will NOT be used for service inputs. It is up to each individual
  ## service input to set the timestamp at the appropriate precision.
  ## Valid time units are "ns", "us" (or "µs"), "ms", "s".
  precision = ""

  ## Log at debug level.
  # debug = false
  ## Log only error level messages.
  # quiet = false

  ## Log target controls the destination for logs and can be one of "file",
  ## "stderr" or, on Windows, "eventlog".  When set to "file", the output file
  ## is determined by the "logfile" setting.
  # logtarget = "file"

  ## Name of the file to be logged to when using the "file" logtarget.  If set to
  ## the empty string then logs are written to stderr.
  # logfile = ""

  ## The logfile will be rotated after the time interval specified.  When set
  ## to 0 no time based rotation is performed.  Logs are rotated only when
  ## written to, if there is no log activity rotation may be delayed.
  # logfile_rotation_interval = "0d"

  ## The logfile will be rotated when it becomes larger than the specified
  ## size.  When set to 0 no size based rotation is performed.
  # logfile_rotation_max_size = "0MB"

  ## Maximum number of rotated archives to keep, any older logs are deleted.
  ## If set to -1, no archives are removed.
  # logfile_rotation_max_archives = 5

  ## Pick a timezone to use when logging or type 'local' for local time.
  ## Example: America/Chicago
  # log_with_timezone = ""

  ## Override default hostname, if empty use os.Hostname()
  hostname = ""
  ## If set to true, do not set the "host" tag in the telegraf agent.
  omit_hostname = false
[[outputs.influxdb_v2]]
  ## The URLs of the InfluxDB cluster nodes.
  ##
  ## Multiple URLs can be specified for a single cluster, only ONE of the
  ## urls will be written to each interval.
  ##   ex: urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
  urls = ["http://127.0.0.1:8086"]

  ## Token for authentication.
  token = "$INFLUX_TOKEN"

  ## Organization is the name of the organization you wish to write to; must exist.
  organization = "org"

  ## Destination bucket to write into.
  bucket = "db2"

  ## The value of this tag will be used to determine the bucket.  If this
  ## tag is not set the 'bucket' option is used as the default.
  # bucket_tag = ""

  ## If true, the bucket tag will not be added to the metric.
  # exclude_bucket_tag = false

  ## Timeout for HTTP messages.
  # timeout = "5s"

  ## Additional HTTP headers
  # http_headers = {"X-Special-Header" = "Special-Value"}

  ## HTTP Proxy override, if unset values the standard proxy environment
  ## variables are consulted to determine which proxy, if any, should be used.
  # http_proxy = "http://corporate.proxy:3128"

  ## HTTP User-Agent
  # user_agent = "telegraf"

  ## Content-Encoding for write request body, can be set to "gzip" to
  ## compress body or "identity" to apply no encoding.
  # content_encoding = "gzip"

  ## Enable or disable uint support for writing uints influxdb 2.0.
  # influx_uint_support = false

  ## Optional TLS Config for use on HTTP connections.
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
# Read metrics about system load & uptime
[[inputs.system]]
  # no configuration

3. Telegraf configuration parameters explained

[screenshot: the urls, token, organization, and bucket keys in the outputs.influxdb_v2 section]

These four keys (urls, token, organization, and bucket in the outputs.influxdb_v2 section) correspond to the four properties configured in the Java application's yaml: url, token, org, and bucket.
Note: 2.x versions are accessed through these four properties, no longer through a username and password.
The token deserves a special mention: its value is the INFLUX_TOKEN from the export INFLUX_TOKEN command returned in step 2 after the configuration file was created.

So, with these four properties in hand, can you operate the database now?
No. The material online is scattered and many articles never get past this point; this is exactly where I got stuck, so keep reading.

Testing revealed the problem. Note that this token is the Telegraf configuration's token: it can connect to the database and insert data successfully, but it carries no read permissions. In other words, it can write but do nothing else. Queries fail with: HTTP status code: 404; Message: failed to initialize execute state: could not find bucket "XX".

The application accesses the bucket through the client library's API, and the error is caused by the missing API access-permission configuration, the most important step, which the articles I found online never mention. A real trap.

4. Configure API tokens for database access

[screenshot]

Tick the buckets (and their Telegraf configurations) that need API access; set the remaining permissions to suit your own situation.

[screenshot]
After clicking Create, the generated API token pops up; replace the token value in the yaml configuration file with it.
[screenshot]
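If you prefer, the token can also be created programmatically through the client's AuthorizationsApi. The sketch below is a hypothetical example granting read and write on a single bucket; orgId and bucketId are assumed placeholders, and the resource-type constant names have changed across client versions, so verify them against the version you use:

import java.util.Arrays;
import com.influxdb.client.InfluxDBClient;
import com.influxdb.client.domain.Authorization;
import com.influxdb.client.domain.Permission;
import com.influxdb.client.domain.PermissionResource;

public class CreateTokenSketch {

    // orgId and bucketId are placeholder IDs (not names).
    public static String createReadWriteToken(InfluxDBClient client, String orgId, String bucketId) {
        PermissionResource resource = new PermissionResource();
        resource.setOrgID(orgId);
        resource.setId(bucketId);
        resource.setType(PermissionResource.TypeEnum.BUCKETS); // constant name varies by client version

        Permission read = new Permission();
        read.setResource(resource);
        read.setAction(Permission.ActionEnum.READ);

        Permission write = new Permission();
        write.setResource(resource);
        write.setAction(Permission.ActionEnum.WRITE);

        Authorization auth = client.getAuthorizationsApi()
                .createAuthorization(orgId, Arrays.asList(read, write));
        return auth.getToken();
    }
}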

5. Operating the database from Java

5.1 yaml

# InfluxDB configuration
influx2:
  url: http://127.0.0.1:8086
  token: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX== # use your own token
  org: org
  bucket: db2

5.2 pom dependency

I did not pick the newer client version here because it conflicted with dependencies already in my project; the newer releases provide an API compatible with 2.x and above.
Both the newer and the older client versions can operate a 2.x server, so decide based on your own situation.

        <!--InfluxDB-->
        <dependency>
            <groupId>com.influxdb</groupId>
            <artifactId>influxdb-client-java</artifactId>
            <!--<version>6.9.0</version>-->
            <version>3.0.1</version>
        </dependency>

5.3 config

package net.influx.com.config;


import com.influxdb.LogLevel;
import com.influxdb.client.InfluxDBClient;
import com.influxdb.client.InfluxDBClientFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * @author luo zhuo tao
 * @create 2023/8/29
 */
@Configuration
@ConfigurationProperties(prefix = "influx2")
public class InfluxdbConfig {

    private static final Logger logger = LoggerFactory.getLogger(InfluxdbConfig.class);

    private String url;
    private String token;
    private String org;
    private String bucket;

    @Bean
    public InfluxDBClient influxDBClient(){
        InfluxDBClient influxDBClient = InfluxDBClientFactory.create(url,token.toCharArray(),org,bucket);
        // Optional: set the client log level (requires the com.influxdb.LogLevel import)
        influxDBClient.setLogLevel(LogLevel.BASIC);
        // Note: ping() may not exist on older client versions (they expose health()
        // instead); see the sketch after this class.
        if (influxDBClient.ping()) {
            logger.info("InfluxDB 2.x time-series database --------------------------------------------- connection succeeded!");
        } else {
            logger.error("InfluxDB 2.x time-series database --------------------------------------------- connection failed!");
        }
        return influxDBClient;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public void setToken(String token) {
        this.token = token;
    }

    public void setOrg(String org) {
        this.org = org;
    }

    public void setBucket(String bucket) {
        this.bucket = bucket;
    }
}
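A note on the connectivity check above: depending on the client version, ping() may not be available (older releases expose health() instead). A minimal equivalent check under that assumption:

import com.influxdb.client.InfluxDBClient;
import com.influxdb.client.domain.HealthCheck;

public class HealthCheckSketch {
    // For client versions that expose health() instead of ping().
    public static boolean isUp(InfluxDBClient client) {
        return HealthCheck.StatusEnum.PASS.equals(client.health().getStatus());
    }
}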

5.4 controller

package net.influx.com.controller;

import com.alibaba.fastjson.JSON;
import com.influxdb.client.*;
import com.influxdb.client.domain.InfluxQLQuery;
import com.influxdb.client.domain.WritePrecision;
import com.influxdb.client.write.Point;
import com.influxdb.query.FluxTable;
import com.influxdb.query.InfluxQLQueryResult;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.annotation.Resource;
import java.time.Instant;
import java.time.OffsetDateTime;
import java.util.List;
import java.util.UUID;

/**
 * @author luo zhuo tao
 * @create 2023/8/29
 */

@RestController
@RequestMapping("influxdb")
public class InfluxdbController {

    private static final Logger logger = LoggerFactory.getLogger(InfluxdbController.class);

    @Resource
    private InfluxDBClient influxDBClient;

    @Value("${influx2.org:''}")
    private String org;

    @Value("${influx2.bucket:''}")
    private String bucket;

    private String table = "test1";

    @GetMapping("test")
    public String test() {
        /**
         * Write: WriteApiBlocking is the synchronous write API; WriteApi is the asynchronous one.
         */
        if (false) { // toggle to exercise the write API
            WriteApiBlocking writeApiBlocking = influxDBClient.getWriteApiBlocking();
            Point point = Point
                    .measurement(table)
                    .addField(String.valueOf(System.currentTimeMillis()), UUID.randomUUID().toString())
                    .time(Instant.now(), WritePrecision.NS);
            writeApiBlocking.writePoint(point);
        }

        /**
         * Query: QueryApi is the Flux query API; InfluxQLQueryApi is the InfluxQL (SQL-like) query API.
         */
        if (false) { // toggle to exercise the query API
            InfluxQLQueryApi influxQLQueryApi = influxDBClient.getInfluxQLQueryApi();
            InfluxQLQuery influxQLQuery = new InfluxQLQuery("SELECT * FROM test1", bucket);
            InfluxQLQueryResult query = influxQLQueryApi.query(influxQLQuery);
            logger.info("query:{}", JSON.toJSONString(query));
            findAll();
        }

        /**
         * Delete: note that start and stop are both now(), so this range matches
         * effectively nothing -- widen the time range to actually delete data.
         */
        DeleteApi deleteApi = influxDBClient.getDeleteApi();
        deleteApi.delete(OffsetDateTime.now(), OffsetDateTime.now(), "", bucket, org);
        return "success";
    }




    /**
     * @param measurement measurement (table) name
     */
    public void save(String measurement) {
        WriteOptions writeOptions = WriteOptions.builder()
                .batchSize(5000)
                .flushInterval(1000)
                .bufferLimit(10000)
                .jitterInterval(1000)
                .retryInterval(5000)
                .build();
        try (WriteApi writeApi = influxDBClient.getWriteApi(writeOptions)) {
            Point point = Point
                    .measurement(measurement)
                    .addField("MMSI".concat(UUID.randomUUID().toString()), UUID.randomUUID().toString())
                    .time(Instant.now(), WritePrecision.NS);
            writeApi.writePoint(bucket, org, point);
        }
    }


    public List<FluxTable> findAll() {
        // Note: the bucket name "db3" is hardcoded in this Flux query; make sure it
        // matches the bucket you actually write to (the yaml above uses db2).
        String flux = "from(bucket: \"db3\")\n" +
                "  |> range(start:0)\n" +
                "  |> filter(fn: (r) => r[\"_measurement\"] == \"test1\")\n" +
                "  |> yield(name: \"mean\")";
        QueryApi queryApi = influxDBClient.getQueryApi();
        List<FluxTable> tables = queryApi.query(flux, org);
        logger.info("tables:{}", JSON.toJSONString(tables));
        return tables;
    }
}
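One caveat worth a sketch: WriteApi batches points and flushes them asynchronously, so a failed write does not throw at the call site. The client supports event listeners for feedback; the snippet below assumes the event classes under com.influxdb.client.write.events, which should be verified against your client version:

import com.influxdb.client.InfluxDBClient;
import com.influxdb.client.WriteApi;
import com.influxdb.client.write.events.WriteErrorEvent;
import com.influxdb.client.write.events.WriteSuccessEvent;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class WriteEventsSketch {

    private static final Logger logger = LoggerFactory.getLogger(WriteEventsSketch.class);

    // Registers listeners so asynchronous write successes and failures are logged.
    public static WriteApi withLogging(InfluxDBClient client) {
        WriteApi writeApi = client.getWriteApi();
        writeApi.listenEvents(WriteSuccessEvent.class,
                event -> logger.info("written: {}", event.getLineProtocol()));
        writeApi.listenEvents(WriteErrorEvent.class,
                event -> logger.error("write failed", event.getThrowable()));
        return writeApi;
    }
}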

5.5 Query methods and result-set extraction

Two query styles are used here: one queries directly by key (field), the other by time range. For the specifics, study the Flux syntax yourself; it is not covered in detail here.

package net.superlucy.departure.monitor.app.service.impl;

import cn.hutool.core.collection.CollectionUtil;
import com.influxdb.client.InfluxDBClient;
import com.influxdb.client.QueryApi;
import com.influxdb.client.WriteApi;
import com.influxdb.client.WriteOptions;
import com.influxdb.client.domain.WritePrecision;
import com.influxdb.client.write.Point;
import com.influxdb.query.FluxRecord;
import com.influxdb.query.FluxTable;
import net.superlucy.departure.monitor.app.service.InfluxdbService;
import net.superlucy.departure.monitor.app.util.CommonUtil;
import net.superlucy.departure.monitor.dto.enums.InfluxdbEnum;
import net.superlucy.departure.monitor.dto.model.DepartureShipPosition;
import org.apache.commons.compress.utils.Lists;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

import javax.annotation.Resource;
import java.time.Instant;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * @author luo zhuo tao
 * @create 2023/9/4
 */
@Service
public class InfluxdbServiceImpl implements InfluxdbService {

    private static final Logger logger = LoggerFactory.getLogger(InfluxdbServiceImpl.class);

    /**
     * Flux query by MMSI number: returns a single record.
     */
    private String queryValueFluxOne = "from(bucket: \"%s\") " +
            "|> range(start: %s) " +
            "|> filter(fn: (r) => r._measurement == \"%s\" and r._field == \"%s\")";

    /**
     * Flux query by time range: returns multiple records.
     */
    private String queryValueFluxTwo = "from(bucket: \"%s\") " +
            "|> range(start: %s) " +
            "|> filter(fn: (r) => r._measurement == \"%s\")";

    @Resource
    private InfluxDBClient influxDBClient;

    @Value("${influx2.org:''}")
    private String org;

    @Value("${influx2.bucket:''}")
    private String bucket;


    @Override
    public Map<String, Object> findOne(InfluxdbEnum influxdbEnum, String mmsi) {
        String flux = String.format(queryValueFluxOne, bucket, 0, influxdbEnum.getValue(), mmsi);
        QueryApi queryApi = influxDBClient.getQueryApi();
        List<FluxTable> tables = queryApi.query(flux, org);
        return qryVal(tables);
    }

    public Map<String, Object> qryVal(List<FluxTable> tables) {
        Map<String, Object> map = new HashMap<>();
        if (CollectionUtil.isNotEmpty(tables)) {
            for (FluxTable table : tables) {
                List<FluxRecord> records = table.getRecords();
                for (FluxRecord fluxRecord : records) {
                    map.put("value", fluxRecord.getValue().toString());
                    map.put("field", fluxRecord.getField());
                    map.put("valueTime", Date.from(fluxRecord.getTime()));
                }
            }
        }
        return map;
    }

    @Override
    public List<Map<String, Object>> findList(InfluxdbEnum influxdbEnum, String date) {
        String flux = String.format(queryValueFluxTwo, bucket, date, influxdbEnum.getValue());
        QueryApi queryApi = influxDBClient.getQueryApi();
        List<FluxTable> tables = queryApi.query(flux, org);
        return qryValList(tables);
    }

    @Override
    public Map<String, DepartureShipPosition> getDynamicList(InfluxdbEnum influxdbEnum, String date) {
        String flux = String.format(queryValueFluxTwo, bucket, date, influxdbEnum.getValue());
        QueryApi queryApi = influxDBClient.getQueryApi();
        List<FluxTable> tables = queryApi.query(flux, org);
        return dynamicList(tables);
    }

    /**
     * Returns the latest position of every ship.
     *
     * @param tables Flux query result tables
     * @return map of positions keyed by MMSI
     */
    private Map<String, DepartureShipPosition> dynamicList(List<FluxTable> tables) {
        Map<String, DepartureShipPosition> map = new HashMap<>();
        if (CollectionUtil.isNotEmpty(tables)) {
            for (FluxTable table : tables) {
                List<FluxRecord> records = table.getRecords();
                // Querying purely by time range returns multiple records for the same
                // field; only the latest is needed here. Records are ordered from
                // oldest to newest, so take the last one.
                FluxRecord fluxRecord = records.get(records.size() - 1);
                DepartureShipPosition position = new DepartureShipPosition();
                String mmsi = fluxRecord.getField();
                String value = fluxRecord.getValue().toString();
                /**
                 * The dynamic-format conversion below is my own business logic; ignore it.
                 * getField() and getValue() above already return the stored data;
                 * what you do with them depends on your own requirements.
                 */
                // dynamic format conversion
                DepartureShipPosition dynamic = CommonUtil.dynamic(position, value);
                map.put(mmsi, dynamic);
            }
        }
        return map;
    }


    /**
     * Extracts the latest record of each result table into a map.
     *
     * @param tables Flux query result tables
     * @return one map per table with value, field, and valueTime entries
     */
    public List<Map<String, Object>> qryValList(List<FluxTable> tables) {
        List<Map<String, Object>> mapList = Lists.newArrayList();
        if (CollectionUtil.isNotEmpty(tables)) {
            for (FluxTable table : tables) {
                List<FluxRecord> records = table.getRecords();
                // Same as above: querying by time range returns multiple records per
                // field; records are ordered oldest to newest, so take the last one.
                FluxRecord fluxRecord = records.get(records.size() - 1);
                Map<String, Object> map = new HashMap<>(3);
                map.put("value", fluxRecord.getValue().toString());
                map.put("field", fluxRecord.getField());
                map.put("valueTime", Date.from(fluxRecord.getTime()));
                mapList.add(map);
            }
        }
        return mapList;
    }


    /**
     * @param measurement measurement (table) name
     * @param k           MMSI number
     * @param v           AIS data
     */
    @Override
    public void save(String measurement, String k, String v) {
        WriteOptions writeOptions = WriteOptions.builder()
                .batchSize(5000)
                .flushInterval(1000)
                .bufferLimit(10000)
                .jitterInterval(1000)
                .retryInterval(5000)
                .build();
        try (WriteApi writeApi = influxDBClient.getWriteApi(writeOptions)) {
            Point point = Point
                    .measurement(measurement)
                    .addField(k, v)
                    .time(Instant.now(), WritePrecision.NS);
            writeApi.writePoint(bucket, org, point);
        }
    }
}
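For completeness, a hypothetical caller showing how the pieces fit together (the measurement name, the InfluxdbEnum constant, and the MMSI/AIS values below are invented for illustration, not part of the original project):

// Hypothetical usage: write one AIS record, then read it back by MMSI.
influxdbService.save("ship_position", "413000000", "117.07,36.65,12.3");
Map<String, Object> latest = influxdbService.findOne(InfluxdbEnum.SHIP_POSITION, "413000000");
logger.info("latest position record: {}", latest);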
