尚硅谷 Big Data Technology - Kafka Video Tutorial - Notes 01 [Kafka Introduction]


Video: 【尚硅谷】Kafka 3.x Tutorial (from introduction to tuning, comprehensive and in-depth), on 哔哩哔哩 (bilibili)

  1. 尚硅谷 Big Data Technology - Kafka Video Tutorial - Notes 01 [Kafka Introduction]
  2. 尚硅谷 Big Data Technology - Kafka Video Tutorial - Notes 02 [Kafka External System Integration]
  3. 尚硅谷 Big Data Technology - Kafka Video Tutorial - Notes 03 [Kafka Production Tuning Manual]
  4. 尚硅谷 Big Data Technology - Kafka Video Tutorial - Notes 04 [Kafka Source Code Analysis]

Table of Contents

01_尚硅谷 Big Data Technology: Kafka

Chapter 1 Kafka Overview

p001

p002

p003

p004

p005

Chapter 2 Kafka Quick Start

p006

p007

p008

p009

Chapter 3 Kafka Producer

p010

p011

p012

p013

p014

Chapter 4 Kafka Broker

Chapter 5 Kafka Consumer

Chapter 6 Kafka-Eagle Monitoring

Chapter 7 Kafka-KRaft Mode


01_尚硅谷 Big Data Technology: Kafka

Chapter 1 Kafka Overview

p001

p002

p003

  1. Flume: monitors the data files in real time; every new log record is picked up and forwarded to the Hadoop cluster.
  2. Kafka: buffers the data when the volume is too large to be consumed directly.
  1. Synchronous processing: handle everything on the spot, step by step, before returning.
  2. Asynchronous processing: handle the core transaction first and defer the rest.

p004

Two modes of message queues:

  1. Point-to-point mode:
    1. only a single topic of data is produced;
    2. a message is deleted once it has been consumed.
  2. Publish/subscribe mode:
    1. there can be data for multiple topics;
    2. messages are not deleted after being consumed;
    3. multiple consumers are independent of one another (see the sketch below).
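
Consumer independence in the publish/subscribe mode is easy to see once the cluster from Chapter 2 is running: two console consumers placed in different consumer groups each receive every message of the topic. A minimal sketch (the group names g1 and g2 are illustrative assumptions):

[atguigu@node002 kafka_2.12-3.0.0]$ bin/kafka-console-consumer.sh --bootstrap-server node001:9092 --topic first --group g1
[atguigu@node003 kafka_2.12-3.0.0]$ bin/kafka-console-consumer.sh --bootstrap-server node001:9092 --topic first --group g2

Anything a producer writes to the first topic now shows up in both terminals, because each consumer group keeps its own offsets.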

p005

  1. ZooKeeper: a portion of Kafka's metadata is stored in ZooKeeper; ZooKeeper records which broker nodes are up and running and which replica is the leader.
  2. Kafka: data is stored in partitions.

Chapter 2 Kafka Quick Start

p006

  1. Apache Kafka official site: https://kafka.apache.org
  2. Apache Kafka downloads: https://kafka.apache.org/downloads

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
#zookeeper.connect=localhost:2181
zookeeper.connect=node001:2181,node002:2181,node003:2181/kafka

ZooKeeper stores data as a directory tree. Without the node003:2181/kafka chroot suffix, Kafka's znodes would be scattered across ZooKeeper's root directory; deregistering or deleting the Kafka cluster would then mean removing them one by one, which makes later administration painful. With the /kafka chroot, every Kafka znode lives under a single root.
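
With the chroot in place, everything Kafka registers can be inspected under the single /kafka znode. A minimal sketch using the ZooKeeper CLI (the paths follow Kafka's standard znode layout; the output shown is illustrative):

[atguigu@node001 ~]$ /opt/module/zookeeper/zookeeper-3.5.7/bin/zkCli.sh -server node001:2181
[zk: node001:2181(CONNECTED) 0] ls /kafka
[admin, brokers, cluster, config, consumers, controller_epoch, ...]
[zk: node001:2181(CONNECTED) 1] ls /kafka/brokers/ids
[0, 1, 2]
[zk: node001:2181(CONNECTED) 2] get /kafka/controller
{"version":1,"brokerid":0,"timestamp":"..."}

/kafka/brokers/ids lists the brokers currently alive, and /kafka/controller records which broker was elected controller — the "who is leader" bookkeeping mentioned in Chapter 1.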

[atguigu@node001 ~]$ vim /opt/module/kafka/kafka_2.12-3.0.0/config/server.properties 
[atguigu@node001 ~]$ sudo vim /etc/profile.d/my_env.sh
[atguigu@node001 ~]$ source /etc/profile
[atguigu@node001 ~]$ sudo /home/atguigu/bin/xsync /etc/profile.d/my_env.sh
==================== node001 ====================
sending incremental file list

sent 47 bytes  received 12 bytes  39.33 bytes/sec
total size is 1,201  speedup is 20.36
==================== node002 ====================
sending incremental file list
my_env.sh

sent 599 bytes  received 47 bytes  1,292.00 bytes/sec
total size is 1,201  speedup is 1.86
==================== node003 ====================
sending incremental file list
my_env.sh

sent 599 bytes  received 47 bytes  1,292.00 bytes/sec
total size is 1,201  speedup is 1.86
[atguigu@node001 ~]$ 
[atguigu@node001 ~]$ zookeeper.sh start
---------- zookeeper node001 starting ----------
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
---------- zookeeper node002 starting ----------
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
---------- zookeeper node003 starting ----------
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[atguigu@node001 ~]$ 
[atguigu@node001 ~]$ 
[atguigu@node001 ~]$ xcall jps
=============== node001 ===============
4291 QuorumPeerMain
4346 Jps
=============== node002 ===============
3570 QuorumPeerMain
3630 Jps
=============== node003 ===============
3426 QuorumPeerMain
3478 Jps
[atguigu@node001 ~]$ cd /opt/module/kafka/kafka_2.12-3.0.0/
[atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-server-start.sh 
USAGE: bin/kafka-server-start.sh [-daemon] server.properties [--override property=value]*
[atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-server-start.sh -daemon config/server.properties
[atguigu@node001 kafka_2.12-3.0.0]$ jpsall 
================ node001 ================
4817 Jps
4291 QuorumPeerMain
4756 Kafka
================ node002 ================
3570 QuorumPeerMain
3724 Jps
================ node003 ================
3426 QuorumPeerMain
3564 Jps
[atguigu@node001 kafka_2.12-3.0.0]$ 
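
The contents of the my_env.sh edited above are not shown in these notes; a minimal sketch of what it presumably contains, assuming KAFKA_HOME points at the install directory used throughout:

# /etc/profile.d/my_env.sh (assumed content, not shown in the original notes)
# KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka/kafka_2.12-3.0.0
export PATH=$PATH:$KAFKA_HOME/bin

After source /etc/profile, the Kafka scripts can be invoked from any directory.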

p007

#!/bin/bash

# Start/stop/status helper for the three-node Kafka cluster
case $1 in
"start"){
	for i in node001 node002 node003
	do
		echo "--------------- $i Kafka starting ---------------"
		# Launch the broker as a background daemon with the cluster config
		ssh $i "/opt/module/kafka/kafka_2.12-3.0.0/bin/kafka-server-start.sh -daemon /opt/module/kafka/kafka_2.12-3.0.0/config/server.properties"
	done
};;
"stop"){
	for i in node001 node002 node003
	do
		echo "--------------- $i Kafka stopping ---------------"
		ssh $i "/opt/module/kafka/kafka_2.12-3.0.0/bin/kafka-server-stop.sh"
	done
};;
"status"){
	for i in node001 node002 node003
	do
		echo "--------------- $i Kafka status ---------------"
		# Listing topics only succeeds if the broker is up, so it doubles as a liveness check
		ssh $i "/opt/module/kafka/kafka_2.12-3.0.0/bin/kafka-topics.sh --bootstrap-server $i:9092 --list"
	done
};;
esac
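
Assuming the script is saved as kf.sh under ~/bin (the name is a course convention, an assumption here) and distributed to the nodes, usage looks like:

[atguigu@node001 ~]$ chmod +x ~/bin/kf.sh
[atguigu@node001 ~]$ kf.sh start     # start brokers on all three nodes
[atguigu@node001 ~]$ kf.sh status    # list topics per node as a health check
[atguigu@node001 ~]$ kf.sh stop      # stop all brokers

One caution on ordering: ZooKeeper must stay up until every broker has finished shutting down, so stop Kafka first and ZooKeeper last.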

p008

2.2 Kafka Command-Line Operations

[atguigu@node001 kafka_2.12-3.0.0]$ pwd
/opt/module/kafka/kafka_2.12-3.0.0
[atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-topics.sh # show the options of the topic command
Create, delete, describe, or change a topic.
Option                                   Description                            
------                                   -----------                            
--alter                                  Alter the number of partitions,        
                                           replica assignment, and/or           
                                           configuration for the topic.         
--at-min-isr-partitions                  if set when describing topics, only    
                                           show partitions whose isr count is   
                                           equal to the configured minimum.     
--bootstrap-server <String: server to    REQUIRED: The Kafka server to connect  
  connect to>                              to.                                  
--command-config <String: command        Property file containing configs to be 
  config property file>                    passed to Admin Client. This is used 
                                           only with --bootstrap-server option  
                                           for describing and altering broker   
                                           configs.                             
--config <String: name=value>            A topic configuration override for the 
                                           topic being created or altered. The  
                                           following is a list of valid         
                                           configurations:                      
                                                cleanup.policy                        
                                                compression.type                      
                                                delete.retention.ms                   
                                                file.delete.delay.ms                  
                                                flush.messages                        
                                                flush.ms                              
                                                follower.replication.throttled.       
                                           replicas                             
                                                index.interval.bytes                  
                                                leader.replication.throttled.replicas 
                                                local.retention.bytes                 
                                                local.retention.ms                    
                                                max.compaction.lag.ms                 
                                                max.message.bytes                     
                                                message.downconversion.enable         
                                                message.format.version                
                                                message.timestamp.difference.max.ms   
                                                message.timestamp.type                
                                                min.cleanable.dirty.ratio             
                                                min.compaction.lag.ms                 
                                                min.insync.replicas                   
                                                preallocate                           
                                                remote.storage.enable                 
                                                retention.bytes                       
                                                retention.ms                          
                                                segment.bytes                         
                                                segment.index.bytes                   
                                                segment.jitter.ms                     
                                                segment.ms                            
                                                unclean.leader.election.enable        
                                         See the Kafka documentation for full   
                                           details on the topic configs. It is  
                                           supported only in combination with --
                                           create if --bootstrap-server option  
                                           is used (the kafka-configs CLI       
                                           supports altering topic configs with 
                                           a --bootstrap-server option).        
--create                                 Create a new topic.                    
--delete                                 Delete a topic                         
--delete-config <String: name>           A topic configuration override to be   
                                           removed for an existing topic (see   
                                           the list of configurations under the 
                                           --config option). Not supported with 
                                           the --bootstrap-server option.       
--describe                               List details for the given topics.     
--disable-rack-aware                     Disable rack aware replica assignment  
--exclude-internal                       exclude internal topics when running   
                                           list or describe command. The        
                                           internal topics will be listed by    
                                           default                              
--help                                   Print usage information.               
--if-exists                              if set when altering or deleting or    
                                           describing topics, the action will   
                                           only execute if the topic exists.    
--if-not-exists                          if set when creating topics, the       
                                           action will only execute if the      
                                           topic does not already exist.        
--list                                   List all available topics.             
--partitions <Integer: # of partitions>  The number of partitions for the topic 
                                           being created or altered (WARNING:   
                                           If partitions are increased for a    
                                           topic that has a key, the partition  
                                           logic or ordering of the messages    
                                           will be affected). If not supplied   
                                           for create, defaults to the cluster  
                                           default.                             
--replica-assignment <String:            A list of manual partition-to-broker   
  broker_id_for_part1_replica1 :           assignments for the topic being      
  broker_id_for_part1_replica2 ,           created or altered.                  
  broker_id_for_part2_replica1 :                                                
  broker_id_for_part2_replica2 , ...>                                           
--replication-factor <Integer:           The replication factor for each        
  replication factor>                      partition in the topic being         
                                           created. If not supplied, defaults   
                                           to the cluster default.              
--topic <String: topic>                  The topic to create, alter, describe   
                                           or delete. It also accepts a regular 
                                           expression, except for --create      
                                           option. Put topic name in double     
                                           quotes and use the '\' prefix to     
                                           escape regular expression symbols; e.
                                           g. "test\.topic".                    
--topics-with-overrides                  if set when describing topics, only    
                                           show topics that have overridden     
                                           configs                              
--unavailable-partitions                 if set when describing topics, only    
                                           show partitions whose leader is not  
                                           available                            
--under-min-isr-partitions               if set when describing topics, only    
                                           show partitions whose isr count is   
                                           less than the configured minimum.    
--under-replicated-partitions            if set when describing topics, only    
                                           show under replicated partitions     
--version                                Display Kafka version.                 
[atguigu@node001 kafka_2.12-3.0.0]$ 
The two options needed in almost every topic command:

--bootstrap-server <String: server to    REQUIRED: The Kafka server to connect  
  connect to>                              to.

--topic <String: topic>                  The topic to create, alter, describe   
                                           or delete. It also accepts a regular 
                                           expression, except for --create      
                                           option. Put topic name in double     
                                           quotes and use the '\' prefix to     
                                           escape regular expression symbols; e.
                                           g. "test\.topic".
  1. [atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server node001:9092 --list
  2. [atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server node001:9092 --topic first --create --partitions 1 --replication-factor 3 # create the first topic with one partition and three replicas
  3. [atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server node001:9092 --topic first01 --describe
  4. [atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server node001:9092 --topic first01 --alter --partitions 3
  5. [atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server node001:9092 --topic first01 --describe
  6. [atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server node001:9092 --topic first01 --alter --partitions 1 # fails: the partition count can only be increased, never decreased!
  7. [atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server node001:9092 --topic first01 --alter --replication-factor 2 # fails: the replication factor cannot be changed with kafka-topics.sh (see the reassignment sketch below)
[atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server node001:9092 --list
__consumer_offsets
__transaction_state
action_topic
appVideo_topic
display_topic
dwd_examination_test_paper
dwd_examination_test_question
dwd_interaction_comment
dwd_interaction_favor_add
dwd_interaction_review
dwd_learn_play
dwd_trade_cart_add
dwd_trade_order_detail
dwd_trade_pay_suc_detail
dwd_traffic_action_log
dwd_traffic_display_log
dwd_traffic_error_log
dwd_traffic_page_log
dwd_traffic_play_pre_process
dwd_traffic_start_log
dwd_traffic_unique_visitor_detail
dwd_traffic_user_jump_detail
dwd_user_user_login
dwd_user_user_register
error_topic
first
maxwell
nifi
nifiOutput
page_topic
start_topic
topic_db
topic_log
[atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server node001:9092 --topic first --create --partitions 1 --replication-factor 3
Error while executing topic command : Topic 'first' already exists.
[2024-03-04 16:59:58,015] ERROR org.apache.kafka.common.errors.TopicExistsException: Topic 'first' already exists.
 (kafka.admin.TopicCommand$)
[atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server node001:9092 --topic first01 --create --partitions 1 --replication-factor 3
Created topic first01.
[atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server node001:9092 --topic first01 --describe
Topic: first01  TopicId: 8_ayAUYdRbODZCeFMBE8Cg PartitionCount: 1       ReplicationFactor: 3    Configs: segment.bytes=1073741824
        Topic: first01  Partition: 0    Leader: 2       Replicas: 2,1,0 Isr: 2,1,0
[atguigu@node001 kafka_2.12-3.0.0]$ 
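
Command 7 in the list above fails because kafka-topics.sh cannot alter the replication factor. The supported route is a manual reassignment with kafka-reassign-partitions.sh; a hedged sketch (the JSON file name and the per-partition broker lists are illustrative assumptions):

[atguigu@node001 kafka_2.12-3.0.0]$ vim increase-replication-factor.json
{
  "version": 1,
  "partitions": [
    {"topic": "first01", "partition": 0, "replicas": [2, 1]},
    {"topic": "first01", "partition": 1, "replicas": [1, 0]},
    {"topic": "first01", "partition": 2, "replicas": [0, 2]}
  ]
}
[atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-reassign-partitions.sh --bootstrap-server node001:9092 --reassignment-json-file increase-replication-factor.json --execute

Each partition ends up with exactly the replica list given in the file, which is how a replication factor of 2 would actually be achieved.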

p009

[atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server node001:9092 --topic first --create --partitions 1 --replication-factor 3 # create the first topic with one partition and three replicas

Then start a producer and send data to the first topic:

[atguigu@node001 ~]$ cd /opt/module/kafka/kafka_2.12-3.0.0/bin
[atguigu@node001 bin]$ ./kafka-console-producer.sh
Missing required option(s) [bootstrap-server]
Option                                   Description                            
------                                   -----------                            
--batch-size <Integer: size>             Number of messages to send in a single 
                                           batch if they are not being sent     
                                           synchronously. (default: 200)        
--bootstrap-server <String: server to    REQUIRED unless --broker-list          
  connect to>                              (deprecated) is specified. The server
                                           (s) to connect to. The broker list   
                                           string in the form HOST1:PORT1,HOST2:
                                           PORT2.                               
--broker-list <String: broker-list>      DEPRECATED, use --bootstrap-server     
                                           instead; ignored if --bootstrap-     
                                           server is specified.  The broker     
                                           list string in the form HOST1:PORT1, 
                                           HOST2:PORT2.                         
--compression-codec [String:             The compression codec: either 'none',  
  compression-codec]                       'gzip', 'snappy', 'lz4', or 'zstd'.  
                                           If specified without value, then it  
                                           defaults to 'gzip'                   
--help                                   Print usage information.               
--line-reader <String: reader_class>     The class name of the class to use for 
                                           reading lines from standard in. By   
                                           default each line is read as a       
                                           separate message. (default: kafka.   
                                           tools.                               
                                           ConsoleProducer$LineMessageReader)   
--max-block-ms <Long: max block on       The max time that the producer will    
  send>                                    block for during a send request      
                                           (default: 60000)                     
--max-memory-bytes <Long: total memory   The total memory used by the producer  
  in bytes>                                to buffer records waiting to be sent 
                                           to the server. (default: 33554432)   
--max-partition-memory-bytes <Long:      The buffer size allocated for a        
  memory in bytes per partition>           partition. When records are received 
                                           which are smaller than this size the 
                                           producer will attempt to             
                                           optimistically group them together   
                                           until this size is reached.          
                                           (default: 16384)                     
--message-send-max-retries <Integer>     Brokers can fail receiving the message 
                                           for multiple reasons, and being      
                                           unavailable transiently is just one  
                                           of them. This property specifies the 
                                           number of retries before the         
                                           producer give up and drop this       
                                           message. (default: 3)                
--metadata-expiry-ms <Long: metadata     The period of time in milliseconds     
  expiration interval>                     after which we force a refresh of    
                                           metadata even if we haven't seen any 
                                           leadership changes. (default: 300000)
--producer-property <String:             A mechanism to pass user-defined       
  producer_prop>                           properties in the form key=value to  
                                           the producer.                        
--producer.config <String: config file>  Producer config properties file. Note  
                                           that [producer-property] takes       
                                           precedence over this config.         
--property <String: prop>                A mechanism to pass user-defined       
                                           properties in the form key=value to  
                                           the message reader. This allows      
                                           custom configuration for a user-     
                                           defined message reader. Default      
                                           properties include:                  
                                                parse.key=true|false                  
                                                key.separator=<key.separator>         
                                                ignore.error=true|false               
--request-required-acks <String:         The required acks of the producer      
  request required acks>                   requests (default: 1)                
--request-timeout-ms <Integer: request   The ack timeout of the producer        
  timeout ms>                              requests. Value must be non-negative 
                                           and non-zero (default: 1500)         
--retry-backoff-ms <Integer>             Before each retry, the producer        
                                           refreshes the metadata of relevant   
                                           topics. Since leader election takes  
                                           a bit of time, this property         
                                           specifies the amount of time that    
                                           the producer waits before refreshing 
                                           the metadata. (default: 100)         
--socket-buffer-size <Integer: size>     The size of the tcp RECV size.         
                                           (default: 102400)                    
--sync                                   If set message send requests to the    
                                           brokers are synchronously, one at a  
                                           time as they arrive.                 
--timeout <Integer: timeout_ms>          If set and the producer is running in  
                                           asynchronous mode, this gives the    
                                           maximum amount of time a message     
                                           will queue awaiting sufficient batch 
                                           size. The value is given in ms.      
                                           (default: 1000)                      
--topic <String: topic>                  REQUIRED: The topic id to produce      
                                           messages to.                         
--version                                Display Kafka version.                 
[atguigu@node001 bin]$ 
[atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-console-producer.sh --bootstrap-server node001:9092 --topic first01 # producer
>hello
>123
[atguigu@node002 kafka_2.12-3.0.0]$ bin/kafka-console-consumer.sh --bootstrap-server node001:9092 --topic first01 # consumer
hello
123
--------------------------------------------------
[atguigu@node002 kafka_2.12-3.0.0]$ bin/kafka-console-consumer.sh --bootstrap-server node001:9092 --topic first01 --from-beginning # --from-beginning reads everything in the topic from the start, including historical data
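
The --property defaults listed in the producer help above (parse.key, key.separator) allow keyed messages from the console. A minimal sketch (print.key on the consumer side is the standard console-consumer counterpart, assumed here):

[atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-console-producer.sh --bootstrap-server node001:9092 --topic first01 --property parse.key=true --property key.separator=:
>user1:hello with a key
[atguigu@node002 kafka_2.12-3.0.0]$ bin/kafka-console-consumer.sh --bootstrap-server node001:9092 --topic first01 --property print.key=true
user1   hello with a key

Keys matter later on: records with the same key always land in the same partition.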

Chapter 3 Kafka Producer

p010

Kafka consists of three parts: producers, brokers, and consumers.

3.1.1 How Sending Works

Message sending involves two threads: the main thread and the Sender thread. The main thread creates a double-ended queue, the RecordAccumulator, and writes messages into it; the Sender thread continuously pulls batches from the RecordAccumulator and ships them to the Kafka brokers.
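
Batching inside the RecordAccumulator is governed by the standard producer configs batch.size (bytes per batch, default 16384) and linger.ms (how long the Sender waits for a batch to fill, default 0), while the accumulator's total buffer corresponds to buffer.memory (default 33554432 bytes, the same figure shown as the --max-memory-bytes default in the producer help above). These can be tried from the console producer via --producer-property; a minimal sketch with illustrative values:

[atguigu@node001 kafka_2.12-3.0.0]$ bin/kafka-console-producer.sh --bootstrap-server node001:9092 --topic first --producer-property batch.size=32768 --producer-property linger.ms=5

With linger.ms=5, the Sender waits up to 5 ms for more records before shipping a partially filled batch.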

p011

3.2 Asynchronous Send API

3.2.1 Plain Asynchronous Send

p012

p013

p014

Chapter 4 Kafka Broker

Chapter 5 Kafka Consumer

Chapter 6 Kafka-Eagle Monitoring

Chapter 7 Kafka-KRaft Mode
