SeaweedFS (master / volume / filer) docker run parameter reference

2025/2/23 12:15:06

Table of Contents

  • Obtained by running the commands inside the container
    • weed -h
    • weed server -h
    • weed volume -h
    • Key points
    • Testing note: `-volume.minFreeSpace string` is fairly aggressive; for example, a value of 10 (10%) leaves the system only 10% free space and claims all the rest up front
    • Trying to cap the volume count with just `-volume.max string` (each volume seems to be about 1 GB)

Obtained by running the commands inside the container
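The help text below was captured by opening a shell inside the running container and invoking the built-in help. The container name `seaweedfs` here is an assumption; substitute your own:

```shell
# Open a shell in the running SeaweedFS container (container name is an assumption),
# then print the top-level help:
docker exec -it seaweedfs /bin/sh
weed -h
```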

weed -h


/data # weed

SeaweedFS: store billions of files and serve them fast!

Usage:

        weed command [arguments]

The commands are:

    autocomplete install autocomplete
    autocomplete.uninstall uninstall autocomplete
    backup      incrementally backup a volume to local folder
    benchmark   benchmark by writing millions of files and reading them out
    compact     run weed tool compact on volume file
    download    download files by file id
    export      list or export files from one volume data file
    filer       start a file server that points to a master server, or a list of master servers
    filer.backup resume-able continuously replicate files from a SeaweedFS cluster to another location defined in replication.toml
    filer.cat   copy one file to local
    filer.copy  copy one or a list of files to a filer folder
    filer.meta.backup continuously backup filer meta data changes to anther filer store specified in a backup_filer.toml
    filer.meta.tail see continuous changes on a filer
    filer.remote.gateway resumable continuously write back bucket creation, deletion, and other local updates to remote object store
    filer.remote.sync resumable continuously write back updates to remote storage
    filer.replicate replicate file changes to another destination
    filer.sync  resumable continuous synchronization between two active-active or active-passive SeaweedFS clusters
    fix         run weed tool fix on files or whole folders to recreate index file(s) if corrupted
    fuse        Allow use weed with linux's mount command
    iam         start a iam API compatible server
    master      start a master server
    master.follower start a master follower
    mount       mount weed filer to a directory as file system in userspace(FUSE)
    mq.broker   <WIP> start a message queue broker
    s3          start a s3 API compatible server that is backed by a filer
    scaffold    generate basic configuration files
    server      start a master server, a volume server, and optionally a filer and a S3 gateway
    shell       run interactive administrative commands
    update      get latest or specific version from https://github.com/seaweedfs/seaweedfs
    upload      upload one or a list of files
    version     print SeaweedFS version
    volume      start a volume server
    webdav      start a webdav server that is backed by a filer

Use "weed help [command]" for more information about a command.

For Logging, use "weed [logging_options] [command]". The logging options are:
  -alsologtostderr
        log to standard error as well as files (default true)
  -config_dir value
        directory with toml configuration files
  -log_backtrace_at value
        when logging hits line file:N, emit a stack trace
  -logdir string
        If non-empty, write log files in this directory
  -logtostderr
        log to standard error instead of files
  -options string
        a file of command line options, each line in optionName=optionValue format
  -stderrthreshold value
        logs at or above this threshold go to stderr
  -v value
        log levels [0|1|2|3|4], default to 0
  -vmodule value
        comma-separated list of pattern=N settings for file-filtered logging
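As the help text notes, the logging options go before the subcommand, not after it. A typical invocation might look like the following sketch (the flag values are illustrative):

```shell
# Verbose logging (level 2) to stderr while running the combined server:
weed -logtostderr -v 2 server -dir=/data
```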


weed server -h


/data # weed server -h
Example: weed server -dir=/tmp -volume.max=5 -ip=server_name
Default Usage:
  -cpuprofile string
        cpu profile output file
  -dataCenter string
        current volume server's data center name
  -debug
        serves runtime profiling data, e.g., http://localhost:6060/debug/pprof/goroutine?debug=2
  -debug.port int
        http port for debugging (default 6060)
  -dir string
        directories to store data files. dir[,dir]... (default "/tmp")
  -disableHttp
        disable http requests, only gRPC operations are allowed.
  -filer
        whether to start filer
  -filer.collection string
        all data will be stored in this collection
  -filer.concurrentUploadLimitMB int
        limit total concurrent upload size (default 64)
  -filer.defaultReplicaPlacement string
        default replication type. If not specified, use master setting.
  -filer.dirListLimit int
        limit sub dir listing size (default 1000)
  -filer.disableDirListing
        turn off directory listing
  -filer.disk string
        [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -filer.downloadMaxMBps int
        download max speed for each download request, in MB per second
  -filer.encryptVolumeData
        encrypt data on volume servers
  -filer.filerGroup string
        share metadata with other filers in the same filerGroup
  -filer.localSocket string
        default to /tmp/seaweedfs-filer-<port>.sock
  -filer.maxMB int
        split files larger than the limit (default 4)
  -filer.port int
        filer server http listen port (default 8888)
  -filer.port.grpc int
        filer server grpc listen port
  -filer.port.public int
        filer server public http listen port
  -filer.saveToFilerLimit int
        Small files smaller than this limit can be cached in filer store.
  -filer.ui.deleteDir
        enable filer UI show delete directory button (default true)
  -iam
        whether to start IAM service
  -iam.port int
        iam server http listen port (default 8111)
  -idleTimeout int
        connection idle seconds (default 30)
  -ip string
        ip or server name, also used as identifier (default "172.17.0.6")
  -ip.bind string
        ip address to bind to. If empty, default to same as -ip option.
  -master
        whether to start master server (default true)
  -master.defaultReplication string
        Default replication type if not specified.
  -master.dir string
        data directory to store meta data, default to same as -dir specified
  -master.electionTimeout duration
        election timeout of master servers (default 10s)
  -master.garbageThreshold float
        threshold to vacuum and reclaim spaces (default 0.3)
  -master.heartbeatInterval duration
        heartbeat interval of master servers, and will be randomly multiplied by [1, 1.25) (default 300ms)
  -master.metrics.address string
        Prometheus gateway address
  -master.metrics.intervalSeconds int
        Prometheus push interval in seconds (default 15)
  -master.peers string
        all master nodes in comma separated ip:masterPort list
  -master.port int
        master server http listen port (default 9333)
  -master.port.grpc int
        master server grpc listen port
  -master.raftHashicorp
        use hashicorp raft
  -master.resumeState
        resume previous state on start master server
  -master.volumePreallocate
        Preallocate disk space for volumes.
  -master.volumeSizeLimitMB uint
        Master stops directing writes to oversized volumes. (default 30000)
  -memprofile string
        memory profile output file
  -metricsPort int
        Prometheus metrics listen port
  -mq.broker
        whether to start message queue broker
  -mq.broker.port int
        message queue broker gRPC listen port (default 17777)
  -options string
        a file of command line options, each line in optionName=optionValue format
  -rack string
        current volume server's rack name
  -s3
        whether to start S3 gateway
  -s3.allowDeleteBucketNotEmpty
        allow recursive deleting all entries along with bucket (default true)
  -s3.allowEmptyFolder
        allow empty folders (default true)
  -s3.auditLogConfig string
        path to the audit log config file
  -s3.cert.file string
        path to the TLS certificate file
  -s3.config string
        path to the config file
  -s3.domainName string
        suffix of the host name in comma separated list, {bucket}.{domainName}
  -s3.key.file string
        path to the TLS private key file
  -s3.port int
        s3 server http listen port (default 8333)
  -s3.port.grpc int
        s3 server grpc listen port
  -volume
        whether to start volume server (default true)
  -volume.compactionMBps int
        limit compaction speed in mega bytes per second
  -volume.concurrentDownloadLimitMB int
        limit total concurrent download size (default 64)
  -volume.concurrentUploadLimitMB int
        limit total concurrent upload size (default 64)
  -volume.dir.idx string
        directory to store .idx files
  -volume.disk string
        [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -volume.fileSizeLimitMB int
        limit file size to avoid out of memory (default 256)
  -volume.hasSlowRead
        <experimental> if true, this prevents slow reads from blocking other requests, but large file read P99 latency will increase. (default true)
  -volume.images.fix.orientation
        Adjust jpg orientation when uploading.
  -volume.index string
        Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance. (default "memory")
  -volume.index.leveldbTimeout int
        alive time for leveldb (default to 0). If leveldb of volume is not accessed in ldbTimeout hours, it will be off loaded to reduce opened files and memory consumption.
  -volume.inflightUploadDataTimeout duration
        inflight upload data wait timeout of volume servers (default 1m0s)
  -volume.max string
        maximum numbers of volumes, count[,count]... If set to zero, the limit will be auto configured as free disk space divided by volume size. (default "8")
  -volume.minFreeSpace string
        min free disk space (value<=100 as percentage like 1, other as human readable bytes, like 10GiB). Low disk space will mark all volumes as ReadOnly.
  -volume.minFreeSpacePercent string
        minimum free disk space (default to 1%). Low disk space will mark all volumes as ReadOnly (deprecated, use minFreeSpace instead). (default "1")
  -volume.port int
        volume server http listen port (default 8080)
  -volume.port.grpc int
        volume server grpc listen port
  -volume.port.public int
        volume server public port
  -volume.pprof
        enable pprof http handlers. precludes --memprofile and --cpuprofile
  -volume.preStopSeconds int
        number of seconds between stop send heartbeats and stop volume server (default 10)
  -volume.publicUrl string
        publicly accessible address
  -volume.readBufferSizeMB int
        <experimental> larger values can optimize query performance but will increase some memory usage,Use with hasSlowRead normally (default 4)
  -volume.readMode string
        [local|proxy|redirect] how to deal with non-local volume: 'not found|read in remote node|redirect volume location'. (default "proxy")
  -webdav
        whether to start WebDAV gateway
  -webdav.cacheCapacityMB int
        local cache capacity in MB
  -webdav.cacheDir string
        local cache directory for file chunks (default "/tmp")
  -webdav.cert.file string
        path to the TLS certificate file
  -webdav.collection string
        collection to create the files
  -webdav.disk string
        [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -webdav.filer.path string
        use this remote path from filer server (default "/")
  -webdav.key.file string
        path to the TLS private key file
  -webdav.port int
        webdav server http listen port (default 7333)
  -webdav.replication string
        replication to create the files
  -whiteList string
        comma separated Ip addresses having write permission. No limit if empty.
Description:
  start both a volume server to provide storage spaces
  and a master server to provide volume=>location mapping service and sequence number of file ids

  This is provided as a convenient way to start both volume server and master server.
  The servers acts exactly the same as starting them separately.
  So other volume servers can connect to this master server also.

  Optionally, a filer server can be started.
  Also optionally, a S3 gateway can be started.
/data #
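Putting the flags above into a `docker run` line gives something like the following sketch. The image name `chrislusf/seaweedfs`, the host path, and the flag values are assumptions to adapt; the ports map to the defaults listed above (9333 master, 8080 volume, 8888 filer):

```shell
# Single-node master + volume + filer in one container (illustrative values):
docker run -d --name seaweedfs \
  -p 9333:9333 -p 8080:8080 -p 8888:8888 \
  -v /opt/seaweedfs:/data \
  chrislusf/seaweedfs server -dir=/data -filer \
  -volume.max=0 -volume.minFreeSpace=10GiB
```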


weed volume -h


/data # weed volume -h
Example: weed volume -port=8080 -dir=/tmp -max=5 -ip=server_name -mserver=localhost:9333
Default Usage:
  -compactionMBps int
        limit background compaction or copying speed in mega bytes per second
  -concurrentDownloadLimitMB int
        limit total concurrent download size (default 256)
  -concurrentUploadLimitMB int
        limit total concurrent upload size (default 256)
  -cpuprofile string
        cpu profile output file
  -dataCenter string
        current volume server's data center name
  -dir string
        directories to store data files. dir[,dir]... (default "/tmp")
  -dir.idx string
        directory to store .idx files
  -disk string
        [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -fileSizeLimitMB int
        limit file size to avoid out of memory (default 256)
  -hasSlowRead
        <experimental> if true, this prevents slow reads from blocking other requests, but large file read P99 latency will increase. (default true)
  -idleTimeout int
        connection idle seconds (default 30)
  -images.fix.orientation
        Adjust jpg orientation when uploading.
  -index string
        Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance. (default "memory")
  -index.leveldbTimeout int
        alive time for leveldb (default to 0). If leveldb of volume is not accessed in ldbTimeout hours, it will be off loaded to reduce opened files and memory consumption.
  -inflightUploadDataTimeout duration
        inflight upload data wait timeout of volume servers (default 1m0s)
  -ip string
        ip or server name, also used as identifier (default "172.17.0.6")
  -ip.bind string
        ip address to bind to. If empty, default to same as -ip option.
  -max string
        maximum numbers of volumes, count[,count]... If set to zero, the limit will be auto configured as free disk space divided by volume size. (default "8")
  -memprofile string
        memory profile output file
  -metricsPort int
        Prometheus metrics listen port
  -minFreeSpace string
        min free disk space (value<=100 as percentage like 1, other as human readable bytes, like 10GiB). Low disk space will mark all volumes as ReadOnly.
  -minFreeSpacePercent string
        minimum free disk space (default to 1%). Low disk space will mark all volumes as ReadOnly (deprecated, use minFreeSpace instead). (default "1")
  -mserver string
        comma-separated master servers (default "localhost:9333")
  -options string
        a file of command line options, each line in optionName=optionValue format
  -port int
        http listen port (default 8080)
  -port.grpc int
        grpc listen port
  -port.public int
        port opened to public
  -pprof
        enable pprof http handlers. precludes --memprofile and --cpuprofile
  -preStopSeconds int
        number of seconds between stop send heartbeats and stop volume server (default 10)
  -publicUrl string
        Publicly accessible address
  -rack string
        current volume server's rack name
  -readBufferSizeMB int
        <experimental> larger values can optimize query performance but will increase some memory usage,Use with hasSlowRead normally. (default 4)
  -readMode string
        [local|proxy|redirect] how to deal with non-local volume: 'not found|proxy to remote node|redirect volume location'. (default "proxy")
  -whiteList string
        comma separated Ip addresses having write permission. No limit if empty.
Description:
  start a volume server to provide storage spaces
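For `-max=0`, the help text says the limit is auto-configured as free disk space divided by volume size. A back-of-the-envelope check, assuming the 30000 MB default from `-master.volumeSizeLimitMB` (the free-space figure is illustrative):

```shell
#!/bin/sh
# max volumes ≈ free disk space / per-volume size limit
free_mb=120000         # assumed free space: ~120 GB
volume_limit_mb=30000  # default -master.volumeSizeLimitMB
echo $((free_mb / volume_limit_mb))  # prints 4
```

Lowering `-master.volumeSizeLimitMB` raises the auto-configured volume count for the same disk.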

中文

/data # weed volume -h
Example: weed volume -port=8080 -dir=/tmp -max=5 -ip=server_name -mserver=localhost:9333
Default Usage:
  -compactionMBps int
        limit background compaction or copying speed in mega bytes per second
        [Limit background compaction or copy speed, in MB per second]
  -concurrentDownloadLimitMB int
        limit total concurrent download size (default 256)
        [Limit total concurrent download size; default 256 MB]
  -concurrentUploadLimitMB int
        limit total concurrent upload size (default 256)
        [Limit total concurrent upload size; default 256 MB]
  -cpuprofile string
        cpu profile output file
        [CPU profile output file]
  -dataCenter string
        current volume server's data center name
        [Data center name of this volume server]
  -dir string
        directories to store data files. dir[,dir]... (default "/tmp")
        [Data file directories, comma separated; default /tmp]
  -dir.idx string
        directory to store .idx files
        [Directory for .idx index files]
  -disk string
        [hdd|ssd|<tag>] hard drive or solid state drive or any tag
        [Disk type tag: hdd, ssd, or a custom tag]
  -fileSizeLimitMB int
        limit file size to avoid out of memory (default 256)
        [Limit single-file size to avoid running out of memory; default 256 MB]
  -hasSlowRead
        <experimental> if true, this prevents slow reads from blocking other requests, but large file read P99 latency will increase. (default true)
        [Experimental: keeps slow reads from blocking other requests, at the cost of higher large-file read P99 latency]
  -idleTimeout int
        connection idle seconds (default 30)
        [Connection idle timeout in seconds; default 30]
  -images.fix.orientation
        Adjust jpg orientation when uploading.
        [Auto-correct JPG orientation on upload]
  -index string
        Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance. (default "memory")
        [Index storage mode: in-memory, or LevelDB at different sizes]
  -index.leveldbTimeout int
        alive time for leveldb (default to 0). If leveldb of volume is not accessed in ldbTimeout hours, it will be off loaded to reduce opened files and memory consumption.
        [LevelDB keep-alive time in hours; unloaded after the timeout to reduce open files and memory use]
  -inflightUploadDataTimeout duration
        inflight upload data wait timeout of volume servers (default 1m0s)
        [Wait timeout for in-flight upload data; default 1 minute]
  -ip string
        ip or server name, also used as identifier (default "172.17.0.6")
        [Server IP or name, also used as its identifier]
  -ip.bind string
        ip address to bind to. If empty, default to same as -ip option.
        [IP address to bind to; defaults to the -ip value]
  -max string
        maximum numbers of volumes, count[,count]... If set to zero, the limit will be auto configured as free disk space divided by volume size. (default "8")
        [Maximum number of volumes; 0 means auto-configure as free disk space divided by volume size]
  -memprofile string
        memory profile output file
        [Memory profile output file]
  -metricsPort int
        Prometheus metrics listen port
        [Prometheus metrics listen port]
  -minFreeSpace string
        min free disk space (value<=100 as percentage like 1, other as human readable bytes, like 10GiB). Low disk space will mark all volumes as ReadOnly.
        [Minimum free disk space (a percentage if <=100, otherwise a human-readable size like 10GiB); when breached, all volumes are marked read-only]
  -minFreeSpacePercent string
        minimum free disk space (default to 1%). Low disk space will mark all volumes as ReadOnly (deprecated, use minFreeSpace instead). (default "1")
        [Deprecated; use minFreeSpace instead]
  -mserver string
        comma-separated master servers (default "localhost:9333")
        [Comma-separated master server addresses]
  -options string
        a file of command line options, each line in optionName=optionValue format
        [Path to an options file, one optionName=optionValue per line]
  -port int
        http listen port (default 8080)
        [HTTP listen port]
  -port.grpc int
        grpc listen port
        [gRPC listen port]
  -port.public int
        port opened to public
        [Port opened to the public]
  -pprof
        enable pprof http handlers. precludes --memprofile and --cpuprofile
        [Enable pprof HTTP handlers; mutually exclusive with --memprofile/--cpuprofile]
  -preStopSeconds int
        number of seconds between stop send heartbeats and stop volume server (default 10)
        [Seconds between stopping heartbeats and stopping the volume server]
  -publicUrl string
        Publicly accessible address
        [Publicly accessible address]
  -rack string
        current volume server's rack name
        [Rack name of this volume server]
  -readBufferSizeMB int
        <experimental> larger values can optimize query performance but will increase some memory usage,Use with hasSlowRead normally. (default 4)
        [Experimental: larger values can speed up queries at the cost of some memory; default 4 MB; normally used together with hasSlowRead]
  -readMode string
        [local|proxy|redirect] how to deal with non-local volume: 'not found|proxy to remote node|redirect volume location'. (default "proxy")
        [How to handle a non-local volume: local = report not found, proxy = forward to the remote node, redirect = redirect to the volume's location]
  -whiteList string
        comma separated Ip addresses having write permission. No limit if empty.
        [Comma-separated IP whitelist for write permission; empty means no limit]
Description:
  start a volume server to provide storage spaces
  [Start a volume server to provide storage space]

Key points

-master.garbageThreshold float  # garbage ratio that triggers vacuuming to reclaim space (default 0.3)
        threshold to vacuum and reclaim spaces (default 0.3)  

-volume.max string  # maximum number of volumes; set to 0 to auto-configure from free disk space / volume size (default "8")
        maximum numbers of volumes, count[,count]... If set to zero, the limit will be auto configured as free disk space divided by volume size. (default "8")  
-volume.minFreeSpace string  # minimum free disk space (a percentage if <=100, e.g. 30 for 30%, otherwise a size like 10GiB); when breached, all volumes are marked read-only
        min free disk space (value<=100 as percentage like 1, other as human readable bytes, like 10GiB). Low disk space will mark all volumes as ReadOnly.  
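The "-volume.max=0" auto rule above (free disk space divided by volume size) can be sketched with arithmetic. Both figures below are hypothetical example values, not probed from a real disk; the 30000 MB volume size is the master's `-volumeSizeLimitMB` default.

```shell
# Sketch of the -volume.max=0 auto rule: max volumes = free disk space / volume size.
free_mb=100000         # hypothetical: ~100 GB free on the data directory's disk
volume_size_mb=30000   # the master's -volumeSizeLimitMB default
echo "auto max volumes: $((free_mb / volume_size_mb))"
# → auto max volumes: 3
```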

I tested this, and -volume.minFreeSpace is pretty aggressive: with a value of 10 (i.e. 10%), it leaves the system only 10% free space and claims everything else up front.
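Per the help text, the documented effect of the threshold is to mark all volumes ReadOnly once free space drops below it. A toy sketch of that check, with all numbers hypothetical:

```shell
# minFreeSpace rule from the help text: a value <= 100 is a percentage; low free
# space marks all volumes ReadOnly. Figures here are made-up examples.
threshold=10       # -volume.minFreeSpace=10, i.e. 10%
total_mb=100000
free_mb=8000
free_pct=$((free_mb * 100 / total_mb))   # 8% free in this example
if [ "$free_pct" -lt "$threshold" ]; then
  echo "below threshold: volumes marked ReadOnly"
else
  echo "above threshold: volumes stay writable"
fi
# → below threshold: volumes marked ReadOnly
```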


Trying only the -volume.max parameter to cap the number of volumes (each volume seems to be roughly 1 GB)

I tried setting it to 20:

    docker run \
        -d -i -t --restart always \
        --name $CONTAINER_NAME \
        -p $MASTER_PORT:9333 \
        -p $FILER_PORT:8888 \
        -v $SCRIPT_LOCATION/mount/masterVolumeFiler/data/:/data/ \
        -v /etc/localtime:/etc/localtime:ro \
        --log-driver=json-file \
        --log-opt max-size=100m \
        --log-opt max-file=3 \
        $IMAGE_NAME:$IMAGE_TAG \
        server -filer -volume.max=20
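Once the container is up, the master's /dir/status endpoint reports volume-slot usage, which is a handy way to watch the -volume.max=20 cap fill up. The JSON below is an abridged, hypothetical response used only to sketch jq-free parsing; a real payload nests per-data-center detail.

```shell
# Abridged, hypothetical /dir/status payload; in practice fetch it with:
#   response=$(curl -s http://localhost:9333/dir/status)
response='{"Topology":{"Max":20,"Free":17}}'
# Extract the Max (total slots) and Free (unused slots) counters.
max=$(echo "$response" | sed -n 's/.*"Max":\([0-9]*\).*/\1/p')
free=$(echo "$response" | sed -n 's/.*"Free":\([0-9]*\).*/\1/p')
echo "volume slots in use: $((max - free)) of $max"
# → volume slots in use: 3 of 20
```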


As files keep being uploaded, it expands in stages:



