【Highway-env】Reading the IntersectionEnv Code


Table of Contents

  • Main responsibilities
  • Code structure
    • 1. action space
    • 2. default_config
    • 3. reward
      • _agent_rewards
      • _agent_reward
      • _reward
      • _rewards
      • Summary
    • 4. terminated & truncated
    • 5. reset
      • _make_road
      • _make_vehicles
      • _spawn_vehicle
    • 6. step

Main responsibilities

IntersectionEnv inherits from AbstractEnv and is mainly responsible for the following four tasks:

  • default_config: the environment's default configuration
  • define_spaces: sets up the corresponding action and observation spaces
  • step: executes the policy at the policy frequency and simulates the environment at the simulation frequency
  • render: used for display

Code structure

The code can be roughly divided into the following parts, and I will analyze it from these angles.

[figure: IntersectionEnv code structure]

The code structure of the AbstractEnv base class is attached as well.

[figure: AbstractEnv code structure]

1. action space

The IntersectionEnv class first defines the action space: three meta-actions, SLOWER, IDLE and FASTER, with the default target speeds set to [0, 4.5, 9] m/s.
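
As a quick check, the resulting action space can be inspected directly. A minimal sketch, assuming a recent highway-env release in which importing the package registers the environments (including intersection-v0) with gymnasium:

    import gymnasium as gym
    import highway_env  # noqa: F401  (importing highway_env registers its environments)

    env = gym.make("intersection-v0")
    env.reset()
    print(env.action_space)  # Discrete(3)
    # With "longitudinal": True and "lateral": False, DiscreteMetaAction keeps only the
    # longitudinal meta-actions {0: 'SLOWER', 1: 'IDLE', 2: 'FASTER'}, which select among
    # the configured target_speeds [0, 4.5, 9] m/s.
    print(env.unwrapped.action_type.actions)  # index -> meta-action mapping (attribute name assumed)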

2. default_config

default_config sets the environment's default configuration, as shown below:

    @classmethod
    def default_config(cls) -> dict:
        config = super().default_config()
        config.update({
            "observation": {
                "type": "Kinematics",
                "vehicles_count": 15,
                "features": ["presence", "x", "y", "vx", "vy", "cos_h", "sin_h"],
                "features_range": {
                    "x": [-100, 100],
                    "y": [-100, 100],
                    "vx": [-20, 20],
                    "vy": [-20, 20],
                },
                "absolute": True,
                "flatten": False,
                "observe_intentions": False
            },
            "action": {
                "type": "DiscreteMetaAction",
                "longitudinal": True,
                "lateral": False,
                "target_speeds": [0, 4.5, 9]
            },
            "duration": 13,  # [s]
            "destination": "o1",
            "controlled_vehicles": 1,
            "initial_vehicle_count": 10,
            "spawn_probability": 0.6,
            "screen_width": 600,
            "screen_height": 600,
            "centering_position": [0.5, 0.6],
            "scaling": 5.5 * 1.3,
            "collision_reward": -5,
            "high_speed_reward": 1,
            "arrived_reward": 1,
            "reward_speed_range": [7.0, 9.0],
            "normalize_reward": False,
            "offroad_terminal": False
        })
        return config

The default configuration also includes the part defined in AbstractEnv:

    @classmethod
    def default_config(cls) -> dict:
        """
        Default environment configuration.

        Can be overloaded in environment implementations, or by calling configure().
        :return: a configuration dict
        """        
        return {
            "observation": {
                "type": "Kinematics"
            },
            "action": {
                "type": "DiscreteMetaAction"
            },
            "simulation_frequency": 15,  # [Hz]
            "policy_frequency": 1,  # [Hz]
            "other_vehicles_type": "highway_env.vehicle.behavior.IDMVehicle",
            "screen_width": 600,  # [px]
            "screen_height": 150,  # [px]
            "centering_position": [0.3, 0.5],
            "scaling": 5.5,
            "show_trajectories": False,
            "render_agent": True,
            "offscreen_rendering": os.environ.get("OFFSCREEN_RENDERING", "0") == "1",
            "manual_control": False,
            "real_time_rendering": False
        }
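
Any of these defaults can be overridden through configure() before reset(). A minimal sketch, under the same registration assumption as above:

    import gymnasium as gym
    import highway_env  # noqa: F401

    env = gym.make("intersection-v0")
    env.unwrapped.configure({
        "duration": 20,               # longer episodes [s]
        "initial_vehicle_count": 15,  # denser initial traffic
        "normalize_reward": True,     # map the scalar reward into [0, 1]
    })
    obs, info = env.reset()           # the new configuration takes effect on reset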

3. reward

Next comes the reward function. AbstractEnv declares the _reward and _rewards functions, where _rewards is only used to populate the info dict.

    def _reward(self, action: Action) -> float:
        """
        Return the reward associated with performing a given action and ending up in the current state.

        :param action: the last action performed
        :return: the reward
        """
        raise NotImplementedError

    def _rewards(self, action: Action) -> Dict[Text, float]:
        """
        Returns a multi-objective vector of rewards.

        If implemented, this reward vector should be aggregated into a scalar in _reward().
        This vector value should only be returned inside the info dict.

        :param action: the last action performed
        :return: a dict of {'reward_name': reward_value}
        """
        raise NotImplementedError

The IntersectionEnv class implements four functions: _reward, _rewards, _agent_reward and _agent_rewards. We start with the last one:

_agent_rewards


    def _agent_rewards(self, action: int, vehicle: Vehicle) -> Dict[Text, float]:
        """Per-agent per-objective reward signal."""
        scaled_speed = utils.lmap(vehicle.speed, self.config["reward_speed_range"], [0, 1])
        return {
            "collision_reward": vehicle.crashed,
            "high_speed_reward": np.clip(scaled_speed, 0, 1),
            "arrived_reward": self.has_arrived(vehicle),
            "on_road_reward": vehicle.on_road
        }

First, the vehicle speed is linearly mapped to obtain scaled_speed.
The lmap function implements this linear mapping:

  • Input: the value $v$ to map, the source range $[x_0, x_1]$ and the target range $[y_0, y_1]$
  • Output: $y_0 + \frac{(v - x_0)\times(y_1 - y_0)}{x_1 - x_0}$

For example, scaled_speed = utils.lmap(5, [7, 9], [0, 1]) returns -1.

utils.py

    def lmap(v: float, x: Interval, y: Interval) -> float:
        """Linear map of value v with range x to desired range y."""
        return y[0] + (v - x[0]) * (y[1] - y[0]) / (x[1] - x[0])

has_arrived is determined by the conditions below. lane_index is a triple, e.g. ('il1', 'o1', 0); the check verifies that the vehicle's current lane starts at an inner node ('il'), ends at an outer node ('o'), and that the vehicle's longitudinal coordinate in the lane frame is at least exit_distance. For instance, a vehicle on lane ('il1', 'o1', 0) whose longitudinal coordinate is 30 m counts as arrived, since 30 ≥ 25.

    def has_arrived(self, vehicle: Vehicle, exit_distance: float = 25) -> bool:
        return "il" in vehicle.lane_index[0] \
               and "o" in vehicle.lane_index[1] \
               and vehicle.lane.local_coordinates(vehicle.position)[0] >= exit_distance

_agent_reward

_agent_reward takes the dict returned by _agent_rewards, computes a weighted sum of the rewards and then decides whether to normalize the result:

$$R = \left(w_{collision}\, R_{collision} + w_{highspeed}\, R_{highspeed} + w_{arrived}\, R_{arrived}\right)\cdot R_{onroad}$$

If the vehicle has arrived, the weighted sum is replaced by arrived_reward before being multiplied by the on-road indicator $R_{onroad}$.

With reward normalization enabled:

$$R_{norm} = \frac{R - w_{collision}}{w_{arrived} - w_{collision}}$$

    def _agent_reward(self, action: int, vehicle: Vehicle) -> float:
        """Per-agent reward signal."""
        rewards = self._agent_rewards(action, vehicle)
        reward = sum(self.config.get(name, 0) * reward for name, reward in rewards.items())
        reward = self.config["arrived_reward"] if rewards["arrived_reward"] else reward
        reward *= rewards["on_road_reward"]
        if self.config["normalize_reward"]:
            reward = utils.lmap(reward, [self.config["collision_reward"], self.config["arrived_reward"]], [0, 1])
        return reward
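
As a quick sanity check with the default weights (collision_reward = -5, high_speed_reward = 1, arrived_reward = 1): an on-road vehicle that has neither crashed nor arrived and drives at 9 m/s gets $R = -5\cdot 0 + 1\cdot 1 + 1\cdot 0 = 1$; if it crashes while driving at 8 m/s (scaled speed 0.5), it gets $R = -5\cdot 1 + 1\cdot 0.5 = -4.5$, which normalization maps to $\frac{-4.5-(-5)}{1-(-5)} \approx 0.083$.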

_reward

_reward sums the rewards obtained by all controlled vehicles for the given action and divides by the number of vehicles to obtain the average reward.

    def _reward(self, action: int) -> float:
        """Aggregated reward, for cooperative agents."""
        return sum(self._agent_reward(action, vehicle) for vehicle in self.controlled_vehicles
                   ) / len(self.controlled_vehicles)

_rewards

The _rewards method computes the multi-objective rewards for cooperative agents. For each objective, it sums the per-agent values over all controlled vehicles and divides by the number of vehicles, returning a dict that maps each reward name to its average value. The result is finally placed into the info dict.

    def _rewards(self, action: int) -> Dict[Text, float]:
        """Multi-objective rewards, for cooperative agents."""
        agents_rewards = [self._agent_rewards(action, vehicle) for vehicle in self.controlled_vehicles]
        return {
            name: sum(agent_rewards[name] for agent_rewards in agents_rewards) / len(agents_rewards)
            for name in agents_rewards[0].keys()
        }

AbstractEnv

    def _info(self, obs: Observation, action: Optional[Action] = None) -> dict:
        """
        Return a dictionary of additional information

        :param obs: current observation
        :param action: current action
        :return: info dict
        """
        info = {
            "speed": self.vehicle.speed,
            "crashed": self.vehicle.crashed,
            "action": action,
        }
        try:
            info["rewards"] = self._rewards(action)
        except NotImplementedError:
            pass
        return info

IntersectionEnv

    def _info(self, obs: np.ndarray, action: int) -> dict:
        info = super()._info(obs, action)
        info["agents_rewards"] = tuple(self._agent_reward(action, vehicle) for vehicle in self.controlled_vehicles)
        info["agents_dones"] = tuple(self._agent_is_terminal(vehicle) for vehicle in self.controlled_vehicles)
        return info
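
For the default single-agent setup, the info dict returned by step() therefore has roughly the following structure (illustrative values only, not captured output):

    # Illustrative structure of the info dict for one controlled vehicle (values are made up)
    info = {
        "speed": 8.7,                  # ego speed [m/s]
        "crashed": False,
        "action": 1,                   # the meta-action index that was executed
        "rewards": {                   # per-objective averages from _rewards()
            "collision_reward": 0.0,
            "high_speed_reward": 0.85,
            "arrived_reward": 0.0,
            "on_road_reward": 1.0,
        },
        "agents_rewards": (0.85,),     # per-agent scalars from _agent_reward()
        "agents_dones": (False,),      # per-agent terminal flags from _agent_is_terminal()
    }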

Summary

To summarize the reward pipeline: _agent_rewards computes the per-agent, per-objective reward terms; _agent_reward aggregates them into a scalar for each agent; _reward averages these scalars over all controlled vehicles, while _rewards averages each objective separately and is only reported in the info dict.

4. terminated & truncated

  • The episode is terminated (_is_terminated) when any controlled vehicle crashes, when all controlled vehicles have arrived, or when offroad_terminal is enabled and the ego vehicle leaves the road.
  • The episode is truncated when the elapsed time exceeds the configured duration.
  • The _agent_is_terminal method is only used to fill the info dict (agents_dones).

    def _is_terminated(self) -> bool:
        return any(vehicle.crashed for vehicle in self.controlled_vehicles) \
               or all(self.has_arrived(vehicle) for vehicle in self.controlled_vehicles) \
               or (self.config["offroad_terminal"] and not self.vehicle.on_road)

    def _agent_is_terminal(self, vehicle: Vehicle) -> bool:
        """The episode is over when a collision occurs or when the access ramp has been passed."""
        return (vehicle.crashed or
                self.has_arrived(vehicle))

    def _is_truncated(self) -> bool:
        """The episode is truncated if the time limit is reached."""
        return self.time >= self.config["duration"]

5. reset

In AbstractEnv, reset() calls _reset(), which IntersectionEnv implements by building the scene with _make_road() and _make_vehicles().

_make_road


_make_road builds a 4-way intersection with the following four priority levels:

  • Priority 3: horizontal straight lanes and right-turns
  • Priority 2: horizontal left-turns
  • Priority 1: vertical straight lanes and right-turns
  • Priority 0: vertical left-turns

Nodes in the road network are named according to the following rule:

(o:outer | i:inner + [r:right, l:left]) + (0:south | 1:west | 2:north | 3:east)

    def _make_road(self) -> None:
        """
        Make an 4-way intersection.

        The horizontal road has the right of way. More precisely, the levels of priority are:
            - 3 for horizontal straight lanes and right-turns
            - 1 for vertical straight lanes and right-turns
            - 2 for horizontal left-turns
            - 0 for vertical left-turns

        The code for nodes in the road network is:
        (o:outer | i:inner + [r:right, l:left]) + (0:south | 1:west | 2:north | 3:east)

        :return: the intersection road
        """
        lane_width = AbstractLane.DEFAULT_WIDTH
        right_turn_radius = lane_width + 5  # [m]
        left_turn_radius = right_turn_radius + lane_width  # [m]
        outer_distance = right_turn_radius + lane_width / 2
        access_length = 50 + 50  # [m]

        net = RoadNetwork()
        n, c, s = LineType.NONE, LineType.CONTINUOUS, LineType.STRIPED
        for corner in range(4):
            angle = np.radians(90 * corner)
            is_horizontal = corner % 2
            priority = 3 if is_horizontal else 1
            rotation = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
            # Incoming
            start = rotation @ np.array([lane_width / 2, access_length + outer_distance])
            end = rotation @ np.array([lane_width / 2, outer_distance])
            net.add_lane("o" + str(corner), "ir" + str(corner),
                         StraightLane(start, end, line_types=[s, c], priority=priority, speed_limit=10))
            # Right turn
            r_center = rotation @ (np.array([outer_distance, outer_distance]))
            net.add_lane("ir" + str(corner), "il" + str((corner - 1) % 4),
                         CircularLane(r_center, right_turn_radius, angle + np.radians(180), angle + np.radians(270),
                                      line_types=[n, c], priority=priority, speed_limit=10))
            # Left turn
            l_center = rotation @ (np.array([-left_turn_radius + lane_width / 2, left_turn_radius - lane_width / 2]))
            net.add_lane("ir" + str(corner), "il" + str((corner + 1) % 4),
                         CircularLane(l_center, left_turn_radius, angle + np.radians(0), angle + np.radians(-90),
                                      clockwise=False, line_types=[n, n], priority=priority - 1, speed_limit=10))
            # Straight
            start = rotation @ np.array([lane_width / 2, outer_distance])
            end = rotation @ np.array([lane_width / 2, -outer_distance])
            net.add_lane("ir" + str(corner), "il" + str((corner + 2) % 4),
                         StraightLane(start, end, line_types=[s, n], priority=priority, speed_limit=10))
            # Exit
            start = rotation @ np.flip([lane_width / 2, access_length + outer_distance], axis=0)
            end = rotation @ np.flip([lane_width / 2, outer_distance], axis=0)
            net.add_lane("il" + str((corner - 1) % 4), "o" + str((corner - 1) % 4),
                         StraightLane(end, start, line_types=[n, c], priority=priority, speed_limit=10))

        road = RegulatedRoad(network=net, np_random=self.np_random, record_history=self.config["show_trajectories"])
        self.road = road

First, the geometric parameters lane_width, right_turn_radius, left_turn_radius, outer_distance and access_length are set up.

Rotation matrix: $\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$

The code then loops over the four corners and builds the corresponding lanes of the road network.
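
To get a feel for how the rotation places each corner's incoming lane, here is a small standalone sketch (illustrative only; it assumes lane_width = AbstractLane.DEFAULT_WIDTH = 4 m and reuses the geometry constants from the code above):

    import numpy as np

    lane_width = 4.0                              # assumed value of AbstractLane.DEFAULT_WIDTH
    right_turn_radius = lane_width + 5            # [m]
    outer_distance = right_turn_radius + lane_width / 2
    access_length = 50 + 50                       # [m]

    for corner in range(4):
        angle = np.radians(90 * corner)
        rotation = np.array([[np.cos(angle), -np.sin(angle)],
                             [np.sin(angle),  np.cos(angle)]])
        # World-frame start/end points of the incoming lane "o{corner}" -> "ir{corner}"
        start = rotation @ np.array([lane_width / 2, access_length + outer_distance])
        end = rotation @ np.array([lane_width / 2, outer_distance])
        print(f"o{corner} -> ir{corner}: start={np.round(start, 1)}, end={np.round(end, 1)}")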

_make_vehicles


    def _make_vehicles(self, n_vehicles: int = 10) -> None:
        """
        Populate a road with several vehicles on the highway and on the merging lane

        :return: the ego-vehicle
        """
        # Configure vehicles
        vehicle_type = utils.class_from_path(self.config["other_vehicles_type"])
        vehicle_type.DISTANCE_WANTED = 7  # Low jam distance
        vehicle_type.COMFORT_ACC_MAX = 6
        vehicle_type.COMFORT_ACC_MIN = -3

        # Random vehicles
        simulation_steps = 3
        for t in range(n_vehicles - 1):
            self._spawn_vehicle(np.linspace(0, 80, n_vehicles)[t])
        for _ in range(simulation_steps):
            [(self.road.act(), self.road.step(1 / self.config["simulation_frequency"])) for _ in range(self.config["simulation_frequency"])]

        # Challenger vehicle
        self._spawn_vehicle(60, spawn_probability=1, go_straight=True, position_deviation=0.1, speed_deviation=0)

        # Controlled vehicles
        self.controlled_vehicles = []
        for ego_id in range(0, self.config["controlled_vehicles"]):
            ego_lane = self.road.network.get_lane(("o{}".format(ego_id % 4), "ir{}".format(ego_id % 4), 0))
            destination = self.config["destination"] or "o" + str(self.np_random.randint(1, 4))
            ego_vehicle = self.action_type.vehicle_class(
                             self.road,
                             ego_lane.position(60 + 5*self.np_random.normal(1), 0),
                             speed=ego_lane.speed_limit,
                             heading=ego_lane.heading_at(60))
            try:
                ego_vehicle.plan_route_to(destination)
                ego_vehicle.speed_index = ego_vehicle.speed_to_index(ego_lane.speed_limit)
                ego_vehicle.target_speed = ego_vehicle.index_to_speed(ego_vehicle.speed_index)
            except AttributeError:
                pass

            self.road.vehicles.append(ego_vehicle)
            self.controlled_vehicles.append(ego_vehicle)
            for v in self.road.vehicles:  # Prevent early collisions
                if v is not ego_vehicle and np.linalg.norm(v.position - ego_vehicle.position) < 20:
                    self.road.vehicles.remove(v)

_spawn_vehicle

    def _spawn_vehicle(self,
                       longitudinal: float = 0,
                       position_deviation: float = 1.,
                       speed_deviation: float = 1.,
                       spawn_probability: float = 0.6,
                       go_straight: bool = False) -> None:
        # Only spawn a vehicle with the given probability
        if self.np_random.uniform() > spawn_probability:
            return

        # Pick a random entry corner and a random (different) exit corner;
        # when go_straight is set, force the exit to be the opposite corner
        route = self.np_random.choice(range(4), size=2, replace=False)
        route[1] = (route[0] + 2) % 4 if go_straight else route[1]
        vehicle_type = utils.class_from_path(self.config["other_vehicles_type"])
        # Place the vehicle on the chosen incoming lane with a randomized position and speed
        vehicle = vehicle_type.make_on_lane(self.road, ("o" + str(route[0]), "ir" + str(route[0]), 0),
                                            longitudinal=(longitudinal + 5
                                                          + self.np_random.normal() * position_deviation),
                                            speed=8 + self.np_random.normal() * speed_deviation)
        # Abort the spawn if another vehicle is within 15 m of the spawn point
        for v in self.road.vehicles:
            if np.linalg.norm(v.position - vehicle.position) < 15:
                return
        vehicle.plan_route_to("o" + str(route[1]))
        vehicle.randomize_behavior()
        self.road.vehicles.append(vehicle)
        return vehicle

6. step


abstract.py
    def step(self, action: Action) -> Tuple[Observation, float, bool, bool, dict]:
        """
        Perform an action and step the environment dynamics.

        The action is executed by the ego-vehicle, and all other vehicles on the road performs their default behaviour
        for several simulation timesteps until the next decision making step.

        :param action: the action performed by the ego-vehicle
        :return: a tuple (observation, reward, terminated, truncated, info)
        """
        if self.road is None or self.vehicle is None:
            raise NotImplementedError("The road and vehicle must be initialized in the environment implementation")

        self.time += 1 / self.config["policy_frequency"]
        self._simulate(action)

        obs = self.observation_type.observe()
        reward = self._reward(action)
        terminated = self._is_terminated()
        truncated = self._is_truncated()
        info = self._info(obs, action)
        if self.render_mode == 'human':
            self.render()

        return obs, reward, terminated, truncated, info

intersection_env.py

    def step(self, action: int) -> Tuple[np.ndarray, float, bool, bool, dict]:
        obs, reward, terminated, truncated, info = super().step(action)
        self._clear_vehicles()
        self._spawn_vehicle(spawn_probability=self.config["spawn_probability"])
        return obs, reward, terminated, truncated, info

abstract.py

    def _simulate(self, action: Optional[Action] = None) -> None:
        """Perform several steps of simulation with constant action."""
        frames = int(self.config["simulation_frequency"] // self.config["policy_frequency"])
        for frame in range(frames):
            # Forward action to the vehicle
            if action is not None \
                    and not self.config["manual_control"] \
                    and self.steps % int(self.config["simulation_frequency"] // self.config["policy_frequency"]) == 0:
                self.action_type.act(action)

            self.road.act()
            self.road.step(1 / self.config["simulation_frequency"])
            self.steps += 1

            # Automatically render intermediate simulation steps if a viewer has been launched
            # Ignored if the rendering is done offscreen
            if frame < frames - 1:  # Last frame will be rendered through env.render() as usual
                self._automatic_rendering()

        self.enable_auto_render = False
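
Putting the two frequencies together with the default configuration, each call to step() advances the simulation by simulation_frequency / policy_frequency frames, i.e. one second of simulated time per decision, and the episode is truncated after duration such decisions. A quick numeric check:

    simulation_frequency = 15  # [Hz], from default_config
    policy_frequency = 1       # [Hz]
    duration = 13              # [s]

    frames = simulation_frequency // policy_frequency  # simulation frames per env.step()
    dt = 1 / simulation_frequency                      # simulated time per frame [s]
    print(frames, frames * dt)          # 15 frames -> 1.0 s of simulated time per decision
    print(duration * policy_frequency)  # 13 decision steps before the episode is truncated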
