Reinforcement Learning with Code 【Chapter 8. Value Function Approximation】

Reinforcement Learning with Code

This note records how the author began to learn RL. Both theoretical understanding and code practice are presented. Many materials are referenced, such as Shiyu Zhao's *Mathematical Foundation of Reinforcement Learning*.

Table of Contents

  • Reinforcement Learning with Code
    • Chapter 8. Value Function Approximation
      • 8.1 State value: MC/TD learning with function approximation
      • 8.2 Action value: Sarsa with function approximation
      • 8.3 Optimal action value: Q-learning with function approximation
      • 8.4 Deep Q-learning (DQN)
    • Reference

Chapter 8. Value Function Approximation

So far in this note, state and action values have been represented in tabular form. This raises two problems. First, although the tabular representation is intuitive, it becomes impractical when the state-action space is large. Second, since the value of a state is updated only when that state is visited, the values of unvisited states cannot be estimated.

We can solve these problems by using a parameterized function $\hat{v}(s,w)$ to approximate the value function, where $w\in\mathbb{R}^m$ is the parameter vector. On the one hand, we only need to store the parameter $w$ rather than a value for every state $s$, which requires much less storage. On the other hand, when a state $s$ is visited, the parameter $w$ is updated, so the values of some other, unvisited states can also be estimated.

We usually use a neural network to approximate the value function. This is basically a regression problem: we can find the optimal parameter $w$ by minimizing some objective function, which we introduce next.
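As a minimal sketch of such a parameterization (PyTorch is assumed here; the state dimension and layer sizes are illustrative choices, not from the original text), the approximator $\hat{v}(s,w)$ can be a small MLP whose weights play the role of $w$:

```python
# Minimal sketch: a parameterized value function v_hat(s, w) as a small MLP.
# PyTorch is assumed; state_dim and hidden_dim are illustrative placeholders.
import torch
import torch.nn as nn

class ValueNet(nn.Module):
    def __init__(self, state_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),      # outputs a scalar state value
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        # s: batch of states with shape (batch_size, state_dim)
        return self.net(s).squeeze(-1)     # v_hat(s, w), shape (batch_size,)
```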

8.1 State value: MC/TD learning with function approximation

Let $v_\pi(s)$ and $\hat{v}(s,w)$ be the true state value and the approximated state value of $s\in\mathcal{S}$. The objective function usually considered in value function approximation is
$$\textcolor{red}{J(w) = \mathbb{E}\Big[ \big( v_\pi(S)-\hat{v}(S,w) \big)^2 \Big]}$$
which is also called the mean square error in deep learning. Here $S$ denotes the random variable corresponding to the state.

Next, we discuss the distribution of the random variable $S$. We often use the stationary distribution, which describes the long-run behavior of a Markov process. Let $\textcolor{blue}{\{d_\pi(s)\}_{s\in\mathcal{S}}}$ denote the stationary distribution of $S$. By definition, $d_\pi(s)\ge 0$ and $\sum_{s\in\mathcal{S}} d_\pi(s)=1$. Hence, the objective function can be rewritten as
$$\textcolor{red}{J(w) = \sum_{s\in\mathcal{S}} d_\pi(s) \Big[v_\pi(s)-\hat{v}(s,w) \Big]^2}$$
This objective function is a weighted squared error. How do we compute the stationary distribution of $S$? We often use the equation
$$d_\pi^T = d_\pi^T P_\pi$$
As a result, $d_\pi$ is the left eigenvector of $P_\pi$ associated with the eigenvalue $1$. The proof is omitted.
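For concreteness, here is a small sketch of that computation (numpy assumed; the transition matrix $P_\pi$ below is a made-up 3-state example): we extract the left eigenvector of $P_\pi$ for eigenvalue $1$ and normalize it so its entries sum to one.

```python
# Sketch: compute the stationary distribution d_pi from a transition matrix P_pi.
# numpy is assumed; this 3-state P_pi is a hypothetical example, not from the text.
import numpy as np

P_pi = np.array([[0.5, 0.3, 0.2],
                 [0.1, 0.6, 0.3],
                 [0.2, 0.2, 0.6]])

# d_pi^T = d_pi^T P_pi  <=>  d_pi is a left eigenvector of P_pi with eigenvalue 1,
# i.e., a (right) eigenvector of P_pi^T.
eigvals, eigvecs = np.linalg.eig(P_pi.T)
idx = np.argmin(np.abs(eigvals - 1.0))     # pick the eigenvalue closest to 1
d_pi = np.real(eigvecs[:, idx])
d_pi = d_pi / d_pi.sum()                   # normalize so the entries sum to 1
print(d_pi)                                # stationary distribution over the 3 states
```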

Recall the spirit of gradient descent (GD). We can use it to minimize the objective function:
$$\begin{aligned} w_{k+1} & = w_k - \alpha_k \nabla_w \mathbb{E}\Big[ (v_\pi(S)-\hat{v}(S,w))^2 \Big] \\ & = w_k - \alpha_k \mathbb{E} \Big[ \nabla_w(v_\pi(S)-\hat{v}(S,w))^2 \Big] \\ & = w_k - 2\alpha_k \mathbb{E}\Big[ (v_\pi(S)-\hat{v}(S,w)) \nabla_w(-\hat{v}(S,w)) \Big] \\ & = w_k + 2\alpha_k \mathbb{E} \Big[(v_\pi(S)-\hat{v}(S,w)) \nabla_w\hat{v}(S,w) \Big] \end{aligned}$$
where, without loss of generality, the coefficient 2 before $\alpha_k$ can be dropped. In the spirit of stochastic gradient descent (SGD), we can replace the expectation with a single sample to obtain
$$w_{t+1} = w_t + \alpha_t (v_\pi(s_t) - \hat{v}(s_t,w_t))\nabla_w \hat{v}(s_t,w_t)$$
However, this update cannot be implemented directly, because it requires the true state value $v_\pi$, which is exactly the unknown quantity to be estimated. Hence, we use the idea of Monte Carlo or TD learning to approximate it.

In the spirit of Monte Carlo learning, we let $g_t$ denote the discounted return
$$g_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \cdots$$
Then $g_t$ can be used as an approximation of $v_\pi(s_t)$. The algorithm becomes
$$w_{t+1} = w_t + \alpha_t (\textcolor{red}{g_t} - \hat{v}(s_t,w_t))\nabla_w \hat{v}(s_t,w_t)$$
In the spirit of TD learning, we can use $r_{t+1}+\gamma \hat{v}(s_{t+1},w_t)$ as the approximation of $v_\pi(s_t)$. The algorithm becomes
$$w_{t+1} = w_t + \alpha_t (\textcolor{red}{r_{t+1}+\gamma \hat{v}(s_{t+1},w_t)} - \hat{v}(s_t,w_t))\nabla_w \hat{v}(s_t,w_t)$$
Pseudocode:

*(figure omitted)*
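The following is a minimal sketch of this TD update (not the book's pseudocode) using a linear approximator $\hat{v}(s,w)=\phi(s)^T w$, for which $\nabla_w\hat{v}(s,w)=\phi(s)$; the feature map `phi` and the step size are illustrative placeholders. Replacing the TD target with the return $g_t$ gives the Monte Carlo variant.

```python
# Sketch of the TD update with function approximation for state values.
# A linear approximator v_hat(s, w) = phi(s)^T w is assumed, so grad_w v_hat = phi(s).
import numpy as np

def td_update(w, phi_s, phi_s_next, r, alpha=0.01, gamma=0.9, terminal=False):
    """One semi-gradient TD(0) step on the parameter vector w."""
    v_s = phi_s @ w
    v_s_next = 0.0 if terminal else phi_s_next @ w
    td_target = r + gamma * v_s_next           # r_{t+1} + gamma * v_hat(s_{t+1}, w_t)
    return w + alpha * (td_target - v_s) * phi_s

# Hypothetical usage with 4-dimensional one-hot features:
w = np.zeros(4)
w = td_update(w, phi_s=np.array([1.0, 0.0, 0.0, 0.0]),
              phi_s_next=np.array([0.0, 1.0, 0.0, 0.0]), r=1.0)
```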

8.2 Action value: Sarsa with function approximation

To search for optimal policies, we need to estimate action values. This section introduces how to estimate action values using Sarsa in the presence of value function approximation.

The action value $q_\pi(s,a)$ is approximated by a function $\hat{q}(s,a,w)$ parameterized by $w$. The objective function usually selected in action value approximation is
$$\textcolor{red}{J(w) = \mathbb{E}[(q_\pi(S,A) - \hat{q}(S,A,w))^2]}$$
We use stochastic gradient descent to minimize the objective function:
$$\begin{aligned} w_{k+1} & = w_k - \alpha_k \nabla_w \mathbb{E}[(q_\pi(S,A) - \hat{q}(S,A,w))^2] \\ & = w_k + 2\alpha_k \mathbb{E}\Big[ (q_\pi(S,A) - \hat{q}(S,A,w))\nabla_w \hat{q}(S,A,w) \Big] \end{aligned}$$
where, without loss of generality, the coefficient 2 before $\alpha_k$ can be dropped. Replacing the expectation with a single sample $(s,a)$ gives the stochastic update
$$w_{k+1} = w_k + \alpha_k (q_\pi(s,a) - \hat{q}(s,a,w))\nabla_w\hat{q}(s,a,w)$$

In the spirit of Sarsa, we use $r+\gamma \hat{q}(s^\prime,a^\prime,w)$ to approximate the true action value $q_\pi(s,a)$. Hence we have
$$w_{k+1} = w_k + \alpha_k (r+\gamma \hat{q}(s^\prime,a^\prime,w) - \hat{q}(s,a,w))\nabla_w\hat{q}(s,a,w)$$
Writing the sampled data with time indices as $(s_t,a_t,r_{t+1},s_{t+1},a_{t+1})$, the update becomes
$$w_{t+1} = w_t + \alpha_t \Big[ \textcolor{red}{r_{t+1}+\gamma \hat{q}(s_{t+1},a_{t+1},w_t)} - \hat{q}(s_t,a_t,w_t) \Big]\nabla_w \hat{q}(s_t,a_t,w_t)$$
Pseudocode:

*(figure omitted)*
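As before, here is a minimal sketch of this update (not the book's pseudocode) with a linear approximator $\hat{q}(s,a,w)=\phi(s,a)^T w$; the feature map and step size are illustrative placeholders. The next action $a_{t+1}$ is the one actually selected by the current policy (e.g., $\epsilon$-greedy with respect to $\hat{q}$).

```python
# Sketch of the Sarsa update with function approximation.
# A linear approximator q_hat(s, a, w) = phi(s, a)^T w is assumed.
import numpy as np

def sarsa_update(w, phi_sa, phi_sa_next, r, alpha=0.01, gamma=0.9, terminal=False):
    """One semi-gradient Sarsa step; phi_sa_next encodes the actually chosen (s', a')."""
    q_sa = phi_sa @ w
    q_next = 0.0 if terminal else phi_sa_next @ w
    target = r + gamma * q_next                 # r_{t+1} + gamma * q_hat(s_{t+1}, a_{t+1}, w_t)
    return w + alpha * (target - q_sa) * phi_sa # phi_sa is grad_w q_hat(s_t, a_t, w)
```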

8.3 Optimal action value: Q-learning with function approximation

​ Similar to Sarsa, tabular Q-learning can also be extended to the case of value function approximation.

In the spirit of Q-learning, the update rule is

$$w_{t+1} = w_t + \alpha_t \Big[ \textcolor{red}{r_{t+1}+\gamma \max_{a\in\mathcal{A}(s_{t+1})} \hat{q}(s_{t+1},a,w_t)} - \hat{q}(s_t,a_t,w_t) \Big]\nabla_w \hat{q}(s_t,a_t,w_t)$$
which is the same as Sarsa except that $\hat{q}(s_{t+1},a_{t+1},w_t)$ is replaced by $\max_{a\in\mathcal{A}(s_{t+1})} \hat{q}(s_{t+1},a,w_t)$.

Pseudocode:

*(figure omitted)*
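A minimal sketch of the difference from Sarsa (again with an illustrative linear approximator and placeholder features): the target now maximizes over all candidate next actions rather than using the action actually selected.

```python
# Sketch of the Q-learning update with function approximation.
# A linear approximator q_hat(s, a, w) = phi(s, a)^T w is assumed.
import numpy as np

def q_learning_update(w, phi_sa, phi_s_next_all, r, alpha=0.01, gamma=0.9, terminal=False):
    """phi_s_next_all holds one feature vector per action a in A(s_{t+1})."""
    q_sa = phi_sa @ w
    q_next_max = 0.0 if terminal else max(phi @ w for phi in phi_s_next_all)
    target = r + gamma * q_next_max             # r_{t+1} + gamma * max_a q_hat(s_{t+1}, a, w_t)
    return w + alpha * (target - q_sa) * phi_sa
```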

8.4 Deep Q-learning (DQN)

​ We can introduce deep neural networks into Q-learning to obtain deep Q-learning or deep Q-network (DQN).

Mathematically, deep Q-learning aims to minimize the objective function
$$\textcolor{red}{J(w) = \mathbb{E} \Big[ \Big( R+\gamma \max_{a\in\mathcal{A}(S^\prime)} \hat{q}(S^\prime, a, w) - \hat{q}(S,A,w) \Big)^2 \Big]}$$
where $(S,A,R,S^\prime)$ are random variables representing a state, the action taken at that state, the immediate reward, and the next state. This objective function can be viewed as the Bellman optimality error. That is because
$$q(s,a) = \mathbb{E} \Big[ R_{t+1} + \gamma \max_{a\in\mathcal{A}(S_{t+1})} q(S_{t+1},a) \,\Big|\, S_t = s, A_t= a \Big]$$
is the Bellman optimality equation in terms of action values.

Then we can use stochastic gradients to minimize the objective function. However, note that the parameter $w$ appears not only in $\hat{q}(S,A,w)$ but also in $y\triangleq R+\gamma \max_{a\in\mathcal{A}(S^\prime)}\hat{q}(S^\prime,a,w)$. For the sake of simplicity, we can assume that the $w$ in $y$ is fixed (at least for a while) when we calculate the gradient. To do that, we introduce two networks: a *main network* representing $\hat{q}(s,a,w)$ and a *target network* $\hat{q}(s,a,w_T)$. The objective function in this case degenerates to
$$J(w) = \mathbb{E} \Big[ \Big( R+\gamma \max_{a\in\mathcal{A}(S^\prime)} \hat{q}(S^\prime, a, w_T) - \hat{q}(S,A,w) \Big)^2 \Big]$$
where $w_T$ is the target network parameter and $w$ is the main network parameter. When $w_T$ is fixed, the gradient of $J(w)$ is
$$\nabla_{w}J = -2\,\mathbb{E} \Big[ \Big( R + \gamma \max_{a\in\mathcal{A}(S^\prime)} \hat{q}(S^\prime,a,w_T) - \hat{q}(S,A,w) \Big) \nabla_{w}\hat{q}(S,A,w) \Big]$$
When the target network is fixed, using stochastic gradient descent (SGD) we obtain
$$\textcolor{red}{w_{t+1} = w_{t} + \alpha_t \Big[ r_{t+1} + \gamma \max_{a\in\mathcal{A}(s_{t+1})} \hat{q}(s_{t+1},a,w_T) - \hat{q}(s_t,a_t,w) \Big] \nabla_w \hat{q}(s_t,a_t,w)}$$
Two implementation techniques should be noted.
One technique is experience replay. That is, after we have collected some experience samples, we do not use them in the order they were collected. Instead, we store them in a data set, called the replay buffer, and draw batches of samples randomly to train the neural network. In particular, let $(s,a,r,s^\prime)$ be an experience sample and $\mathcal{B}\triangleq \{(s,a,r,s^\prime)\}$ be the replay buffer. Every time we train the neural network, we draw a mini-batch of random samples from the replay buffer. The draw of samples, called experience replay, should follow a uniform distribution.
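A minimal sketch of such a buffer (plain Python; the capacity and batch size are illustrative, and `done` flags are added here for episode termination):

```python
# Minimal replay-buffer sketch: store (s, a, r, s', done) tuples and
# sample mini-batches uniformly at random.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity: int = 10000):
        self.buffer = deque(maxlen=capacity)   # old samples are discarded when full

    def push(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size: int = 64):
        batch = random.sample(self.buffer, batch_size)   # uniform draw without replacement
        s, a, r, s_next, done = zip(*batch)
        return s, a, r, s_next, done

    def __len__(self):
        return len(self.buffer)
```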

Another technique is to use two networks: a main network parameterized by $w$ and a target network parameterized by $w_T$. The two sets of parameters are initialized to be the same. The target output is $y_T \triangleq r + \gamma \max_{a\in\mathcal{A}(s^\prime)}\hat{q}(s^\prime,a,w_T)$. Then, we directly minimize the TD error, i.e., the loss function $(y_T-\hat{q}(s,a,w))^2$, over the mini-batch $\{(s,a,y_T)\}$ instead of over a single sample, to improve efficiency and stability.

The parameters of the main network are updated in every iteration. By contrast, the target network is set equal to the main network only once every certain number of iterations, to meet the assumption that $w_T$ is fixed when calculating the gradient.
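Putting the pieces together, here is a minimal sketch of one DQN training step (PyTorch assumed; `q_main`, `q_target`, the optimizer, and the batch layout are illustrative, not the original pseudocode):

```python
# Sketch of one DQN training step: the TD target uses the frozen target network,
# and only the main network is updated by gradient descent on the batch loss.
import torch
import torch.nn.functional as F

def dqn_step(q_main, q_target, optimizer, batch, gamma=0.99):
    s, a, r, s_next, done = batch                           # tensors from the replay buffer
    q_sa = q_main(s).gather(1, a.unsqueeze(1)).squeeze(1)   # q_hat(s, a, w)
    with torch.no_grad():                                   # w_T is treated as fixed
        q_next_max = q_target(s_next).max(dim=1).values
        y_T = r + gamma * (1.0 - done) * q_next_max         # target output y_T
    loss = F.mse_loss(q_sa, y_T)                            # (y_T - q_hat(s, a, w))^2 over the batch
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Every certain number of iterations, synchronize the target network:
# q_target.load_state_dict(q_main.state_dict())
```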

Pseudocode:

*(figure omitted)*

Reference

Shiyu Zhao's course, *Mathematical Foundation of Reinforcement Learning*.
