👨🎓Personal homepage: 研学社的博客
💥💥💞💞Welcome to this blog ❤️❤️💥💥
🏆Blogger's strength: 🌞🌞🌞the posts aim to be carefully reasoned and clearly organized, for the reader's convenience.
⛳️Motto: on a journey of a hundred li, ninety li is only the halfway point.
📋📋📋The table of contents of this article is as follows: 🎁🎁🎁
Contents
💥1 Overview
📚2 Results
🎉3 References
🌈4 Matlab Code and Full Article
💥1 Overview
Literature source:
The Grey Wolf Optimizer (GWO) is an intelligent metaheuristic that mimics the leadership hierarchy and cooperative hunting behaviour of a pack of grey wolves. An enhanced variant, the Augmented GWO (AGWO), was recently proposed with greater exploration capability. However, in some cases AGWO performs poorly in the exploitation phase and stagnates at local optima. The Cuckoo Search (CS) algorithm is a nature-inspired optimization technique that mimics the distinctive brood-parasitic nesting strategy of cuckoos combined with Lévy flights. Both algorithms have strong search capabilities. In this work, a new hybrid metaheuristic called AGWOCS is proposed, which combines the strengths of the two metaheuristics to reach the global optimum efficiently. The proposed algorithm fuses the exploration ability of AGWO with the exploitation ability of Cuckoo Search (CS). To evaluate the performance of the proposed hybrid AGWOCS, 23 well-known benchmark functions are used. It is compared against six other existing metaheuristics: the standard GWO, Particle Swarm Optimization (PSO), Augmented GWO (AGWO), Enhanced GWO (EGWO), CS-hybridized GWO (CS-GWO), and the hybrid of PSO and GWO (GWOPSO). Simulation results show that AGWOCS outperforms the other metaheuristics in terms of convergence speed and avoidance of stagnation at local optima.
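For context on the CS component mentioned above, the following is a minimal sketch of the Lévy-flight step that Cuckoo Search typically draws via Mantegna's algorithm; the function name levy_step and the exponent value 1.5 are illustrative choices, not taken from the cited paper.

function step=levy_step(dim)
% Draw a heavy-tailed (Lévy-distributed) random step via Mantegna's algorithm
beta=1.5;                                   % Lévy exponent commonly used in CS
sigma=(gamma(1+beta)*sin(pi*beta/2)/(gamma((1+beta)/2)*beta*2^((beta-1)/2)))^(1/beta);
u=randn(1,dim)*sigma;                       % numerator samples of the Mantegna ratio
v=randn(1,dim);                             % denominator samples of the Mantegna ratio
step=u./abs(v).^(1/beta);                   % 1-by-dim vector of Lévy steps
end

A typical CS update then perturbs a nest toward the current best solution, e.g. newPos = pos + 0.01*levy_step(dim).*(pos - best), which yields mostly small local moves with occasional long jumps.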
📚2 Results
Partial code:
function [Alpha_score,Alpha_pos,Convergence_curve]=AGWO_CS(SearchAgents_no,Max_iter,lb,ub,dim,fobj)
% Initialize alpha and beta (AGWO tracks only the two best wolves)
Alpha_pos=zeros(1,dim);
Alpha_score=inf;  % change this to -inf for maximization problems
Beta_pos=zeros(1,dim);
Beta_score=inf;   % change this to -inf for maximization problems

% Initialize the positions of the search agents
Positions=initialization(SearchAgents_no,dim,ub,lb);

% Preallocate the fitness vector and the alpha/beta candidate matrices
fitness=zeros(1,SearchAgents_no);
X1=zeros(SearchAgents_no,dim);
X2=zeros(SearchAgents_no,dim);
Convergence_curve=zeros(1,Max_iter);
l=0;% Loop counter
% Main loop
while l<Max_iter
    for i=1:size(Positions,1)
        % Return the search agents that go beyond the boundaries of the search space
        Positions(i,:)=min(max(Positions(i,:),lb),ub);

        % Calculate the objective function for each search agent
        fitness(i)=fobj(Positions(i,:));

        % Update Alpha and Beta
        if fitness(i)<Alpha_score
            Alpha_score=fitness(i); % update alpha
            Alpha_pos=Positions(i,:);
        end
        if fitness(i)>Alpha_score && fitness(i)<Beta_score
            Beta_score=fitness(i); % update beta
            Beta_pos=Positions(i,:);
        end
    end

    a=2-(cos(rand())*l/Max_iter); % a decreases non-linearly from 2 to 1

    % Update the positions of the search agents, including the omegas
    for i=1:size(Positions,1)
        for j=1:size(Positions,2)
            r1=rand(); % r1 is a random number in [0,1]
            r2=rand(); % r2 is a random number in [0,1]

            A1=2*a*r1-a;                                 % Equation (4)
            C1=2*r2;                                     % Equation (5)
            D_alpha=abs(C1*Alpha_pos(j)-Positions(i,j)); % Equation (6) - part 1
            X1(i,j)=Alpha_pos(j)-A1*D_alpha;             % Equation (7) - part 1

            r1=rand();
            r2=rand();

            A2=2*a*r1-a;                                 % Equation (4)
            C2=2*r2;                                     % Equation (5)
            D_beta=abs(C2*Beta_pos(j)-Positions(i,j));   % Equation (6) - part 2
            X2(i,j)=Beta_pos(j)-A2*D_beta;               % Equation (7) - part 2
        end
    end

    %% Cuckoo Search takes control from AGWO here:
    % the key group members of AGWO are updated with Cuckoo Search's position-update formula
    [~,index]=min(fitness);
    best=Positions(index,:);
    X1=get_cuckoos(X1,best,lb,ub);
    X2=get_cuckoos(X2,best,lb,ub);

    %% Control is handed back to AGWO
    Positions=(X1+X2)/2; % Equation (8)

    l=l+1;
    Convergence_curve(l)=Alpha_score;
end
end
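The partial code above relies on two helper routines that are not shown: initialization (the standard population initializer distributed with GWO-style codes) and get_cuckoos. As an assumption based on the usual Yang–Deb Cuckoo Search formulation, not the authors' exact implementation, get_cuckoos could be sketched as follows, reusing the levy_step sketch given in the Overview above:

function nest=get_cuckoos(nest,best,lb,ub)
% Lévy-flight update of every nest toward the current best solution (illustrative sketch)
[n,dim]=size(nest);
for k=1:n
    stepsize=0.01*levy_step(dim).*(nest(k,:)-best);   % smaller steps close to the best nest
    nest(k,:)=nest(k,:)+stepsize.*randn(1,dim);       % random-walk perturbation
    nest(k,:)=min(max(nest(k,:),lb),ub);              % keep each nest inside the bounds
end
end

With these helpers on the MATLAB path, a call on a simple benchmark (the sphere function, with 30 agents, 500 iterations, bounds ±100 and 30 dimensions chosen purely for illustration) might look like:

fobj=@(x) sum(x.^2);                                  % example objective: sphere function
[bestScore,bestPos,curve]=AGWO_CS(30,500,-100,100,30,fobj);
semilogy(curve); xlabel('Iteration'); ylabel('Best score so far');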
🎉3 References
Some of the theory is sourced from the Internet; if there is any infringement, please contact us for removal.
S. Sharma, R. Kapoor and S. Dhiman, "A Novel Hybrid Metaheuristic Based on Augmented Grey Wolf Optimizer and Cuckoo Search for Global Optimization," 2021 2nd International Conference on Secure Cyber Computing and Communications (ICSCCC), 2021, pp. 376-381, doi: 10.1109/ICSCCC51823.2021.9478142.