Need to add more agents in a larger area in the agent coverage example
This example has 3 agents in a grid of size 12x12.
How do I quickly run the same example with 10 agents and a grid size of 24x24?
Could you please help?
Accepted Answer
Nithin
26 Feb 2025
To run the same example with 10 agents on a 24x24 grid, you will need to manually update the example code and add agent blocks to the Simulink model.
Refer to the steps below to get a rough idea of the changes that need to be made:
1. Update the grid size to 24x24 and extend "obsMat" (the obstacle coordinate matrix) to fit the larger map, making sure agent start positions avoid the obstacles. Then rebuild the observation and action specs so that the actor and critic networks match the new observation size.
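A rough sketch of this step (the obstacle coordinates, the channel count in the observation spec, and the action set size below are placeholders; use the values defined in the original example script):
gridSize = [24 24];                          % was [12 12]
obsMat = [4 4; 4 20; 12 12; 20 4; 20 20];    % hypothetical obstacle cells -- choose your own layout
oinfo = rlNumericSpec([gridSize(1) gridSize(2) 4]);  % channel count assumed; keep the example's value
ainfo = rlFiniteSetSpec(1:5);                % 5 discrete moves assumed; keep the example's action spec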
2. Create an actor and critic for each of the 10 agents instead of 3:
% One actor and one critic per agent, all sharing the same network architecture
for idx = 1:10
    actor(idx) = rlDiscreteCategoricalActor(actorNetwork, oinfo, ainfo);
    critic(idx) = rlValueFunction(criticNetwork, oinfo);
end
agents = arrayfun(@(i) rlPPOAgent(actor(i), critic(i), agentOpts), 1:10);
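If "arrayfun" cannot concatenate the agent objects into a uniform array on your release, an explicit loop is equivalent:
for idx = 1:10
    agents(idx) = rlPPOAgent(actor(idx), critic(idx), agentOpts);
end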
3. Update the training options to accommodate 10 agents:
trainOpts = rlMultiAgentTrainingOptions( ...
    AgentGroups={1:10}, ...
    LearningStrategy="centralized", ...
    MaxEpisodes=500, ...
    MaxStepsPerEpisode=maxsteps, ...
    SimulationStorageType="none", ...
    ScoreAveragingWindowLength=20, ...
    StopTrainingCriteria="AverageReward", ...
    StopTrainingValue=200);
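Training is then launched the same way as in the original example. This one-liner assumes the environment object is named "env" (created in step 5 below), and the stopping value of 200 may need retuning for the larger map:
result = train(agents, env, trainOpts);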
4. Ensure the simulation and training functions accommodate all 10 agents, and update the reset function accordingly:
function in = resetMap(in, obsMat)
    gridSize = [24 24];
    % Sample 10 start positions until none of them lands on an obstacle
    isvalid = false;
    while ~isvalid
        rows = randperm(gridSize(1), 10);
        cols = randperm(gridSize(2), 10);
        s0 = [rows' cols'];
        if all(arrayfun(@(i) all(~all(s0(i,:) == obsMat, 2)), 1:10))
            isvalid = true;
        end
    end
    % Build the initial grid: mark obstacle cells with 1.0
    g0 = zeros(gridSize);
    for idx = 1:size(obsMat, 1)
        r = obsMat(idx, 1);
        c = obsMat(idx, 2);
        g0(r, c) = 1.0;
    end
    % Assign a different value to each agent's start cell
    for idx = 1:10
        g0(s0(idx, 1), s0(idx, 2)) = idx / 10;
    end
    in = setVariable(in, 's0', s0);
    in = setVariable(in, 'g0', g0);
end
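Note that the reset function must be attached to the environment object so it runs before every episode. Assuming the environment variable is named "env" (created in step 5 below), that looks like:
env.ResetFcn = @(in) resetMap(in, obsMat);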
5. Finally, navigate to the "rlAreaCoverage.slx" model and add agent blocks so the model contains 10 in total. Then, inside the "Environment" subsystem, update the "numRobots" value from 3 to 10 in the "stepEnvironment" and "observation" functions.
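After adding the blocks, the environment object can be recreated with all 10 agent block paths. This is a sketch: the block path pattern below is a placeholder that must match the names of the blocks you add, and "oinfo"/"ainfo" are the specs from step 1:
mdl = "rlAreaCoverage";
open_system(mdl)
agentBlks = mdl + "/Agent" + (1:10);    % hypothetical block paths -- match your model
obsInfos = repmat({oinfo}, 1, 10);      % one observation spec per agent
actInfos = repmat({ainfo}, 1, 10);      % one action spec per agent
env = rlSimulinkEnv(mdl, agentBlks, obsInfos, actInfos);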
Be sure to test and adjust any other model parameters that depend on the number of agents or the grid size.
For more information about the objects and functions used, refer to the following documentation: https://www.mathworks.com/help/reinforcement-learning/ug/train-3-agents-for-area-coverage.html?s_eid=PSM_15028#d126e38413
I hope this helps you understand the workflow better.