How to create a neural network for multiple agents with discrete and continuous actions?

Hi All,
I am trying to create an RL model with 2 agents in my environment.
Both agents' observations are continuous, but Agent 1's actions are discrete and Agent 2's actions are continuous. How do I specify them when building the actor networks?
% Create action specifications
numActions = 3;     % Agent 1: three binary pulse channels
numActions2 = 1;    % Agent 2: one continuous action
actionSizes = numActions + numActions2;
numActionCombinations = 8;   % 2^3 possible pulse combinations
S0 = [0 0 0];
S1 = [0 0 1];
S2 = [0 1 1];
S3 = [0 1 0];
S4 = [1 1 0];
S5 = [1 0 1];
S6 = [1 0 0];
S7 = [1 1 1];
actionInfo = rlFiniteSetSpec({S0,S1,S2,S3,S4,S5,S6,S7});
actionInfo2 = rlNumericSpec([numActions2 1],'LowerLimit',0.05,'UpperLimit',30);
actionInfo.Name = 'Pulse';
actionInfo2.Name = 'cRef';
net = [ featureInputLayer(obsSizes,'Normalization','none','Name','state')
        fullyConnectedLayer(actionSizes,'Name','fc')
        softmaxLayer('Name','actionProb') ];
actor = rlStochasticActorRepresentation(net,obsInfo,actionInfo,'Observation','state');

Accepted Answer

Emmanouil Tzorakoleftherakis on 26 April 2021
If you want to specify the neural network structures yourself, there is nothing special you need to do: simply create two actors and two critics, one for each action space, and you are all set.
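A minimal sketch of that two-actor approach, assuming a 4-element observation vector (obsSizes = 4 is a placeholder, as is the 64-unit hidden layer) and the action specs from the question. The continuous agent uses a DDPG-style deterministic actor here for simplicity; a Gaussian stochastic actor would work similarly:
% Shared observation spec (obsSizes = 4 is an assumed placeholder)
obsSizes = 4;
obsInfo = rlNumericSpec([obsSizes 1]);
obsInfo.Name = 'observations';
% Agent 1: stochastic actor over the 8 discrete pulse combinations.
% Note the final fully connected layer needs numActionCombinations outputs
% (one probability per element of the finite set), not actionSizes.
discreteNet = [ featureInputLayer(obsSizes,'Normalization','none','Name','state')
                fullyConnectedLayer(64,'Name','fc1')
                reluLayer('Name','relu1')
                fullyConnectedLayer(numActionCombinations,'Name','fcOut')
                softmaxLayer('Name','actionProb') ];
actor1 = rlStochasticActorRepresentation(discreteNet,obsInfo,actionInfo, ...
    'Observation','state');
% Agent 2: deterministic actor; tanh plus scaling maps the network output
% onto the bounded range [0.05, 30] of the continuous action spec.
continuousNet = [ featureInputLayer(obsSizes,'Normalization','none','Name','state')
                  fullyConnectedLayer(64,'Name','fc1')
                  reluLayer('Name','relu1')
                  fullyConnectedLayer(numActions2,'Name','fcOut')
                  tanhLayer('Name','tanh')
                  scalingLayer('Name','action', ...
                      'Scale',(actionInfo2.UpperLimit - actionInfo2.LowerLimit)/2, ...
                      'Bias', (actionInfo2.UpperLimit + actionInfo2.LowerLimit)/2) ];
actor2 = rlDeterministicActorRepresentation(continuousNet,obsInfo,actionInfo2, ...
    'Observation','state','Action','action');
Each actor is then paired with its own critic and wrapped in a matching agent type (for example, a PPO agent for the discrete actions and a DDPG agent for the continuous one).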
There is also the option to use the default agent feature, where the neural networks are created automatically for you from just the observation and action specifications. See an example here.
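With that route, something along these lines should work, assuming the specs defined above (the PPO/DDPG pairing and the hidden-layer size are illustrative choices, not the only options):
initOpts = rlAgentInitializationOptions('NumHiddenUnit',64);
agent1 = rlPPOAgent(obsInfo,actionInfo,initOpts);    % discrete action space
agent2 = rlDDPGAgent(obsInfo,actionInfo2,initOpts);  % continuous action space
The toolbox builds actor and critic networks sized to the given observation and action specs, so no layer definitions are needed at all.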
