ali farid
Followers: 0 Following: 0
Statistics
MATLAB Answers
10 Questions
0 Answers
Rank
of 147,821
Contributions
0 Problems
0 Solutions
Score
0
Badges
0
Contributions
0 Posts
Contributions
0 Public Channels
Average Rating
Contributions
0 Highlights
Average Number of Likes
Feeds
Questions
Reinforcement Learning: competitive or collaborative options in MARL Matlab
Hello, I am trying to set up three explorer agents to explore an unknown area in a collaborative or competitive manner. I am won...
about 1 month ago | 1 answer | 0
Question
Problem with bus input of RL agent
I used a block diagram of an RL agent in Simulink that was used in a Matlab example, but I modified the inputs of the RL agent and I...
5 months ago | 0 answers | 0
Question
Cannot propagate non-bus signal to block because the block has a bus object specified.
I have a Simulink model whose observation was only an image, and I added two other vectors to the observation in the RL toolbox. Since...
5 months ago | 1 answer | 0
Question
Observation specification must be scalar if not created by bus2RLSpec.
I am using an RL system that was initially designed for one type of observation, which is an image. Recently I added two scalar observ...
6 months ago | 1 answer | 1
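The spec-related questions above generally hinge on how the observation info is declared. A minimal sketch, assuming a mixed image-plus-scalar observation space (the dimensions and channel names here are illustrative, not taken from the original questions), using `rlNumericSpec` from the Reinforcement Learning Toolbox:

```matlab
% Sketch: one image channel plus two scalar channels declared as an
% array of observation specs (sizes and names are assumptions).
obsInfo = [rlNumericSpec([50 50 1]), ...  % image observation
           rlNumericSpec([1 1]), ...      % first scalar observation
           rlNumericSpec([1 1])];         % second scalar observation
obsInfo(1).Name = 'mapImage';
obsInfo(2).Name = 'windSpeed';
obsInfo(3).Name = 'heading';
```

Each input layer of the actor and critic networks must then match one of these spec entries in both size and name, which is typically where the "must be scalar" and size-mismatch errors originate.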
Question
A problem with RL toolbox: wrong size of inputs of actor network.
I have a problem with getSize, which shows a wrong size: my input is a scalar with size [1 1], but getSize returns 2. I am usi...
6 months ago | 0 answers | 0
Question
Reinforcement Learning Error with two scalar inputs
I have a strange error from a critic network that has 3 inputs: an image and two scalars. I see the following error: Error ...
6 months ago | 0 answers | 0
Question
Add scalar inputs to the actor network
I have a CNN-based PPO actor-critic, and it is working fine, but now I am trying to add three scalar values to the actor network...
6 months ago | 1 answer | 0
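Adding scalar inputs alongside a CNN path is usually done by merging two input branches in a layer graph. A hedged sketch (layer sizes, names, and the action count are illustrative assumptions, not the asker's actual network):

```matlab
% Sketch: merge a CNN image path with a scalar feature path before the
% actor's output layers. All sizes and names below are assumptions.
numActions = 4;  % illustrative discrete action count
imgPath = [
    imageInputLayer([50 50 1], 'Name', 'mapImage', 'Normalization', 'none')
    convolution2dLayer(8, 16, 'Name', 'conv1')
    reluLayer('Name', 'relu1')
    fullyConnectedLayer(64, 'Name', 'fcImg')];
scalarPath = featureInputLayer(3, 'Name', 'scalars');  % three scalar values
commonPath = [
    concatenationLayer(1, 2, 'Name', 'concat')  % join the two branches
    fullyConnectedLayer(numActions, 'Name', 'fcOut')
    softmaxLayer('Name', 'actionProb')];
lgraph = layerGraph(imgPath);
lgraph = addLayers(lgraph, scalarPath);
lgraph = addLayers(lgraph, commonPath);
lgraph = connectLayers(lgraph, 'fcImg', 'concat/in1');
lgraph = connectLayers(lgraph, 'scalars', 'concat/in2');
```

The input layer names here would have to match the names of the corresponding observation specs for the RL toolbox to wire the channels correctly.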
Question
Design an actor critic network for non-image inputs
I have a robot with 3 inputs: wind, the current location, and the current action. I use these three inputs to predict the...
7 months ago | 1 answer | 0
Question
I see a zero mean reward for the first agent in multi-agent RL Toolbox
Hello, I have extended Matlab's PPO coverage path planning example to 5 agents. I can see now that always, I...
11 months ago | 0 answers | 0
Question
Replace RL type (PPO with DDPG) in a Matlab example
There is a Matlab example about coverage path planning using PPO reinforcement learning at the following link: https://www.math...
about 1 year ago | 1 answer | 0