How can I set the initial value of action space while using Simulink DDPG Agent?

lei wang on 20 Nov 2024 at 14:48
Answered: Shlok on 29 Nov 2024 at 6:52
I have a robot model in Simulink, and now I want to train the robot using a DDPG agent.
My question is: how can I set the initial value of the action space? I want the action to start from some specific value, such as zero.

Answer (1)

Shlok on 29 Nov 2024 at 6:52
Hi Lei,
DDPG agents are designed to operate in continuous action spaces. To create a custom continuous action space for a DDPG agent, you can use the “rlNumericSpec” function, which creates a specification object for a numeric action or observation channel. By setting its “LowerLimit” to zero, you can specify that the action range starts from zero.
Here is some sample code:
actionInfo = rlNumericSpec([1 1], 'LowerLimit', 0, 'UpperLimit', 1);
observationInfo = rlNumericSpec([3 1]);      % replace [3 1] with your robot's observation size
agtInitOpts = rlAgentInitializationOptions;  % default agent initialization options
agent = rlDDPGAgent(observationInfo, actionInfo, agtInitOpts);
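As a quick sanity check (a minimal sketch, assuming the [3 1] observation specification from the sample above), you can sample an action from the untrained agent and confirm it falls within the specified limits:
obs = {zeros(3,1)};            % observation wrapped in a cell array, matching observationInfo
act = getAction(agent, obs);   % returns the action in a cell array
disp(act{1})                   % should lie within [0, 1]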
To learn more about the “rlNumericSpec” function, you can refer to the MATLAB documentation for rlNumericSpec.
