How to implement reinforcement learning using code generation
I want to implement the reinforcement learning block in dSPACE using code generation, but Simulink pops up the error 'The 'AgentWrapper' class does not support code generation'. Is there a way to solve this?
Alternatively, is it possible to extract the neural network from the reinforcement learning agent and import it into Deep Learning Toolbox?
Thank you very much. Any suggestions are appreciated.
Accepted Answer
Kishen Mahadevan
1 March 2021
As of R2020b, the RL Agent block does not support code generation (we are working on it) and is currently only used for training a reinforcement learning agent in Simulink.
However, in R2020b, native Simulink blocks such as 'Image Classifier' and 'Predict' were introduced in Deep Learning Toolbox, and the MATLAB Function block was enhanced to model deep learning networks in Simulink. These blocks allow using pre-trained networks, including reinforcement learning policies, in Simulink to perform inference.
Also, in R2021a, plain C code generation for deep learning networks is supported (with no dependence on third-party libraries such as oneDNN), which enables code generation from the native Deep Learning blocks and the enhanced MATLAB Function block mentioned above.
Using these features, steps you could follow in R2021a are:
1) Replace the existing RL Agent block with either the Predict block or a MATLAB Function block, and pull your trained agent into Simulink
2) Use the plain C code generation feature to generate code for your reinforcement learning agent
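Step 2 above can be sketched as follows. This is a hedged example, not a definitive recipe: the function name 'evaluatePolicy' assumes you generated it with generatePolicyFunction, and 'obsDim' is a placeholder for your observation size.

```matlab
% Sketch: plain C code generation for the generated policy function
% (R2021a or later). 'evaluatePolicy' and obsDim are placeholders
% for your own setup.
obsDim = 4;                                 % example observation size
cfg = coder.config('lib');                  % generate a static library
cfg.TargetLang = 'C';
% 'none' selects plain C generation with no third-party DL libraries
cfg.DeepLearningConfig = coder.DeepLearningConfig('none');
codegen -config cfg evaluatePolicy -args {zeros(obsDim,1)}
```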


Note:
To create a function that can be used within the MATLAB Function block to evaluate the learned policy (pre-trained agent), or to create 'agentData' that can be imported into the 'Predict' block, please refer to the 'generatePolicyFunction' API.
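As a minimal sketch of that API (assuming 'agent' is your trained agent object in the workspace), a single call produces both artifacts:

```matlab
% Sketch: generate the policy evaluation function and the MAT-file
% holding the policy network. 'agent' is assumed to be a trained
% agent object already in the workspace.
generatePolicyFunction(agent);   % creates evaluatePolicy.m and agentData.mat

% The generated function maps an observation to an action, e.g.:
% action = evaluatePolicy(observation);
```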
7 Comments
Thank you very much. When using the Predict block for reinforcement learning, what is the input to the block? As we know, the RL Agent block's action is determined by the observations and the reward calculation.
The input to the 'Predict' block is your observations, as it performs inference on the trained policy using the observations. The 'reward' input is not required in this case, as it is used only during RL training.
Note: To load the pre-trained RL agent into Simulink using the 'Predict' block, use the 'Load Network from MAT-file' option in the Block Parameters dialog and select the 'agentData.mat' file created by 'generatePolicyFunction'.
Thank you for the reply.
- I did exactly as you said, but a dimension problem pops up: 'Invalid setting for input port dimensions', because the total numbers of input and output elements are not the same. It worked well with the RL Agent block, and I simply replaced that block with the Predict block. It seems the output of Predict is the whole set of action elements rather than the single element that is the actual output of the RL agent.
- As for the other way, I implemented the RL agent with a user-defined function, but at the end of the simulation an error pops up: 'Unable to save operating point because the function block contains state variables that are not compatible with simulation state save and restore'. The function named in the error is actually the interface function you mentioned above.
- In code generation, the error is 'Saving the operating point is only supported for models in Normal or Accelerator mode, and for Model blocks in Normal or Accelerator mode'. Is this caused by the error in point 2?
How should I solve this? Thank you very much for your help.
Hello,
The issue mentioned in point 1 is very similar to this MATLAB Answers post. Please refer to it for more information.
Based on that MATLAB Answers post, using the MATLAB Function block in place of the Predict block resolved the issue. Since you are facing issues with the MATLAB Function block setup as well, we might need to take a deeper look into the model.
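For reference, a MATLAB Function block body for a discrete-action agent might look like the following sketch. The file name 'agentData.mat', the assumption that it holds a single network, and the action set are all placeholders; adjust them to match what generatePolicyFunction produced for your agent.

```matlab
function action = computeAction(observation)
%#codegen
% Sketch of a MATLAB Function block body that evaluates a pre-trained
% discrete-action policy (e.g., a DQN critic). File names and the
% action set below are hypothetical; adapt them to your setup.
persistent net
if isempty(net)
    % Load the network in a codegen-compatible way
    net = coder.loadDeepLearningNetwork('agentData.mat');
end
qValues = predict(net, observation);   % one Q-value per discrete action
[~, idx] = max(qValues);               % pick the greedy action
actionSet = [-1 0 1];                  % hypothetical discrete action set
action = actionSet(idx);
end
```

Selecting the argmax here is also one way to address the dimension mismatch in point 1: the network outputs the full vector of Q-values, while the downstream model expects a single action value.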
@Kishen Mahadevan are you still working on code generation support for the RL Agent block? If so, is it only for inference, or also for training?
Mayank
12 November 2025
I am facing an issue in generating C code. Can someone help?

Mayank
12 November 2025
This is for an OPAL-RT based HIL setup using an RL agent.
More Answers (0)