
GridWorld RL continuous action

Andrea Fernandez Fernandez on 3 Mar 2024
Answered: Yatharth on 14 Mar 2024
Hello, is it possible to modify GridWorld to work with continuous actions, or would it take a lot of effort and knowledge?

Answers (1)

Yatharth on 14 Mar 2024
Hi Andrea,
Converting the GridWorld environment from discrete to continuous actions involves considerable effort and a deep understanding of reinforcement learning (RL) principles.
Here are key aspects to consider:
  1. Action representation: In the original GridWorld, actions are discrete and represent moves in the cardinal directions (e.g., North, South, East, West). For continuous actions, you would need to define a new representation, such as a vector encoding direction and magnitude (see the environment sketch after this list).
  2. State transitions: You would need a new method to compute the next state from the continuous action taken. You would also need to handle collisions and boundary conditions, which become more complicated than the simple obstacle checks used with discrete moves.
  3. Reward structure: The rewards may also need adjustment. In the discrete GridWorld, rewards are typically assigned for reaching particular cells. With continuous actions, rewards can instead be based on distance to the objective, allowing more granular shaping.
  4. Learning algorithm: Most RL algorithms traditionally used with GridWorld, such as Q-Learning, are designed for discrete action spaces. Continuous action spaces generally require different algorithms, such as Deep Deterministic Policy Gradient (DDPG), Proximal Policy Optimization (PPO), or Soft Actor-Critic (SAC), which are more complex and use neural networks to approximate the policy and/or value functions (see the agent sketch below).
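Points 1 through 3 can be made concrete with the Reinforcement Learning Toolbox. Below is a minimal sketch of a continuous "grid world" built with rlFunctionEnv rather than the shipped GridWorld class; the world size, goal location, start position, and reward shaping are illustrative assumptions I chose for the example:

% Minimal continuous grid-world sketch: an assumed 10-by-10 area with
% a goal at [9;9] and a start at [1;1] -- all illustrative choices.
% Observation: the agent's [x; y] position.
obsInfo = rlNumericSpec([2 1], 'LowerLimit', [0; 0], 'UpperLimit', [10; 10]);
% Action: a continuous [vx; vy] step, each component limited to [-1, 1].
actInfo = rlNumericSpec([2 1], 'LowerLimit', -1, 'UpperLimit', 1);

env = rlFunctionEnv(obsInfo, actInfo, @stepFcn, @resetFcn);

function [nextObs, reward, isDone, logged] = stepFcn(action, logged)
    goal = [9; 9];                                 % assumed goal position
    % State transition: add the continuous action to the position, then
    % clamp to the world boundaries (this replaces the discrete wall check).
    pos = min(max(logged.Pos + action, [0; 0]), [10; 10]);
    logged.Pos = pos;
    nextObs = pos;
    % Reward: negative distance to the goal, so moving closer pays.
    reward = -norm(pos - goal);
    % Terminate when the agent is within 0.5 units of the goal.
    isDone = norm(pos - goal) < 0.5;
    if isDone
        reward = reward + 10;                      % assumed terminal bonus
    end
end

function [initialObs, logged] = resetFcn()
    logged.Pos = [1; 1];                           % assumed start position
    initialObs = logged.Pos;
end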
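For point 4, once the action specification is continuous, a continuous-control agent can be created directly from the specifications. This sketch assumes the env, obsInfo, and actInfo from above and uses default agent creation, where rlDDPGAgent builds its own actor and critic networks; the training options are illustrative values to tune for a real problem:

% Default DDPG agent for the continuous action space defined above.
agent = rlDDPGAgent(obsInfo, actInfo);

% Illustrative training options; adjust episode and step budgets as needed.
trainOpts = rlTrainingOptions('MaxEpisodes', 500, 'MaxStepsPerEpisode', 200);
trainingStats = train(agent, env, trainOpts);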
I hope this helps.
