- MathWorks Reinforcement Learning Toolbox. The RL Agent block is used to train a new agent or load an existing one.
- Quanser QLabs Virtual QUBE-Servo 2 software. Only needed to run the agent using the QUBE-Servo 2 Virtual Twin. If you don't have QLabs, you can sign up for a free trial.
- Quanser QUARC Real-Time Control Software and the MathWorks Deep Learning Toolbox are required to run the agent using the QUBE-Servo 2 hardware. If you don't have the QUARC software, you can sign up for a free trial.
- Select whether you want to load a previously trained agent or train a new one from the drop-down box below. It is recommended that you start by testing the existing agent first.
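The load-or-train selection can be sketched in MATLAB as follows. This is a hedged illustration only: the flag, file name, and the `agent`/`env` variables are assumptions, not the names used in the actual script.

```matlab
% Sketch: choose between loading a pre-trained agent and training a new one.
% Variable and file names below are assumed for illustration.
trainNewAgent = false;   % set true to train a new agent from scratch

if trainNewAgent
    % Train against the Simulink environment (agent and env are assumed
    % to have been created earlier in the script).
    trainOpts = rlTrainingOptions( ...
        'MaxEpisodes', 500, ...
        'StopTrainingCriteria', 'AverageReward');
    trainingStats = train(agent, env, trainOpts);
else
    % Load the previously trained agent shipped with the example
    % (assumed MAT-file name).
    load('qube2_bal_rl_agent.mat', 'agent');
end
```

Starting with the pre-trained agent, as recommended above, lets you verify the model and hardware setup before committing to a lengthy training run.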
- Run each section of the script in sequence up until the Simulate the QUBE-Servo 2 IP RL section.
- Simulate the RL balance control on the Simulink model shown above as described in that section (e.g., change the initial vertical angle).
- Go to the Running RL on the Virtual QUBE-Servo 2 IP Experiment section if you want to run the RL balance control agent on the QUBE-Servo 2 virtual twin.
- Go to the Running RL on the QUBE-Servo 2 IP Hardware section if you want to run RL balance control on the QUBE-Servo 2 hardware.
- Open the s_qube2_bal_rl Simulink model.
- Run the Simulink model.
- Try changing the initial angle of the inverted pendulum to somewhere within +/- 10 deg of the inverted position. For example, set the IC0 block to 0.96*ic_alpha0 to have the initial angle start at 172.8 deg, which is 7.2 deg away from the inverted position.
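The initial-angle change above can also be made from the MATLAB command line instead of editing the block dialog by hand. This is a sketch under assumptions: the block path `s_qube2_bal_rl/IC0`, the parameter name `Value`, and `ic_alpha0 = pi` are inferred from the steps above, not confirmed from the shipped model.

```matlab
% Sketch: set the pendulum's initial-condition block programmatically.
% Block path and parameter name are assumptions based on the model name.
ic_alpha0 = pi;   % inverted reference angle, 180 deg (assumed definition)

open_system('s_qube2_bal_rl');
set_param('s_qube2_bal_rl/IC0', 'Value', '0.96*ic_alpha0');
% 0.96 * 180 deg = 172.8 deg, i.e. 7.2 deg away from the inverted position
```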
- Open the Quanser Interactive Labs (QLabs) software and make sure the Pendulum Workspace in the QUBE 2 - Pendulum menu is loaded as shown above. For more information about running the software, please go to the QLabs support page.
- Open the Simulink model that interacts with the Virtual QUBE-Servo 2 Pendulum, as shown below.
- Run the Simulink model.
- Click on the “Lift pendulum” button in the top-right corner to bring the pendulum up to the inverted position. The RL balance control will engage once the pendulum comes within +/- 10 deg of the vertical position.
- Connect the QUBE-Servo 2 to the PC/laptop USB port.
- Make sure the QUBE-Servo 2 is powered and its Power LED is lit.
- Open the Simulink model that interacts with the QUBE-Servo 2 given below.
- Click on the "Monitor and Tune" button in the Simulink toolbar to generate and run the QUARC controller.
- Once the LED is green, manually bring up the pendulum to the vertical position. Immediately release the pendulum once the controller engages (i.e., when it's within +/- 10 deg of vertical).
- Click on the Stop button to stop running the QUARC controller.
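The +/- 10 deg engagement condition mentioned in the steps above can be sketched as a small MATLAB function. This is a plausible reconstruction for illustration, not the shipped implementation; the function name and the convention that alpha = pi rad is the inverted position are assumptions.

```matlab
function engaged = isBalanceEngaged(alpha)
% Sketch of the engagement check: returns true when the pendulum angle
% alpha (rad) is within +/- 10 deg of the inverted position (alpha = pi).
% Wrapping the error to (-pi, pi] keeps the test valid after full rotations.
alphaErr = mod(alpha - pi, 2*pi);
if alphaErr > pi
    alphaErr = alphaErr - 2*pi;
end
engaged = abs(alphaErr) <= deg2rad(10);
end
```

For example, `isBalanceEngaged(deg2rad(172.8))` is true (7.2 deg from inverted), while `isBalanceEngaged(0)` is false (hanging position).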
- Using the Reinforcement Learning Toolbox™ to Balance an Inverted Pendulum Quanser blog: https://www.quanser.com/blog/using-the-reinforcement-learning-toolbox-to-balance-an-inverted-pendulum/
- MathWorks® Reinforcement Learning eBook: https://www.mathworks.com/campaigns/offers/reinforcement-learning-with-matlab-ebook.html
- Reinforcement Learning Tech Talks videos by Brian Douglas: https://www.mathworks.com/videos/series/reinforcement-learning.html
- MathWorks® Reinforcement Learning Toolbox™ product page: https://www.mathworks.com/products/reinforcement-learning.html
- Reinforcement Learning: training and deploying a policy to control inverted pendulum with QUBE-Servo 2: https://www.mathworks.com/matlabcentral/fileexchange/99364-reinforcement-learning-inverted-pendulum-with-qube-servo2?s_tid=srchtitle. Shows how to implement a swing-up and balance RL routine on the QUBE-Servo 2 system using a Raspberry Pi embedded target.
Quanser (2024). Quanser QUBE-Servo 2 Pendulum Control Reinforcement Learning (https://www.mathworks.com/matlabcentral/fileexchange/106935-quanser-qube-servo-2-pendulum-control-reinforcement-learning), MATLAB Central File Exchange. Retrieved .
Platform Compatibility: Windows, macOS, Linux