Multi-Armed Bandit Problem Example

Learn how to implement two basic but powerful strategies to solve multi-armed bandit problems with MATLAB.
Updated: 2019/1/10

Casino slot machines have a playful nickname - "one-armed bandit" - because of the single lever they have and our tendency to lose money when we play them.
Ordinary slot machines have only one lever. What if you had multiple levers to pull, each with a different payout? That is a multi-armed bandit. You don't know which lever has the highest payout - you just have to try different levers to see which one works best. But for how long? If you keep pulling a low-payout lever, you forgo rewards, yet you won't know which lever is good until you have tried each one a sufficient number of times.
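
To make the setup concrete, here is a minimal MATLAB sketch of a simulated bandit with a few levers, each paying out with a hidden probability. The payout probabilities are made-up values for illustration and are not taken from this submission.

% Hypothetical 4-armed bandit: each lever pays out 1 with a hidden probability
probTrue = [0.3 0.5 0.7 0.4];   % true payout probabilities (unknown to the player)
nArms    = numel(probTrue);
nPulls   = 100;                 % pulls per lever, just to estimate the payouts

avgReward = zeros(1, nArms);
for k = 1:nArms
    rewards      = rand(1, nPulls) < probTrue(k);   % Bernoulli reward: 1 = win, 0 = loss
    avgReward(k) = mean(rewards);
end
disp(avgReward)   % sample averages approach probTrue as nPulls grows

Pulling every lever the same number of times like this is pure exploration; bandit strategies try to be smarter about where the pulls go.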

Bandit algorithms are related to the field of machine learning called reinforcement learning. Rather than learning from explicit training data, or discovering patterns in static data, reinforcement learning discovers the best option through trial and error with live examples. Multi-armed bandit problems focus on the exploration vs. exploitation trade-off: how much of your resources should be spent on trial and error versus maximizing the benefit of what you already know. There are many different formulations of bandit problems and strategies to solve them.
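
One simple way to manage that trade-off is the epsilon-greedy strategy: with a small probability epsilon, pull a random lever (explore); otherwise pull the lever with the best estimated payout so far (exploit). The sketch below is a generic illustration of this idea, using assumed payout probabilities and an assumed epsilon of 0.1; it is not necessarily the same as the two strategies implemented in this submission.

% Epsilon-greedy on a hypothetical bandit (assumed values, for illustration)
probTrue = [0.3 0.5 0.7 0.4];   % hidden payout probabilities
nArms    = numel(probTrue);
nSteps   = 1000;                % total number of pulls
epsilon  = 0.1;                 % fraction of pulls spent exploring

counts   = zeros(1, nArms);     % how many times each lever has been pulled
estimate = zeros(1, nArms);     % running estimate of each lever's payout

for t = 1:nSteps
    if rand() < epsilon
        k = randi(nArms);               % explore: pick a random lever
    else
        [~, k] = max(estimate);         % exploit: pick the best lever so far
    end
    r = double(rand() < probTrue(k));   % observed reward, 0 or 1
    counts(k)   = counts(k) + 1;
    estimate(k) = estimate(k) + (r - estimate(k)) / counts(k);   % incremental mean
end

disp(estimate)   % close to probTrue for frequently pulled levers
disp(counts)     % most pulls should end up on the highest-payout lever

A larger epsilon explores more and learns the payouts faster but wastes more pulls on bad levers; a smaller epsilon exploits sooner at the risk of locking onto the wrong lever.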

Cite As

Toshiaki Takeuchi (2024). Multi-Armed Bandit Problem Example (https://www.mathworks.com/matlabcentral/fileexchange/69598-multi-armed-bandit-problem-example), MATLAB Central File Exchange. Retrieved.

MATLAB Release Compatibility
Created with R2018b
Compatible with R2018b and later releases
Platform Compatibility
Windows macOS Linux
Categories
Find more on Filter Banks in Help Center and MATLAB Answers

Version History
1.0.1 - Added an image
1.0.0