What Is Half Precision?

Video length: 2:14

Description

This video introduces half precision, or float16, a relatively new floating-point data type. Half precision can reduce memory usage by half and has become very popular for accelerating deep learning training and inference. We also look at its benefits and tradeoffs relative to the traditional 32-bit single-precision and 64-bit double-precision data types in control applications.

Full Transcript

Half precision, or float16, is a relatively new floating-point data type that uses 16 bits, unlike the traditional 32-bit single-precision and 64-bit double-precision data types.

So, when you declare a variable as half in MATLAB, say the number pi, you may notice some loss of precision compared to the single or double representation, as we see here.
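
As a quick illustration, here is a minimal sketch of that comparison, assuming the half type provided by Fixed-Point Designer is on the path:

    % Compare pi at each precision; the comments show the nearest
    % representable value for each type.
    format long
    double(pi)   % 3.141592653589793
    single(pi)   % 3.1415927
    half(pi)     % 3.140625 -- only about three decimal digits survive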

The difference comes from the limited number of bits used by half precision. We only have 10 bits of precision and 5 bits for the exponent, as opposed to 23 bits of precision and 8 bits for the exponent in single. Hence eps is much larger, and the dynamic range is limited.
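
The numbers behind that statement, as a sketch (again assuming Fixed-Point Designer's half type, and that eps accepts a half input):

    % half: 1 sign bit, 5 exponent bits, 10 fraction bits (IEEE 754 binary16)
    eps(half(1))         % 2^-10 = 9.7656e-04, vs eps('single') = 1.1921e-07
    (2 - 2^-10) * 2^15   % 65504, the largest finite half value
    2^-14                % 6.1035e-05, the smallest normal half value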

So why is it important? Half's recent popularity comes from its usefulness in accelerating deep learning training and inference, mainly on NVIDIA GPUs, as highlighted in the articles here. In addition, Intel and ARM platforms also support half to accelerate computations.

The obvious benefit of using half precision is reducing the memory footprint and the data bandwidth by 50%, as we see here for ResNet-50. In addition, hardware vendors provide hardware acceleration for computations in half, such as the CUDA intrinsics in the case of NVIDIA GPUs.
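
To see the 50% reduction directly, a minimal sketch (the random array here is a stand-in for network weights, not actual ResNet-50 data):

    w = rand(1000, 'single');   % 1000x1000 single weights: 4,000,000 bytes
    h = half(w);                % the same values stored in 16 bits each
    whos w h                    % h reports half the bytes of w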

We are seeing traditional applications such as powertrain control systems do the same, where you may have data in the form of lookup tables, as shown in a simple illustration here. By using half as the storage type, you are able to reduce the memory footprint of this 2D lookup table by 4x relative to double.
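
A minimal sketch of that storage saving (the breakpoints and table data are made up for illustration):

    bp1 = 0:0.5:10;                      % first breakpoint vector
    bp2 = 0:1:20;                        % second breakpoint vector
    T   = rand(numel(bp1), numel(bp2));  % 2-D table data in double
    Th  = half(T);                       % half storage: 1/4 the bytes of T
    whos T Th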

However, it is important to understand the tradeoff of the limited precision and range of half precision. For instance, in the case of the deep learning network, the quantization error was on the order of 10^-4, and one has to analyze how this impacts the overall accuracy of the network.
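
One way to measure that quantization error, sketched on synthetic data rather than real network weights:

    x   = rand(1e5, 1);               % reference values in double
    err = abs(double(half(x)) - x);   % round-trip quantization error
    max(err)                          % on the order of 10^-4, as noted above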

This was a short introduction to half precision. Please refer to the links below to learn more about how to simulate and generate C/C++ or CUDA code from half in MATLAB and Simulink.

Related Products

  • MATLAB
  • Fixed-Point Designer
  • Simulink

Learn More

Half-Precision Data Type in MATLAB
Floating Point Numbers
Fixed-Point Arithmetic
Construct Fixed-Point Numeric Object
Optimizing Lookup Tables
Lookup Table Optimization (2:21)
What Is Quantization?

