Binary-Point Scaling vs. Slope-Bias Scaling

Arjun Singh on 5 Nov 2017
Answered: Kiran Kintali on 15 Dec 2022
What is the difference between Binary-Point scaling and Slope-Bias scaling? In what cases should I use Binary-Point scaling, and when should I use Slope-Bias scaling? Does one scaling method have any advantage over the other? What is the significance of the Slope and Bias in Slope-Bias scaling?

Accepted Answer

Andy Bartlett on 2 Aug 2022
Edited: Andy Bartlett on 2 Aug 2022
Slope and Bias Scaling Can Maximize Accuracy per Bit
The key motivation for Slope and Bias scaling is to maximize accuracy per bit.
If you have ever hooked up a sensor to the A2D converter of a microcontroller, you've likely used the equivalent of Slope and Bias scaling to squeeze every bit of accuracy possible from that A2D converter. Say you wanted to measure temperature in Kelvin from 273 to 283, and you had an 8-bit A2D converter. You used an analog circuit to scale the sensor voltage to get hex 0x00 out of the A2D at 273 Kelvin, and 0xFF out at 283 Kelvin. This would correspond to fixed-point Slope 10/255 and Bias 273 and data type numerictype( 0, 8, 10/255, 273) or equivalently fixdt( 0, 8, 10/255, 273).
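A minimal sketch of that mapping in MATLAB, assuming Fixed-Point Designer is available (storedInteger returns the raw integer behind a fi object):
nt = numerictype(0, 8, 10/255, 273);   % unsigned, 8 bits, slope 10/255, bias 273
Tmin = fi(273, nt);                    % 273 Kelvin maps to stored integer 0x00
Tmax = fi(283, nt);                    % 283 Kelvin maps to stored integer 0xFF
storedInteger(Tmin)                    % ans = 0
storedInteger(Tmax)                    % ans = 255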
The worst-case accuracy differences between Slope-Bias and Binary-Point can be dramatic when the values to be represented in the numeric type are all positive or all negative and are "bunched up far away from zero".
Consider the following example of picking an optimal type to represent the range from 130 to 200. To keep the plot easy to read, let's use just 3 bits for the example types. The worst-case quantization error for the Slope-Bias type is nearly 4 times smaller than for the Binary-Point type.
The Slope-Bias type fully utilizes all 8 representable values of the 3-bit type. So, Slope-Bias has 100% utilization of representable values.
In contrast, the Binary-Point type uses only 3 of its representable values for the range 130 to 200. The other 5 representable values fall outside this range and are never used. So, Binary-Point has only 37.5% utilization of representable values.
% compareOptimalSlopeBiasBinPt is a helper function that accompanies this
% answer; it constructs the optimal Slope-Bias and Binary-Point types for
% the given range and plots their representable values.
continuum_min = 130;
continuum_max = 200;
nBits = 3;
exactlyRepresentExtremes = false;
[ntSlopeBias,ntBinPt] = compareOptimalSlopeBiasBinPt(...
    continuum_min, continuum_max, nBits, exactlyRepresentExtremes);
h = gcf;
set(h,"Position",[1 1 800 800])   % enlarge the figure for readability
As an even more dramatic example, you can adjust the range to 195 to 200. The worst-case error for Slope-Bias would then be 25X better than Binary-Point. Representational utilization for Slope-Bias would remain 100%, but Binary-Point would drop to using only 1 of its 2^nBits representable values.
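The arithmetic behind that 25X figure, under the same assumptions as the sketch above: the Slope-Bias spacing is (200-195)/2^3 = 0.625, so its worst-case error is 0.3125. The Binary-Point slope must stay at 2^5 = 32 to reach 200, so 192 is the only representable value in the range, and the worst-case error is 8 (at the value 200). The ratio is 8/0.3125 = 25.6, roughly 25X.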
Even if the values are not "bunched up" on one side of zero, Slope-Bias can still give up to twice the accuracy per bit compared to Binary-Point scaling. For example, for the range -129 to +129 using 8 bits, the worst-case error for an optimal Slope-Bias type is 0.2539, but for Binary-Point the worst-case error is nearly double at 0.5.
Some Slope-Bias Disadvantages
Two key disadvantages of Slope-Bias types are narrower tool compatibility and efficiency penalties when the net scaling of an operation is not matched.
Compatibility:
Some MathWorks products such as HDL Coder and DSP System Toolbox do not support Slope-Bias data types.
Efficiency:
Care should always be taken when setting the relative fixed-point scaling of the inputs and outputs of a math operation. If you are not careful, you can destroy the accuracy of the operation.
There can also be a cost for handling net scaling adjustments. Ideally, you want every scaling adjustment to resolve to a no-op or, at most, a binary shift left or right.
When using only Binary-Point scaling, even if you are utterly careless, the net scaling operations will never be more than shifts. But, again, if you are careless you may lose all accuracy by right-shifting off all the precision or left-shifting into overflows. There is "no free lunch."
With Slope-Bias scaling, the net scaling operations can be no-ops, just shifts, or something more costly. Many operations, such as addition, casts, and even complicated lookup tables, can be made just as efficient with Slope-Bias as with Binary-Point if the net scaling is matched. Multiplications will be costlier if the Bias is non-zero. But if the Bias is zero, multiplications too can be made just as efficient as in the Binary-Point case.
Let's consider attenuating the intensity of an image stored in the widely used unorm8 data type, which has a Bias of zero but a non-power-of-two Slope of 1/255.
unorm8 = numerictype(0,8,1/255,0)   % unsigned, 8 bits, slope 1/255, bias 0
To attenuate, we will multiply by 0.8984375.
Vk = fi( 0.8984375, 0, 8, 8 )   % unsigned, 8-bit word, 8 fractional bits; stored integer Qk = 230
and, as is natural, we will store the image back in the original unorm8 data type.
So in symbolic math, the operation is
Vy = Vk * Vu
To figure out an efficient implementation of this math using embedded types, replace the real-world values with their fixed-point scaling equations involving the stored integer values:
Vu = (1/255) * Qu
Vk = 2^-8 * Qk = 2^-8 * 230
Vy = (1/255) * Qy
Substituting into the multiplication equation gives
(1/255) * Qy = 2^-8 * Qk * (1/255) * Qu
Solving for the output stored integer gives
Qy = ( 2^-8 * Qk * (1/255) * Qu ) / (1/255)
Simplifying to show the net scaling operations gives
Qy = ( Qk * Qu ) * 2^-8
Note: any time you see multiplication by a power of two with a negative exponent, it requires just a shift right. Any time you see multiplication by a power of two with a positive exponent, it requires just a shift left. OK, "just a shift" oversimplifies other details, but the concept is sound.
Efficient C code to implement this math would be
Qy = ( (uint16_t)Qk * Qu ) >> 8;   /* widen to 16 bits for the product, then shift */
So even though two of the variables had non-power-of-two slopes, in the net scaling all that was left was a simple shift, just like with Binary-Point.
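A quick numeric check of this result (a sketch using Fixed-Point Designer's fi and storedInteger; the input pixel value 0.5 is an arbitrary example):
unorm8 = numerictype(0, 8, 1/255, 0);
Vu = fi(0.5, unorm8);                        % example pixel; Qu = round(0.5*255) = 128
Vk = fi(0.8984375, 0, 8, 8);                 % gain; Qk = 230
Qu = storedInteger(Vu);
Qk = storedInteger(Vk);
Qy = bitshift(uint16(Qk) * uint16(Qu), -8);  % ( Qk * Qu ) >> 8
Vy = double(Qy) * (1/255)                    % about 0.451, i.e. 0.502 attenuated by 0.898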
But if we had not matched the scaling, something much more costly could have been required to implement the math. So take some care to make sure net scaling does not cause overflows, does not destroy accuracy, and does not require costly net scaling operations on the embedded system. The first two concerns apply to both Binary-Point and Slope-Bias. The third applies only to Slope-Bias.
On the efficiency side, both Binary-Point and Slope-Bias need to make sure the word lengths don't grow too big and costly for the embedded compute device being targeted.

More Answers (1)

Kiran Kintali on 15 Dec 2022
HDL Coder currently does not support Slope-Bias scaling, for efficiency reasons. Please consider using Binary-Point scaling for the best hardware resources (power, performance, area). You can also use a mix of Native Floating Point and Fixed-Point types where applicable to address any dynamic-range issues.
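For example, a Binary-Point near-equivalent of the unorm8 type from the accepted answer can be specified with a fraction length instead of a slope and bias (a sketch; the quantization step becomes 2^-8 = 1/256 rather than 1/255):
unorm8_sb = fixdt(0, 8, 1/255, 0)   % Slope-Bias: not supported by HDL Coder
unorm8_bp = fixdt(0, 8, 8)          % Binary-Point: fraction length 8, slope 2^-8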
