Help required for fixed-point conversion

24 views (last 30 days)
Gary on 29 Mar 2023
Commented: Andy Bartlett on 7 Apr 2023
I am modelling an encoder angle decoder control system in Simulink. First I built the model in floating point and it works fine. When I try to convert it to fixed point, I run into trouble. After examining my model, I discovered that it has a subtractor that takes the set point as one input and the feedback from the plant as the other. The issue is that the subtractor output has a range (as captured by the Fixed-Point Designer tool) from -0.00021679 to 0.01076. This translated to a fixed-point type of (1,32,37). Since this is out of range, the simulated result is erratic. How can I correct this error?

Accepted Answer

Andy Bartlett on 4 Apr 2023
Edited: Andy Bartlett on 4 Apr 2023
Mapping an A2D to a fixed-point data type
One way to map an A2D converter to a fixed-point data type is to use two real-world-value and stored-integer pairs.
You then solve a pair of affine equations
realWorldValue1 = Slope * storedIntegerValue1 + Bias
realWorldValue2 = Slope * storedIntegerValue2 + Bias
for Slope and Bias, and enter those values in the data type
fixdt( isSigned, WordLength, Slope, Bias)
realWorldValue1 = 10; % Volts
storedIntegerValue1 = 32767;
realWorldValue2 = -10; % Volts
storedIntegerValue2 = -32767; % Is this the correct value?
% Solve for data types Slope and Bias
% V = Slope * Q + Bias
%
% form as a Matrix Equation
% Vvec = [ Qvec ones ] * [Slope; Bias]
% Vvec = QOMat * slopeBiasVector
% then use backslash
% slopeBiasVector = QOMat \ Vvec
Vvec = [realWorldValue1; realWorldValue2];
SIvec = [storedIntegerValue1; storedIntegerValue2]
SIvec = 2×1

       32767
      -32767
QOMat = [SIvec ones(numel(SIvec),1)];
slopeBiasVector = QOMat \ Vvec;
Slope = slopeBiasVector(1)
Slope = 3.0519e-04
Bias = slopeBiasVector(2)
Bias = 0
isSigned = any( SIvec < 0 )
isSigned = logical
1
wordLength = double(isSigned) + max( ceil( log2( abs( double(SIvec) ) + 1 ) ) ) % sign bit plus bits for the magnitude
wordLength = 16
a2dNumericType = numerictype( isSigned, wordLength,Slope,Bias)
a2dNumericType =

          DataTypeMode: Fixed-point: slope and bias scaling
            Signedness: Signed
            WordLength: 16
                 Slope: 0.00030518509475997192
                  Bias: 0
% Sanity check
checkRealWorldValue1 = Slope * storedIntegerValue1 + Bias
checkRealWorldValue1 = 10
checkRealWorldValue2 = Slope * storedIntegerValue2 + Bias
checkRealWorldValue2 = -10
err1 = checkRealWorldValue1 - realWorldValue1
err1 = 0
err2 = checkRealWorldValue2 - realWorldValue2
err2 = 0
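For illustration, a minimal usage sketch of the resulting type (the 5.1 V reading is a made-up value):
measuredVolts = fi(5.1, a2dNumericType); % quantized to the nearest Slope*Q + Bias
rawCount = storedInteger(measuredVolts)  % raw A2D count, here round(5.1/Slope)
recoveredVolts = double(measuredVolts)   % real-world value represented by that count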

More Answers (2)

Andy Bartlett on 29 Mar 2023
Hi Gary,
That portion of your model will involve 4 data types.
  1. Digital measurement of output signal from analog plant
  2. Set point signal
  3. Accumulator type used to do the math inside the Subtraction block
  4. Output of that subtraction block fed to your controller logic
Normally, the fixed-point tool should be able to pick separate data types and scaling for each of those signals.
For the range values and data type for the subtractor output, everything looks pretty good.
errorSignalExample = fi([-0.00021679, 0.01076],1,32,37)
errorSignalExample =

   -0.0002    0.0108

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 32
        FractionLength: 37
rangeErrorSignalDataType = range(errorSignalExample)
rangeErrorSignalDataType =

   -0.0156    0.0156

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 32
        FractionLength: 37
The issues are likely coming from elsewhere in the model.
Try setting the model's diagnostics for Signals with Saturating and Wrapping overflows to Warning, then rerun the simulation. This should help isolate sources of overflow.
If overflows are occurring, then try rerunning the Fixed-Point Tool workflow but give a bigger Safety Margin before proposing data types. If there are fixed-point types in the model at the start of the workflow, then turn on Data Type Override double when Collecting Ranges.
If overflows are not the issue, turn on signal logging for several key signals in the model. Repeat the Fixed-Point Tool workflow with Data Type Override set to double while collecting ranges. Then, at the end of the workflow, click Compare Signals and use the Simulation Data Inspector to isolate where the double-precision traces first start to diverge from the fixed-point traces. This will point you to a place in the model to look more carefully at the math and the data types.
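If you prefer to script that instrumentation, a rough sketch along these lines should work, assuming the 'MinMaxOverflowLogging' and 'DataTypeOverride' system parameters; 'myModel' is a placeholder for your own model name:
mdl = 'myModel';                                          % placeholder model name
load_system(mdl);
set_param(mdl, 'MinMaxOverflowLogging', 'MinMaxAndOverflow'); % log sim min/max and overflows
set_param(mdl, 'DataTypeOverride', 'Double');             % collect ranges with idealized doubles
sim(mdl);
set_param(mdl, 'DataTypeOverride', 'UseLocalSettings');   % restore local settings afterwards
set_param(mdl, 'MinMaxOverflowLogging', 'UseLocalSettings');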
  3 Comments
Andy Bartlett on 30 Mar 2023
Change the data type of the accumulator to move one bit from the precision end to the range end, thus doubling the range. This will allow +1 to be represented without overflow.
dt = fixdt(1,32,30)
dt =

  NumericType with properties:

      DataTypeMode: 'Fixed-point: binary point scaling'
        Signedness: 'Signed'
        WordLength: 32
    FractionLength: 30
           IsAlias: 0
         DataScope: 'Auto'
        HeaderFile: ''
       Description: ''
representableRange = range( numerictype(dt) )
representableRange =

   -2.0000    2.0000

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 32
        FractionLength: 30
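For contrast, assuming the accumulator previously used fixdt(1,32,31), its representable range stops just short of +1:
representableRangeBefore = range( numerictype( fixdt(1,32,31) ) ) % roughly -1 to just under +1, so +1 itself overflows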
Gary on 30 Mar 2023
What does "395 overflow wraps" mean, and how do I work out the required data type from that number? Elsewhere I now see around 9000 wraps.



Andy Bartlett on 30 Mar 2023
The number 395 is how many times that block in the model overflowed during the previous simulation. Suppose the element is a data type conversion block with an int16 input and a uint8 output, and suppose the input at one time step in the simulation is 260. Since 260 exceeds 255, the maximum representable value of the output data type, an overflow will occur. The block could be configured to handle overflows with saturation, in which case the output would be 255, and that would increase the block's count of "overflow saturations" by 1. Alternately, the block could be configured to handle overflows with modulo 2^Nbits wrapping, in which case the output would be mod(260,2^8) = 4, and that would increase the block's count of "overflow wraps" by 1.
So 395 overflow wraps means that during the previous instrumented simulation that block had 395 overflow events handled by Modulo 2^Nbits wrapping.
The count of overflows does NOT indicate what new data type is required to avoid overflows. Overflows due to values slightly too big for the output data type count as one overflow event each, and overflows due to values 1000X too big for the output data type also count as just one overflow event each.
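A minimal command-line sketch of those two overflow behaviors, using fi and fimath rather than a conversion block:
inputValue = int16(260);                  % exceeds the uint8 maximum of 255
satMath  = fimath('OverflowAction','Saturate');
fi(inputValue, 0, 8, 0, satMath)          % saturates to 255
wrapMath = fimath('OverflowAction','Wrap');
fi(inputValue, 0, 8, 0, wrapMath)         % wraps to mod(260, 2^8) = 4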
Collecting the simulation minimum and maximum values is what helps pick a type that will avoid overflows. Calling fi with the simulation min and max will show a type that's big enough.
format long g
simulationMinMax = [-13.333333333333334, 12.666666666666666]; % Collected by Fixed-Point Tool or some other way
safetyMarginPercent = 25;
%
% Expanded range to cover
%
expandedMinMax = (1 + safetyMarginPercent/100) .* simulationMinMax
expandedMinMax = 1×2
-16.6666666666667 15.8333333333333
%
% Data type container attributes
% Signedness
% WordLength
%
isSigned = 1; % manually set
%isSigned = any( expandedMinMax < 0 ) % use range to see if negatives are needed
wordLength = 8; % manually set
%
% Automatically determine scaling
% using fi's best precision mode (just don't specify scaling)
%
quantizedExpandedMinMax = fi( expandedMinMax, isSigned, wordLength)
quantizedExpandedMinMax =

         -16.75          15.75

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 8
        FractionLength: 2
bestPrecisionNumericType = numerictype(quantizedExpandedMinMax)
bestPrecisionNumericType =

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 8
        FractionLength: 2
representableRangeOfBestPrecisionDataType = range(bestPrecisionNumericType)
representableRangeOfBestPrecisionDataType =

           -32          31.75

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 8
        FractionLength: 2
  5 Comments
Gary on 7 Apr 2023
Edited: Gary on 7 Apr 2023
Just revisiting. My model functions well with 32-bit fixed point. However, I need 16 bits at the output. How do I convert fixdt(1,32,30) to fixdt(1,16,14) without loss of precision?
Andy Bartlett on 7 Apr 2023
>> How do I convert fixdt(1,32,30) to fixdt(1,16,14) without loss of precision?
You can't avoid precision loss. You are dropping 16 bits from the precision end of the variable.
If you use the fastest, leanest conversion, Round to Floor, you will introduce a quantization error of 0 to just under 1 bit. In real-world-value terms, the absolute quantization error is in the range 0 to Slope, where Slope = 2^-14.
You can cut the quantization error in half and balance it around zero if you use Nearest rounding. With Nearest, the absolute quantization error will be 0 to 1/2 bit, or in real-world values 0 to 0.5*Slope = 2^-15. But keep in mind that round to nearest can overflow for values close to the maximum representable value of the output type. If you turn on Saturation in the cast, the quantization error will still always be less than or equal to half a bit. But if you allow the overflow cases to wrap modulo 2^N, then the quantization error will be huge for those cases.
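A minimal command-line sketch comparing the two rounding choices for this cast (pi/4 is just an example value):
x = fi(pi/4, 1, 32, 30);                   % example value stored in the wide fixdt(1,32,30) type
floorMath   = fimath('RoundingMethod','Floor',   'OverflowAction','Saturate');
nearestMath = fimath('RoundingMethod','Nearest', 'OverflowAction','Saturate');
yFloor   = fi(x, 1, 16, 14, floorMath);    % drops 16 precision bits, rounding down
yNearest = fi(x, 1, 16, 14, nearestMath);  % drops 16 precision bits, rounding to nearest
errFloor   = double(x) - double(yFloor)    % always in [0, 2^-14)
errNearest = double(x) - double(yNearest)  % always in [-2^-15, 2^-15]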


Release: R2015b
