Implement Hardware-Efficient Real Burst QR Decomposition

This example shows how to implement a hardware-efficient QR decomposition using the Real Burst QR Decomposition block.

Economy-Size QR Decomposition

The Real Burst QR Decomposition block performs the first step of solving the least-squares matrix equation AX = B: it transforms A in-place to R and B in-place to C = Q'B, where QR is the orthogonal-triangular decomposition of A. The transformed system RX = C can then be solved by back-substitution.

To compute the stand-alone QR decomposition, this example sets B to be the identity matrix so that the output of the Real Burst QR Decomposition block is the upper-triangular R and C = Q'.
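
For reference, the following sketch shows the same computation in double-precision MATLAB, assuming a full-rank matrix with more rows than columns. The variable names Aref, Bref, Qref, Rref, Cref, and Xref are illustrative and are not used elsewhere in this example.

% Double-precision reference for the computation the block performs:
% economy-size QR of A, then C = Q'*B, then back-substitution for X.
Aref = rand(10,3);           % full-rank 10-by-3 test matrix
Bref = rand(10,2);           % right-hand side
[Qref,Rref] = qr(Aref,0);    % economy-size QR: Qref is 10-by-3, Rref is 3-by-3
Cref = Qref'*Bref;           % transformed right-hand side, C = Q'*B
Xref = Rref\Cref;            % back-substitution solves R*X = C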

Define Matrix Dimensions

Specify the number of rows in matrices A and B, the number of columns in matrix A, and the number of columns in matrix B. This example sets B to be the m-by-m identity matrix, where m is the number of rows of A.

m = 10; % Number of rows in matrices A and B
n = 3;  % Number of columns in matrix A
p = m;  % Number of columns in matrix B

Generate Matrices A and B

Use the helper function realUniformRandomArray to generate a random matrix A such that the elements of A are between -1 and +1, and A is full rank. Matrix B is the identity matrix.

rng('default')
A = fixed.example.realUniformRandomArray(-1,1,m,n);
B = eye(m);
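
Conceptually, the elements of A are drawn uniformly from the interval [-1, 1], similar to the sketch below. This is only an illustration, not the helper's implementation; a matrix drawn this way is full rank with probability one, but the sketch does not verify it.

% Illustration only: a uniform random m-by-n matrix with entries in [-1, 1].
Asketch = -1 + 2*rand(m,n);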

Select Fixed-Point Data Types

Use the helper function qrFixedpointTypes to select fixed-point data types for matrices A and B that guarantee no overflow will occur in the transformation of A in-place to R and B in-place to C = Q'B.

max_abs_A = 1;  % Upper bound on max(abs(A(:)))
max_abs_B = 1;  % Upper bound on max(abs(B(:)))
precisionBits = 24;  % Number of bits of precision
T = fixed.qrFixedpointTypes(m,max_abs_A,max_abs_B,precisionBits);
A = cast(A,'like',T.A);
B = cast(B,'like',T.B);
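
To see which fixed-point types were selected, you can display the prototypes in T. T.A and T.B are the prototypes used with cast(...,'like',...); the displayed word lengths depend on m, the bounds, and precisionBits.

% Display the prototypes selected for A and B.
disp(T.A)
disp(T.B)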

Open the Model

model = 'RealBurstQRModel';
open_system(model);

The Data Handler subsystem in this model takes real matrices A and B as inputs. It sends the rows of A and B to the QR block using the AMBA AXI handshake protocol. The validIn signal indicates when data is available. The ready signal indicates that the block can accept the data. Transfer of data occurs only when both the validIn and ready signals are high. You can set a delay between feeding in the rows of A and B in the Data Handler to emulate the processing time of an upstream block. When rowDelay is set to 0, validIn remains high, indicating that the Data Handler always has data available.
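
The following sketch illustrates the handshake rule in MATLAB terms: a row is consumed only on a cycle in which both signals are high. The validIn and ready variables here are illustrative stand-ins, not signals read from the model.

% Illustrative sketch of the valid/ready handshake for streaming rows.
row = 1;
while row <= m
    validIn = true;                  % the Data Handler has a row available
    ready   = logical(randi([0 1])); % stand-in for the block's ready output
    if validIn && ready
        % Transfer occurs only when both validIn and ready are high;
        % on this cycle, A(row,:) and B(row,:) would be sent to the block.
        row = row + 1;
    end
end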

Set Variables in the Model Workspace

Use the helper function setModelWorkspace to add the variables defined above to the model workspace. These variables correspond to the block parameters for the Real Burst QR Decomposition block.

numSamples = 1; % Number of sample matrices
rowDelay = 1; % Delay of clock cycles between feeding in rows of A and B
fixed.example.setModelWorkspace(model,'A',A,'B',B,'m',m,'n',n,'p',p,...
    'numSamples',numSamples, 'rowDelay',rowDelay);

Simulate the Model

out = sim(model);

Construct the Solution from the Output Data

The Real Burst QR Decomposition block outputs data one row at a time. When a result row is output, the block sets validOut to true. The rows of matrices R and C are output in reverse order to accommodate back-substitution, so you must reconstruct the data to interpret the results. To reconstruct the matrices R and C from the output data, use the helper function qrModelOutputToArray.

[C,R] = fixed.example.qrModelOutputToArray(out.C,out.R,m,n,p,numSamples);
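
The row reordering involved in this reconstruction can be pictured with a small toy example: if the rows of an upper-triangular matrix arrive last-row-first, flipping the row order restores the natural layout. The matrix below is only an illustration, not output from the model.

% Toy illustration of the row reordering (not model output).
rowsAsReceived = [0 0 3; 0 2 5; 1 4 6];   % rows of an upper-triangular matrix, last row first
flipud(rowsAsReceived)                    % restore the natural row order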

Extract the Economy-Size Q

The block computes C = Q'B. In this example, B is the identity matrix, so Q = C' is the economy-size orthogonal factor of the QR decomposition.

Q = C';

Verify That Q is Orthogonal and R is Upper-Triangular

Q is orthogonal, so Q'Q is the identity matrix within roundoff.

I = Q'*Q
I = 

    1.0000   -0.0000   -0.0000
   -0.0000    1.0000   -0.0000
   -0.0000   -0.0000    1.0000

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 62
        FractionLength: 48
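
To quantify how close Q'Q is to the identity matrix, you can compute the norm of the difference; the exact value depends on the data and the word lengths.

orthogonality_error = norm(double(I) - eye(n))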

R is an upper-triangular matrix.

R
R = 

    2.2180    0.8559   -0.5607
         0    2.0578   -0.4017
         0         0    1.7117

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 29
        FractionLength: 24

isequal(R,triu(R))
ans =

  logical

   1

Verify the Accuracy of the Output

To evaluate the accuracy of the Real Burst QR Decomposition block, compute the relative error.

relative_error = norm(double(Q*R - A))/norm(double(A))
relative_error =

   1.5208e-06
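
As a rough point of comparison (not a formal error bound), you can relate the relative error to the quantization step implied by the fraction length used for A and B.

% Quantization step implied by precisionBits, for comparison with the error.
quantization_step = 2^(-precisionBits)
relative_error/quantization_step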

Suppress mlint warnings.

%#ok<*NOPTS>