Poor performance of linprog in practice
I have to solve a dynamic programming problem using a linear programming approach. For details, please see this paper. The LP that I want to solve is:
min c'*v
s.t.
A*v>=u,
where c is n*1, v is n*1, A is n^2*n, u is n^2*1.
The min is with respect to v, the value function of the original DP problem. Each row of A*v >= u encodes one Bellman inequality, v(k) >= U(k',k) + beta*v(k'). I have a moderate number of variables, n = 300, and m = n^2 = 90000 linear inequalities as constraints. There are no bound constraints on v.
I use the MATLAB function linprog, which since R2024a is based on the HiGHS solver. The code is too slow for my purposes (a brute-force value iteration is much faster). Moreover, linprog gives correct results only if I set the option 'Algorithm','dual-simplex-highs'; with the other algorithms it gets stuck.
After profiling the code, it turns out that the bottleneck is line 377 of linprog:
[x, fval, exitflag, output, lambda] = run(algorithm, problem);
I was wondering if there is a way to speed up the code. Any help or suggestion is greatly appreciated! Below is a MWE that illustrates the problem.
clear,clc,close all
%% Set parameters
crra = 2;
alpha = 0.36;
beta = 0.95;
delta = 0.1;
%% Grid for capital
k_ss = ((1-beta*(1-delta))/(alpha*beta))^(1/(alpha-1));
n_k = 300;
k_grid = linspace(0.1*k_ss,1.5*k_ss,n_k)';
%% Build current return matrix, U(k',k)
cons = k_grid'.^alpha+(1-delta)*k_grid'-k_grid;
U_mat = f_util(cons,crra);
U_mat(cons<=0) = -inf;
%% Using LINEAR PROGRAMMING
% min c'*v
% s.t.
% A*v>=u, where c is n*1, v is n*1, A is n^2*n, u is n^2*1
n = length(k_grid);
c_vec = ones(n,1);
u_vec = U_mat(:); %% U(k',k), stack columnwise
%% Build A matrix using cell-based method
tic
A = cell(n,1);
bigI = (-beta)*speye(n);
for i = 1:n
    temp = bigI;
    temp(:,i) = temp(:,i)+1;
    A{i} = temp;
end
A = vertcat(A{:});
disp('Time to build A matrix with cell method:')
toc
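%% Sketch: vectorized construction of the same A from sparse triplets
% (An alternative to the cell-based loop above, included only as a sketch and
% not benchmarked here; A2 should satisfy isequal(A,A2). Row (i-1)*n+j of A
% encodes v(i) - beta*v(j) >= U(k'_j,k_i), with the k' index j varying fastest
% to match the column-wise stacking u_vec = U_mat(:).)
tic
rows = (1:n^2)';
j_kp = repmat((1:n)',n,1);      % k' index (varies fastest down the rows)
i_k  = kron((1:n)',ones(n,1));  % k index
A2   = sparse([rows;rows],[i_k;j_kp],[ones(n^2,1);-beta*ones(n^2,1)],n^2,n);
disp('Time to build A matrix with triplet method:')
toc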
%% Call linprog
% 'dual-simplex-highs' (default and by far the best)
options = optimoptions('linprog','Algorithm','dual-simplex-highs');
tic
[V_lin,fval,exitflag,output] = linprog(c_vec,-A,-u_vec,[],[],[],[],options);
disp('Time linear programming:')
toc
if exitflag <= 0
    warning('linprog did not find a solution')
    fprintf('exitflag = %d \n',exitflag)
end
%% Now that we solved for V, compute policy function
RHS_mat = U_mat+beta*V_lin; % (k',k)
[V1,pol_k_ind] = max(RHS_mat,[],1);
pol_k = k_grid(pol_k_ind);
% Plots
figure
plot(k_grid,V1)
figure
plot(k_grid,k_grid,'--',k_grid,pol_k)
function util = f_util(c,crra)
    util = c.^(1-crra)/(1-crra);
end
3 Comments
Matt J
2 Sep 2024
I have to solve a dynamic programming problem using a linear programming approach.
What do you mean by you "have to"? If speed is a priority, it seems like it would be better to solve it with a dynamic programming algorithm.
Accepted Answer
Matt J
2 Sep 2024
Edited: Matt J on 2 Sep 2024
There are some papers in the computational literature that show that if you recast a DP problem as an LP problem you can achieve significant speed gains. ... Trick and Zin (1993) use CPLEX, which is different from the HiGHS solver used by MATLAB's linprog. They show that LP can be much faster than value iteration (see graph below).
You could try using CPLEX. However, I don't think a 1993 paper is going to be remotely applicable to modern computation. In 1993, there were no GPUs or even multi-core CPUs, and you would have been very privileged to have more than 16 MB of RAM.
An implementation of DP value iteration with today's computing resources is going to be highly parallelizable with the Parallel Computing Toolbox - much more so than linprog.
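As a minimal sketch of what that could look like for your problem (this assumes a Parallel Computing Toolbox license; U_mat and beta are the variables from your MWE, while the tolerance and iteration cap are arbitrary choices of mine):
Ug  = gpuArray(U_mat);              % U(k',k), n-by-n, rows index k'
V   = zeros(size(Ug,1),1,'gpuArray');
tol = 1e-8; maxiter = 1000;
for it = 1:maxiter
    Vnew = max(Ug + beta*V,[],1)';  % Bellman update: maximize over k' (rows)
    if max(abs(Vnew - V)) < tol, break; end
    V = Vnew;
end
V = gather(V);                      % copy the converged value function back to the CPU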
3 Comments
Matt J
3 Sep 2024
Edited: Matt J on 3 Sep 2024
The availability of GPUs and multi-core CPUs speeds up not only DP value iteration but also the LP method, so I don't see why the LP approach would become obsolete for this type of problem.
Because value iteration can be written as a sequence of matrix multiplications and other elemental matrix operations, which MATLAB is extremely well optimized for. Conversely, linprog has to allocate memory for all kinds of submatrices and tableaus and doesn't really have a predictable memory access pattern. It really doesn't surprise me that value iteration runs faster than linprog.
However, if you show your "brute force" implementation of value iteration, the community might be able to examine whether you've implemented that in the fastest possible way.
Matt J
3 Sep 2024
Edited: Matt J on 3 Sep 2024
Also, I notice you don't have a stochastic state transition in your DP problem. That simplifies value iteration greatly and makes it extra fast. You can see below that one step of value iteration takes only about 1e-4 seconds, so you could do 1000 iterations without breaking a sweat.
U=rand(300);
V=rand(300,1);
beta=0.5;
timeit(@()valueIter(U,V,beta))
function V = valueIter(U,V,beta)
    V = max(U + beta*V',[],2);
end
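To make that concrete, here is a rough illustration of iterating the update above to convergence (the tolerance is an arbitrary choice):
tol = 1e-8;
for it = 1:1000
    Vnew = valueIter(U,V,beta);
    if max(abs(Vnew - V)) < tol, break; end
    V = Vnew;
end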
More Answers (0)