{"group":{"id":1,"name":"Community","lockable":false,"created_at":"2012-01-18T18:02:15.000Z","updated_at":"2025-12-14T01:33:56.000Z","description":"Problems submitted by members of the MATLAB Central community.","is_default":true,"created_by":161519,"badge_id":null,"featured":false,"trending":false,"solution_count_in_trending_period":0,"trending_last_calculated":"2025-12-14T00:00:00.000Z","image_id":null,"published":true,"community_created":false,"status_id":2,"is_default_group_for_player":false,"deleted_by":null,"deleted_at":null,"restored_by":null,"restored_at":null,"description_opc":null,"description_html":null,"published_at":null},"problems":[{"id":524,"title":"Sequential Unconstrained Minimization (SUMT) using Interior Penalty","description":"Write a function to find the values of a design variable vector, _x_, that minimizes a scalar objective function, _f_ ( _x_ ), given a function handle to _f_, and a starting guess, _x0_, subject to inequality constraints _g_ ( _x_ )\u003c=0 with function handle _g_. Use a logarithmic interior penalty for the sequential unconstrained minimization technique (SUMT) with an optional input vector of increasing penalty parameter values. That is, the penalty (barrier) function, _P_, is\r\n\r\n P(x,r) = -sum(log(-g(x)))/r\r\n\r\nwhere _r_ is the penalty parameter.","description_html":"\u003cp\u003eWrite a function to find the values of a design variable vector, \u003ci\u003ex\u003c/i\u003e, that minimizes a scalar objective function, \u003ci\u003ef\u003c/i\u003e ( \u003ci\u003ex\u003c/i\u003e ), given a function handle to \u003ci\u003ef\u003c/i\u003e, and a starting guess, \u003ci\u003ex0\u003c/i\u003e, subject to inequality constraints \u003ci\u003eg\u003c/i\u003e ( \u003ci\u003ex\u003c/i\u003e )\u0026lt;=0 with function handle \u003ci\u003eg\u003c/i\u003e. Use a logarithmic interior penalty for the sequential unconstrained minimization technique (SUMT) with an optional input vector of increasing penalty parameter values. 
That is, the penalty (barrier) function, \u003ci\u003eP\u003c/i\u003e, is\u003c/p\u003e\u003cpre\u003e P(x,r) = -sum(log(-g(x)))/r\u003c/pre\u003e\u003cp\u003ewhere \u003ci\u003er\u003c/i\u003e is the penalty parameter.\u003c/p\u003e","function_template":"function [x,fmin]=sumt_interior(f,g,x0,penalty_parameter)\r\nif nargin\u003c4 || isempty(penalty_parameter)\r\n   penalty_parameter=? % initialize penalty parameter values for SUMT loop\r\nend\r\n% You may find that fminsearch is not accurate enough for the unconstrained minimization","test_suite":"%% Haftka \u0026 Gurdal, Figure 5.7.1 example\r\nf = @(x) 0.5*x;\r\ng = @(x) 2-x;\r\nx0 = 5;\r\n[xmin,fmin]=sumt_interior(f,g,x0) %#ok\u003c*NOPTS\u003e\r\nxcorrect=2;\r\nassert(norm(xmin-xcorrect)\u003c2e-3)\r\nassert(abs(fmin-f(xcorrect))\u003c1e-3)\r\n\r\n%%\r\nf = @(x) 0.5*x;\r\ng = @(x) 2-x;\r\nx0 = 5;\r\n[xmin,fmin]=sumt_interior(f,g,x0,1) % 1 iteration for unit penalty value\r\nxr1=4;\r\nassert(norm(xmin-xr1)\u003c1e-4)\r\nassert(abs(fmin-f(xr1))\u003c1e-4)\r\n\r\n%% Vanderplaats, Figure 5-4 example\r\nf = @(x) sum(x);\r\ng = @(x) [x(1) - 2*x(2) - 2\r\n          8 - 6*x(1) + x(1).^2 - x(2)];\r\nx0 = [3; 3];\r\n[xmin,fmin]=sumt_interior(f,g,x0,1) % 1 iteration\r\nxr2=[1.8686\r\n     2.1221];\r\nassert(norm(xmin-xr2)\u003c1e-2)\r\nassert(abs(fmin-f(xr2))\u003c1e-2)\r\n\r\n%%\r\nf = @(x) sum(x);\r\ng = @(x) [x(1) - 2*x(2) - 2\r\n          8 - 6*x(1) + x(1).^2 - x(2)];\r\nx0 = [2.1; 0.1];\r\nr  = 2^12;\r\n[xmin,fmin]=sumt_interior(f,g,x0,r) % Final iteration\r\nxcorrect=[2; 
0];\r\nassert(norm(xmin-xcorrect)\u003c5e-3)\r\nassert(abs(fmin-f(xcorrect))\u003c2e-3)","published":true,"deleted":false,"likes_count":1,"comments_count":1,"created_by":279,"edited_by":null,"edited_at":null,"deleted_by":null,"deleted_at":null,"solvers_count":10,"test_suite_updated_at":null,"rescore_all_solutions":false,"group_id":1,"created_at":"2012-03-24T03:23:26.000Z","updated_at":"2025-12-10T16:18:46.000Z","published_at":"2012-03-24T18:41:49.000Z","restored_at":null,"restored_by":null,"spam":false,"simulink":false,"admin_reviewed":false,"description_opc":"{\"relationships\":[{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/document\",\"targetMode\":\"\",\"relationshipId\":\"rId1\",\"target\":\"/matlab/document.xml\"},{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/output\",\"targetMode\":\"\",\"relationshipId\":\"rId2\",\"target\":\"/matlab/output.xml\"}],\"parts\":[{\"partUri\":\"/matlab/document.xml\",\"relationship\":[],\"contentType\":\"application/vnd.mathworks.matlab.code.document+xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?\u003e\\n\u003cw:document xmlns:w=\\\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\\\"\u003e\u003cw:body\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"text\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003eWrite a function to find the values of a design variable vector,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, that minimizes a scalar objective function,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e 
(\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e ), given a function handle to\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, and a starting guess,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex0\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, subject to inequality constraints\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eg\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e (\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e )\u0026lt;=0 with function handle\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eg\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e. Use a logarithmic interior penalty for the sequential unconstrained minimization technique (SUMT) with an optional input vector of increasing penalty parameter values. 
That is, the penalty (barrier) function,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eP\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, is\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"code\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003e\u003c![CDATA[ P(x,r) = -sum(log(-g(x)))/r]]\u003e\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"text\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003ewhere\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003er\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e is the penalty parameter.\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003c/w:body\u003e\u003c/w:document\u003e\"},{\"partUri\":\"/matlab/output.xml\",\"contentType\":\"text/xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\" standalone=\\\"no\\\" ?\u003e\u003cembeddedOutputs\u003e\u003cmetaData\u003e\u003cevaluationState\u003emanual\u003c/evaluationState\u003e\u003clayoutState\u003ecode\u003c/layoutState\u003e\u003coutputStatus\u003eready\u003c/outputStatus\u003e\u003c/metaData\u003e\u003coutputArray type=\\\"array\\\"/\u003e\u003cregionArray type=\\\"array\\\"/\u003e\u003c/embeddedOutputs\u003e\"}]}"},{"id":481,"title":"Rosenbrock's Banana Function and its derivatives","description":"Write a function to return the value of \u003chttp://en.wikipedia.org/wiki/Rosenbrock_function Rosenbrock's two-dimensional banana function\u003e, as well as it's gradient (column) vector and Hessian matrix, given a vector of it's two independent variables _x1_ and _x2_. 
Here's the Rosenbrock function: \r\n\r\n 100*[(x2-x1^2)^2] + (1-x1)^2","description_html":"\u003cp\u003eWrite a function to return the value of \u003ca href="http://en.wikipedia.org/wiki/Rosenbrock_function"\u003eRosenbrock's two-dimensional banana function\u003c/a\u003e, as well as its gradient (column) vector and Hessian matrix, given a vector of its two independent variables \u003ci\u003ex1\u003c/i\u003e and \u003ci\u003ex2\u003c/i\u003e. Here's the Rosenbrock function:\u003c/p\u003e\u003cpre\u003e 100*[(x2-x1^2)^2] + (1-x1)^2\u003c/pre\u003e","function_template":"function [value,gradient,Hessian] = Rosenbrock_banana(x)\r\n  value = x;\r\n  gradient = [0; 0];\r\n  Hessian = [0, 0; 0, 0]\r\nend","test_suite":"%%\r\nassert(isempty(regexp(fileread('Rosenbrock_banana.m'),'assert.m')))\r\n%%\r\nx = [0; 0];\r\nassert(isequal(Rosenbrock_banana(x),1))\r\n%%\r\nx = [1; 1];\r\nassert(isequal(Rosenbrock_banana(x),0))\r\n%%\r\nx = [1; -1];\r\nassert(isequal(Rosenbrock_banana(x),400))\r\n%%\r\nx = [-1; 0.5];\r\nassert(isequal(Rosenbrock_banana(x),29))\r\n%%\r\nx = [0; 0];\r\n[~,grad]=Rosenbrock_banana(x);\r\nassert(isequal(grad,[-2; 0]))\r\n%%\r\nx = [0; 0];\r\n[~,~,Hess]=Rosenbrock_banana(x);\r\nassert(isequal(Hess,diag([2, 200])))\r\n%%\r\nx = [1; 1];\r\n[~,grad]=Rosenbrock_banana(x);\r\nassert(isequal(grad,[0; 0]))\r\n%%\r\nx = [1; 1];\r\n[~,~,Hess]=Rosenbrock_banana(x);\r\nassert(isequal(Hess,[802, -400; -400, 200]))\r\n%%\r\nx = [-1.9; 2];\r\ncorrect_value = 267.6200;\r\ncorrect_grad = -1e3*[1.2294; 0.3220];\r\ncorrect_Hess = [3534, 760; 760, 
200];\r\n[val,grad,Hess]=Rosenbrock_banana(x);\r\nassert(isequal(str2num(num2str(val)),correct_value))\r\nassert(isequal(str2num(num2str(grad)),correct_grad))\r\nassert(all(max(abs(Hess-correct_Hess)\u003c1e-8)))","published":true,"deleted":false,"likes_count":2,"comments_count":2,"created_by":279,"edited_by":null,"edited_at":null,"deleted_by":null,"deleted_at":null,"solvers_count":166,"test_suite_updated_at":"2013-08-25T01:30:52.000Z","rescore_all_solutions":false,"group_id":25,"created_at":"2012-03-12T05:01:12.000Z","updated_at":"2026-03-29T19:32:52.000Z","published_at":"2012-03-12T21:14:57.000Z","restored_at":null,"restored_by":null,"spam":false,"simulink":false,"admin_reviewed":false,"description_opc":"{\"relationships\":[{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/document\",\"targetMode\":\"\",\"relationshipId\":\"rId1\",\"target\":\"/matlab/document.xml\"},{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/output\",\"targetMode\":\"\",\"relationshipId\":\"rId2\",\"target\":\"/matlab/output.xml\"}],\"parts\":[{\"partUri\":\"/matlab/document.xml\",\"relationship\":[],\"contentType\":\"application/vnd.mathworks.matlab.code.document+xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?\u003e\\n\u003cw:document xmlns:w=\\\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\\\"\u003e\u003cw:body\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"text\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003eWrite a function to return the value of\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:hyperlink w:docLocation=\\\"http://en.wikipedia.org/wiki/Rosenbrock_function\\\"\u003e\u003cw:r\u003e\u003cw:t\u003eRosenbrock's two-dimensional banana function\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:hyperlink\u003e\u003cw:r\u003e\u003cw:t\u003e, as well as it's gradient (column) vector and Hessian matrix, 
given a vector of it's two independent variables\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex1\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e and\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex2\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e. Here's the Rosenbrock function:\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"code\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003e\u003c![CDATA[ 100*[(x2-x1^2)^2] + (1-x1)^2]]\u003e\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003c/w:body\u003e\u003c/w:document\u003e\"},{\"partUri\":\"/matlab/output.xml\",\"contentType\":\"text/xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\" standalone=\\\"no\\\" ?\u003e\u003cembeddedOutputs\u003e\u003cmetaData\u003e\u003cevaluationState\u003emanual\u003c/evaluationState\u003e\u003clayoutState\u003ecode\u003c/layoutState\u003e\u003coutputStatus\u003eready\u003c/outputStatus\u003e\u003c/metaData\u003e\u003coutputArray type=\\\"array\\\"/\u003e\u003cregionArray type=\\\"array\\\"/\u003e\u003c/embeddedOutputs\u003e\"}]}"},{"id":484,"title":"Steepest Descent Method","description":"Write a function to find the values of a design variable vector, _x_, that minimizes an unconstrained scalar objective function, _f_, given a function handle to _f_ and its gradient, a starting guess, _x0_, a gradient tolerance, _TolGrad_, and a maximum number of iterations, _MaxIter_, using the Steepest Descent Method.","description_html":"\u003cp\u003eWrite a function to find the values of a design variable vector, \u003ci\u003ex\u003c/i\u003e, that minimizes an unconstrained scalar objective function, \u003ci\u003ef\u003c/i\u003e, 
given a function handle to \u003ci\u003ef\u003c/i\u003e and its gradient, a starting guess, \u003ci\u003ex0\u003c/i\u003e, a gradient tolerance, \u003ci\u003eTolGrad\u003c/i\u003e, and a maximum number of iterations, \u003ci\u003eMaxIter\u003c/i\u003e, using the Steepest Descent Method.\u003c/p\u003e","function_template":"function [xmin,fmin]=SteepestDescent(F,gradF,x0,TolGrad,MaxIter)\r\n%\r\n%  Input\r\n%  F........ function handle for objective function F(x) with input argument, x\r\n%  gradF... function handle for gradient of objective function with input argument, x\r\n%  x0....... design variable vector initial guess for starting point\r\n%  TolGrad.. Tolerance for norm of objective gradient to be zero\r\n%  MaxIter.. Maximum number of iterations\r\n%\r\n%  Output\r\n%  x........ final design variable vector found to minimize objective function, F(x)\r\n%  f1....... final objective function minimum value\r\n\r\nif nargin\u003c4 || isempty(TolGrad), TolGrad=1e-2; end\r\nif nargin\u003c5 || isempty(MaxIter), MaxIter=20; end\r\n\r\n%Initialize loop parameters\r\niter = 0;\r\nf0   = F(x0);\r\nc    = gradF(x0);\r\nConverged = norm(c) \u003c TolGrad;\r\n\r\n%% Search direction and line search iterations\r\nwhile iter\u003cMaxIter \u0026\u0026 ~Converged\r\n   iter = iter + 1;\r\n   xmin = 0;\r\n   fmin = 0;\r\nend\r\nend","test_suite":"%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [-1.9; 2.0];\r\nx1=[\r\n   -1.4478\r\n    2.1184];\r\nx2=[\r\n    1.7064\r\n    2.9446];\r\nf1=6.0419;\r\nf2=0.6068;\r\n[xmin,fmin]=SteepestDescent(F,gradF,x0,0.01,1)\r\nassert(norm(xmin-x1)\u003c0.2||norm(xmin-x2)\u003c0.2)\r\nassert( abs(fmin-f1)\u003c0.5|| abs(fmin-f2)\u003c0.5) % 2 local min\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 
100*(2*x(2)-2*x(1).^2)];\r\nx0 = [0; 0];\r\nxcorrect=[1;1];\r\nfcorrect=0;\r\n[xmin,fmin]=SteepestDescent(F,gradF,x0) % 20 iterations default\r\nassert(norm((xmin-xcorrect),inf)\u003c1)\r\nassert(abs(fmin-fcorrect)\u003c0.8);\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [1.1; 0.9];\r\nxcorrect=[1;1];\r\nfcorrect=0;\r\n[xmin,fmin]=SteepestDescent(F,gradF,x0,1e-2,2000)\r\nassert(isequal(round(xmin),xcorrect))\r\nassert(isequal(round(fmin),fcorrect))","published":true,"deleted":false,"likes_count":2,"comments_count":5,"created_by":279,"edited_by":null,"edited_at":null,"deleted_by":null,"deleted_at":null,"solvers_count":32,"test_suite_updated_at":"2012-03-17T00:51:37.000Z","rescore_all_solutions":false,"group_id":1,"created_at":"2012-03-12T07:04:30.000Z","updated_at":"2025-12-10T14:07:35.000Z","published_at":"2012-03-17T01:01:14.000Z","restored_at":null,"restored_by":null,"spam":false,"simulink":false,"admin_reviewed":false,"description_opc":"{\"relationships\":[{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/document\",\"targetMode\":\"\",\"relationshipId\":\"rId1\",\"target\":\"/matlab/document.xml\"},{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/output\",\"targetMode\":\"\",\"relationshipId\":\"rId2\",\"target\":\"/matlab/output.xml\"}],\"parts\":[{\"partUri\":\"/matlab/document.xml\",\"relationship\":[],\"contentType\":\"application/vnd.mathworks.matlab.code.document+xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?\u003e\\n\u003cw:document xmlns:w=\\\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\\\"\u003e\u003cw:body\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"text\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003eWrite a function to find the values of a design variable 
vector,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, that minimizes an unconstrained scalar objective function,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, given a function handle to\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e and its gradient, a starting guess,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex0\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, a gradient tolerance,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eTolGrad\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, and a maximum number of iterations,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eMaxIter\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, using the Steepest Descent Method.\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003c/w:body\u003e\u003c/w:document\u003e\"},{\"partUri\":\"/matlab/output.xml\",\"contentType\":\"text/xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\" standalone=\\\"no\\\" 
?\u003e\u003cembeddedOutputs\u003e\u003cmetaData\u003e\u003cevaluationState\u003emanual\u003c/evaluationState\u003e\u003clayoutState\u003ecode\u003c/layoutState\u003e\u003coutputStatus\u003eready\u003c/outputStatus\u003e\u003c/metaData\u003e\u003coutputArray type=\\\"array\\\"/\u003e\u003cregionArray type=\\\"array\\\"/\u003e\u003c/embeddedOutputs\u003e\"}]}"},{"id":523,"title":"Sequential Unconstrained Minimization (SUMT) using Exterior Penalty","description":"Write a function to find the values of a design variable vector, _x_, that minimizes a scalar objective function, _f_, given a function handle to _f_, a starting guess, _x0_, subject to inequality and equality constraints with function handles _g_\u003c=0 and _h_=0, respectively. Use a quadratic exterior penalty for the sequential unconstrained minimization technique (SUMT) with an optional input vector of penalty parameter values that become increasingly large.","description_html":"\u003cp\u003eWrite a function to find the values of a design variable vector, \u003ci\u003ex\u003c/i\u003e, that minimizes a scalar objective function, \u003ci\u003ef\u003c/i\u003e, given a function handle to \u003ci\u003ef\u003c/i\u003e, a starting guess, \u003ci\u003ex0\u003c/i\u003e, subject to inequality and equality constraints with function handles \u003ci\u003eg\u003c/i\u003e\u0026lt;=0 and \u003ci\u003eh\u003c/i\u003e=0, respectively. Use a quadratic exterior penalty for the sequential unconstrained minimization technique (SUMT) with an optional input vector of penalty parameter values that become increasingly large.\u003c/p\u003e","function_template":"function [x,fmin]=sumt_exterior(f,g,h,x0,penalty_parameter)\r\nif isempty(g), g=@(x)[]; end\r\nif isempty(h), h=@(x)[]; end\r\nif nargin\u003c5 || isempty(penalty_parameter)\r\n   penalty_parameter=? 
% initialize penalty parameter values for SUMT loop\r\nend\r\n% You may use fminsearch for the unconstrained minimization","test_suite":"%% Haftka \u0026 Gurdal, Figure 5.7.1 example\r\nf = @(x) 0.5*x;\r\ng = @(x) 2-x;\r\nx0 = 0;\r\n[xmin,fmin]=sumt_exterior(f,g,[],x0) %#ok\u003c*NOPTS\u003e\r\nxcorrect=2;\r\nassert(norm(xmin-xcorrect)\u003c1e-3)\r\nassert(abs(fmin-f(xcorrect))\u003c1e-3)\r\n\r\n%%\r\nf = @(x) 0.5*x;\r\ng = @(x) 2-x;\r\nx0 = 0;\r\n[xmin,fmin]=sumt_exterior(f,g,[],x0,1) % 1 iteration for unit penalty value\r\nxr1=1.75;\r\nassert(norm(xmin-xr1)\u003c1e-4)\r\nassert(abs(fmin-f(xr1))\u003c1e-4)\r\n\r\n%% Haftka \u0026 Gurdal, Example 5.7.1\r\nf = @(x) x(1).^2 + 10*x(2).^2;\r\nh = @(x) sum(x)-4;\r\nx0 = [0; 0];\r\n[xmin,fmin]=sumt_exterior(f,[],h,x0)\r\nxcorrect=[40; 4]/11;\r\nassert(norm(xmin-xcorrect)\u003c1e-3)\r\nassert(abs(fmin-f(xcorrect))\u003c1e-4)\r\n\r\n%%\r\nf = @(x) x(1).^2 + 10*x(2).^2;\r\nh = @(x) sum(x)-4;\r\nx0 = [0; 0];\r\nr  = [1, 5];\r\n[xmin,fmin]=sumt_exterior(f,[],h,x0,r) % 2 iterations\r\nxr2=[3.0769\r\n     0.3077];\r\nassert(norm(xmin-xr2,inf)\u003c1e-4)\r\nassert(abs(fmin-f(xr2))\u003c1e-4)\r\n\r\n%% Vanderplaats, Figure 5-4 example\r\nf = @(x) sum(x);\r\ng = @(x) [x(1) - 2*x(2) - 2\r\n          8 - 6*x(1) + x(1).^2 - x(2)];\r\nx0 = [0; 0];\r\n[xmin,fmin]=sumt_exterior(f,g,[],x0)\r\nxcorrect=[2; 0];\r\nassert(norm(xmin-xcorrect)\u003c1e-4)\r\nassert(abs(fmin-f(xcorrect))\u003c1e-4)\r\n\r\n%%\r\nf = @(x) sum(x);\r\ng = @(x) [x(1) - 2*x(2) - 2\r\n          8 - 6*x(1) + x(1).^2 - x(2)];\r\nx0 = [5; 5];\r\nr  = [1, 2];\r\n[xmin,fmin]=sumt_exterior(f,g,[],x0,r) % 2 iterations\r\nxr2=[1.9536\r\n    
-0.0496];\r\nassert(norm(xmin-xr2)\u003c1e-4)\r\nassert(abs(fmin-f(xr2))\u003c1e-4)","published":true,"deleted":false,"likes_count":1,"comments_count":3,"created_by":279,"edited_by":null,"edited_at":null,"deleted_by":null,"deleted_at":null,"solvers_count":12,"test_suite_updated_at":null,"rescore_all_solutions":false,"group_id":1,"created_at":"2012-03-24T03:18:31.000Z","updated_at":"2025-12-10T14:27:22.000Z","published_at":"2012-03-24T03:18:36.000Z","restored_at":null,"restored_by":null,"spam":false,"simulink":false,"admin_reviewed":false,"description_opc":"{\"relationships\":[{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/document\",\"targetMode\":\"\",\"relationshipId\":\"rId1\",\"target\":\"/matlab/document.xml\"},{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/output\",\"targetMode\":\"\",\"relationshipId\":\"rId2\",\"target\":\"/matlab/output.xml\"}],\"parts\":[{\"partUri\":\"/matlab/document.xml\",\"relationship\":[],\"contentType\":\"application/vnd.mathworks.matlab.code.document+xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?\u003e\\n\u003cw:document xmlns:w=\\\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\\\"\u003e\u003cw:body\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"text\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003eWrite a function to find the values of a design variable vector,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, that minimizes a scalar objective function,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, given a function 
handle to\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, a starting guess,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex0\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, subject to inequality and equality constraints with function handles\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eg\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e\u0026lt;=0 and\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eh\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e=0, respectively. 
Use a quadratic exterior penalty for the sequential unconstrained minimization technique (SUMT) with an optional input vector of penalty parameter values that become increasingly larger.\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003c/w:body\u003e\u003c/w:document\u003e\"},{\"partUri\":\"/matlab/output.xml\",\"contentType\":\"text/xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\" standalone=\\\"no\\\" ?\u003e\u003cembeddedOutputs\u003e\u003cmetaData\u003e\u003cevaluationState\u003emanual\u003c/evaluationState\u003e\u003clayoutState\u003ecode\u003c/layoutState\u003e\u003coutputStatus\u003eready\u003c/outputStatus\u003e\u003c/metaData\u003e\u003coutputArray type=\\\"array\\\"/\u003e\u003cregionArray type=\\\"array\\\"/\u003e\u003c/embeddedOutputs\u003e\"}]}"},{"id":493,"title":"Quasi-Newton Method for Unconstrained Minimization using BFGS Update","description":"Write a function to find the values of a design variable vector, _x_, that minimizes an unconstrained scalar objective function, _f_, given a function handle to _f_ and its gradient, a starting guess, _x0_, a gradient tolerance, _TolGrad_, and a maximum number of iterations, _MaxIter_, using the Quasi-Newton (Secant) Method. Initialize the Hessian approximation as an identity matrix. Update the Hessian matrix approximation using the BFGS update formula.","description_html":"\u003cp\u003eWrite a function to find the values of a design variable vector, \u003ci\u003ex\u003c/i\u003e, that minimizes an unconstrained scalar objective function, \u003ci\u003ef\u003c/i\u003e, given a function handle to \u003ci\u003ef\u003c/i\u003e and its gradient, a starting guess, \u003ci\u003ex0\u003c/i\u003e, a gradient tolerance, \u003ci\u003eTolGrad\u003c/i\u003e, and a maximum number of iterations, \u003ci\u003eMaxIter\u003c/i\u003e, using the Quasi-Newton (Secant) Method. Initialize the Hessian approximation as an identity matrix. 
Update the Hessian matrix approximation using the BFGS update formula.\u003c/p\u003e","function_template":"function [x,f1]=BFGS_Quasi_Newton(F,gradF,x0,TolGrad,MaxIter)\r\n%\r\n%  Input\r\n%  F........ function handle for objective function F(x) with input argument, x\r\n%  gradF... function handle for gradient of objective function with input argument, x\r\n%  x0....... design variable vector initial guess for starting point\r\n%  TolGrad.. Tolerance for norm of objective gradient to be zero\r\n%  MaxIter.. Maximum number of iterations\r\n%\r\n%  Output\r\n%  x........ final design variable vector found to minimize objective function, F(x)\r\n%  f1....... final objective function minimum value\r\n\r\nif nargin\u003c4 || isempty(TolGrad), TolGrad=1e-3; end\r\nif nargin\u003c5 || isempty(MaxIter), MaxIter=100;  end\r\n\r\n% Change following steepest descent algorithm to Quasi-Newton Method \r\n% using BFGS Hessian updates.\r\n\r\n%% Initialize loop parameters\r\niter = 0;\r\nf0 = F(x0);\r\nc0 = gradF(x0);\r\nc  = c0;\r\nConverged = norm(c) \u003c TolGrad;\r\n\r\n%% Search direction and line search iterations\r\ndisp('iter alpha f(alpha)  norm(c)')\r\nfprintf('%4.0f %6.6f %8.4f %8.4f\\n',[iter, 0, f0, norm(c)])\r\nwhile iter\u003cMaxIter \u0026\u0026 ~Converged\r\n\titer = iter + 1;\r\n\td = -c;\r\n\tf = @(alpha) F(x0+alpha*d);\r\n\t[alpha,f1] = fminsearch( f, 0 );\r\n\tx = x0 + alpha*d;\r\n\r\n   Converged = norm(c) \u003c TolGrad;\r\n   x0 = x;\r\n   c0 = c;\r\nend\r\nfprintf('%4.0f %6.4f %8.4f %8.4f\\n',[iter, alpha, f1, norm(c)])\r\nend","test_suite":"%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [-1.9; 2.0];\r\nx1=[\r\n   -1.4478\r\n    2.1184];\r\nx2=[\r\n    1.7064\r\n    2.9446];\r\nf1=6.0419;\r\nf2=0.6068;\r\n[xmin,fmin]=BFGS_Quasi_Newton(F,gradF,x0,0.01,1) % single steepest 
descent\r\nassert(norm(xmin-x1)\u003c0.2||norm(xmin-x2)\u003c0.2)\r\nassert( abs(fmin-f1)\u003c0.5|| abs(fmin-f2)\u003c0.5) % 2 local min\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [0; 0];\r\nxcorrect=[\r\n    0.2927\r\n    0.0506];\r\nfcorrect=0.63;\r\n[xmin,fmin]=BFGS_Quasi_Newton(F,gradF,x0,1e-2,2) % two iterations\r\nassert(norm(xmin-xcorrect)\u003c0.1)\r\nassert( abs(fmin-fcorrect)\u003c0.01)\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [0;0];\r\nxcorrect = [1;1;];\r\nfcorrect = 0;\r\n[xmin,fmin]=BFGS_Quasi_Newton(F,gradF,x0)\r\nassert(norm(xmin-xcorrect)\u003c0.01)\r\nassert(abs(fmin-fcorrect)\u003c0.01);\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nHessF=@(x) 200*[4*x(1).^2-2*(x(2)-x(1).^2)+1/100, -2*x(1);\r\n                -2*x(1), 1];\r\nxcorrect = [1;1];\r\nfcorrect = 0;\r\nx0 = [-1.9; 
2];\r\n[xmin,fmin]=BFGS_Quasi_Newton(F,gradF,x0,1e-4)\r\nassert(isequal(round(xmin),xcorrect))\r\nassert(isequal(round(fmin),fcorrect))","published":true,"deleted":false,"likes_count":1,"comments_count":0,"created_by":279,"edited_by":null,"edited_at":null,"deleted_by":null,"deleted_at":null,"solvers_count":22,"test_suite_updated_at":"2012-03-20T03:17:25.000Z","rescore_all_solutions":false,"group_id":1,"created_at":"2012-03-13T17:04:07.000Z","updated_at":"2025-12-10T14:21:49.000Z","published_at":"2012-03-20T03:21:19.000Z","restored_at":null,"restored_by":null,"spam":false,"simulink":false,"admin_reviewed":false,"description_opc":"{\"relationships\":[{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/document\",\"targetMode\":\"\",\"relationshipId\":\"rId1\",\"target\":\"/matlab/document.xml\"},{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/output\",\"targetMode\":\"\",\"relationshipId\":\"rId2\",\"target\":\"/matlab/output.xml\"}],\"parts\":[{\"partUri\":\"/matlab/document.xml\",\"relationship\":[],\"contentType\":\"application/vnd.mathworks.matlab.code.document+xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?\u003e\\n\u003cw:document xmlns:w=\\\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\\\"\u003e\u003cw:body\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"text\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003eWrite a function to find the values of a design variable vector,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, that minimizes an unconstrained scalar objective function,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e 
\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, given a function handle to\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e and its gradient, a starting guess,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex0\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, a gradient tolerance,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eTolGrad\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, and a maximum number of iterations,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eMaxIter\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, using the Quasi-Newton (Secant) Method. Initialize the Hessian approximation as an identity matrix. 
Update the Hessian matrix approximation using the BFGS update formula.\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003c/w:body\u003e\u003c/w:document\u003e\"},{\"partUri\":\"/matlab/output.xml\",\"contentType\":\"text/xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\" standalone=\\\"no\\\" ?\u003e\u003cembeddedOutputs\u003e\u003cmetaData\u003e\u003cevaluationState\u003emanual\u003c/evaluationState\u003e\u003clayoutState\u003ecode\u003c/layoutState\u003e\u003coutputStatus\u003eready\u003c/outputStatus\u003e\u003c/metaData\u003e\u003coutputArray type=\\\"array\\\"/\u003e\u003cregionArray type=\\\"array\\\"/\u003e\u003c/embeddedOutputs\u003e\"}]}"},{"id":485,"title":"Fletcher-Reeves Conjugate Gradient Method","description":"Write a function to find the values of a design variable vector, _x_, that minimizes an unconstrained scalar objective function, _f_, given a function handle to _f_ and its gradient, a starting guess, _x0_, a gradient tolerance, _TolGrad_, and a maximum number of iterations, _MaxIter_, using Fletcher-Reeves Conjugate Gradient Method.","description_html":"\u003cp\u003eWrite a function to find the values of a design variable vector, \u003ci\u003ex\u003c/i\u003e, that minimizes an unconstrained scalar objective function, \u003ci\u003ef\u003c/i\u003e, given a function handle to \u003ci\u003ef\u003c/i\u003e and its gradient, a starting guess, \u003ci\u003ex0\u003c/i\u003e, a gradient tolerance, \u003ci\u003eTolGrad\u003c/i\u003e, and a maximum number of iterations, \u003ci\u003eMaxIter\u003c/i\u003e, using Fletcher-Reeves Conjugate Gradient Method.\u003c/p\u003e","function_template":"function [x,f1]=ConjGrad(F,gradF,x0,TolGrad,MaxIter)\r\n%\r\n%  Input\r\n%  F........ function handle for objective function F(x) with input argument, x\r\n%  gradF... function handle for gradient of objective function with input argument, x\r\n%  x0....... design variable vector initial guess for starting point\r\n%  TolGrad.. 
Tolerance for norm of objective gradient to be zero\r\n%  MaxIter.. Maximum number of iterations\r\n%\r\n%  Output\r\n%  x........ final design variable vector found to minimize objective function, F(x)\r\n%  f1....... final objective function minimum value\r\n\r\nif nargin\u003c4 || isempty(TolGrad), TolGrad=1e-2; end\r\nif nargin\u003c5 || isempty(MaxIter), MaxIter=20; end\r\n\r\n%% Change following steepest descent algorithm to Conjugate Gradient Method\r\n\r\n% Initialize loop parameters\r\niter = 0;\r\nf0   = F(x0);\r\nc    = gradF(x0);\r\nConverged = norm(c) \u003c TolGrad;\r\ndisp('   iter alpha f(alpha)  norm(c)')\r\nfprintf(' %6.0f %5.3f %8.4f %8.4f\\n',[iter, 0, f0, norm(c)])\r\nalpha = 1e-6*norm(c);\r\n\r\n%% Search direction and line search iterations\r\nwhile iter\u003cMaxIter \u0026\u0026 ~Converged\r\n\titer = iter + 1;\r\n   d = -c;\r\n   f = @(alpha) F(x0+alpha*d);\r\n   alphaUpper = bracket( f, 0, 0.1*alpha );\r\n   [alpha,f1] = fminbnd( f, 0, alphaUpper );\r\n   x = x0 + alpha*d;\r\n   c = gradF(x);\r\n   Converged = (norm(c) \u003c TolGrad);\r\n   x0 = x;\r\nend\r\nfprintf(' %6.0f %5.3f %8.4f %8.4f\\n',[iter, alpha, f1, norm(c)])\r\nend\r\n\r\n%% Bracket interval for 1-D line search\r\n   function [alphaUpper,alphaLower,nfunc] = bracket( f, alpha0, delta, MaxIter )\r\n\t% usage: [alphaUpper,alphaLower,nfunc] = bracket( f, alpha0, delta, MaxIter )\r\n\t%  Golden section search to bracket unimodal, univariate minimum\r\n\t%--input\r\n\t%  f = function handle to univariate function of alpha\r\n\t%  alpha0... Starting point lower bound on bracket\r\n\t%  delta.... Guess for upper bound on bracket on unimodal min\r\n\t%  MaxIter.. Maximum number of iterations\r\n\t%--output\r\n\t%  alphaUpper... Upper bound on alpha to bracket min of f(alpha)\r\n\t%  alphaLower... Lower bound on alpha to bracket min of f(alpha)\r\n\t%  nfunc........ 
Number of function evaluations\r\n\r\n      if nargin\u003c3, delta = 1.e-3; end\r\n      if nargin\u003c4, MaxIter=1e3; end\r\n      \r\n      %--Local variables\r\n      %  expand... Expansion factor for extending Upper Bound\r\n      %  phi...... Golden section ratio = 1.618...\r\n      phi=(1+sqrt(5))/2;\r\n      expand = phi; % Set=1 to use equal interval search\r\n      \r\n      % Initialize variables\r\n      alphaLower=alpha0;\r\n      alphaLast=alpha0;\r\n      alphaNext=delta;\r\n      fLast=f(alphaLast);\r\n      fNext=f(alphaNext);\r\n      iter=1;\r\n      while fNext\u003cfLast \u0026\u0026 iter\u003c=MaxIter\r\n         iter = iter + 1;\r\n         delta = expand*delta;\r\n         alphaLower = alphaLast;\r\n         alphaLast = alphaNext;\r\n         alphaNext = alphaNext + delta;\r\n         fLast = fNext;\r\n         fNext = f(alphaNext);\r\n      end\r\n      alphaUpper=alphaNext;\r\n      nfunc=iter+1;\r\n   end","test_suite":"%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [-1.9; 2.0];\r\nx1=[\r\n   -1.4478\r\n    2.1184];\r\nx2=[\r\n    1.7064\r\n    2.9446];\r\nf1=6.0419;\r\nf2=0.6068;\r\n[xmin,fmin]=ConjGrad(F,gradF,x0,0.01,1) % single steepest descent\r\nassert(norm(xmin-x1)\u003c0.2||norm(xmin-x2)\u003c0.2)\r\nassert( abs(fmin-f1)\u003c0.5|| abs(fmin-f2)\u003c0.5) % 2 local min\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [0; 0];\r\nxcorrect=[\r\n    0.2926\r\n    0.0505];\r\nfcorrect=0.6238;\r\n[xmin,fmin]=ConjGrad(F,gradF,x0,1e-2,2) % two iterations\r\nassert(norm(xmin-xcorrect)\u003c0.1)\r\nassert( abs(fmin-fcorrect)\u003c0.01)\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 
100*(2*x(2)-2*x(1).^2)];\r\nx0 = [1.1;0.9];\r\nxcorrect = [1;1];\r\nfcorrect = 0;\r\n[xmin,fmin]=ConjGrad(F,gradF,x0) % default 20 iterations\r\nassert(norm(xmin-xcorrect)\u003c0.1)\r\nassert(abs(fmin-fcorrect)\u003c0.01);\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [0; 0];\r\nxcorrect = [1;1];\r\nfcorrect = 0;\r\n[xmin,fmin]=ConjGrad(F,gradF,x0,0.01,100) % Convergence before 100 iterations\r\nassert(norm(xmin-xcorrect)\u003c0.1)\r\nassert(abs(fmin-fcorrect)\u003c0.01);\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [-1.9; 2];\r\nxcorrect = [1;1];\r\nfcorrect = 0;\r\n[xmin,fmin]=ConjGrad(F,gradF,x0,1e-3,200)\r\nassert(isequal(round(xmin),xcorrect))\r\nassert(isequal(round(fmin),fcorrect))","published":true,"deleted":false,"likes_count":0,"comments_count":0,"created_by":279,"edited_by":null,"edited_at":null,"deleted_by":null,"deleted_at":null,"solvers_count":31,"test_suite_updated_at":"2012-03-19T22:08:55.000Z","rescore_all_solutions":false,"group_id":1,"created_at":"2012-03-12T08:38:19.000Z","updated_at":"2025-12-10T14:17:15.000Z","published_at":"2012-03-19T22:15:16.000Z","restored_at":null,"restored_by":null,"spam":false,"simulink":false,"admin_reviewed":false,"description_opc":"{\"relationships\":[{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/document\",\"targetMode\":\"\",\"relationshipId\":\"rId1\",\"target\":\"/matlab/document.xml\"},{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/output\",\"targetMode\":\"\",\"relationshipId\":\"rId2\",\"target\":\"/matlab/output.xml\"}],\"parts\":[{\"partUri\":\"/matlab/document.xml\",\"relationship\":[],\"contentType\":\"application/vnd.mathworks.matlab.code.document+xml\",\"content\":\"\u003
c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?\u003e\\n\u003cw:document xmlns:w=\\\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\\\"\u003e\u003cw:body\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"text\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003eWrite a function to find the values of a design variable vector,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, that minimizes an unconstrained scalar objective function,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, given a function handle to\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e and its gradient, a starting guess,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex0\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, a gradient tolerance,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eTolGrad\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, and a maximum number of iterations,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eMaxIter\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, using 
Fletcher-Reeves Conjugate Gradient Method.\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003c/w:body\u003e\u003c/w:document\u003e\"},{\"partUri\":\"/matlab/output.xml\",\"contentType\":\"text/xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\" standalone=\\\"no\\\" ?\u003e\u003cembeddedOutputs\u003e\u003cmetaData\u003e\u003cevaluationState\u003emanual\u003c/evaluationState\u003e\u003clayoutState\u003ecode\u003c/layoutState\u003e\u003coutputStatus\u003eready\u003c/outputStatus\u003e\u003c/metaData\u003e\u003coutputArray type=\\\"array\\\"/\u003e\u003cregionArray type=\\\"array\\\"/\u003e\u003c/embeddedOutputs\u003e\"}]}"},{"id":527,"title":"Augmented Lagrange Multiplier (ALM) Method","description":"Write a function to find the values of a design variable vector, _x_, that minimizes a scalar objective function, _f_ ( _x_ ), given a function handle to _f_, and a starting guess, _x0_, subject to inequality and equality constraints with function handles _g_\u003c=0 and _h_=0, respectively. Use the Augmented Lagrangian Multiplier Method (a.k.a., the Method of Multipliers) and return the estimate of Lagrange multipliers _u_ for inequalities, and _v_ for equality constraints.","description_html":"\u003cp\u003eWrite a function to find the values of a design variable vector, \u003ci\u003ex\u003c/i\u003e, that minimizes a scalar objective function, \u003ci\u003ef\u003c/i\u003e ( \u003ci\u003ex\u003c/i\u003e ), given a function handle to \u003ci\u003ef\u003c/i\u003e, and a starting guess, \u003ci\u003ex0\u003c/i\u003e, subject to inequality and equality constraints with function handles \u003ci\u003eg\u003c/i\u003e\u0026lt;=0 and \u003ci\u003eh\u003c/i\u003e=0, respectively. 
Use the Augmented Lagrangian Multiplier Method (a.k.a., the Method of Multipliers) and return the estimate of Lagrange multipliers \u003ci\u003eu\u003c/i\u003e for inequalities, and \u003ci\u003ev\u003c/i\u003e for equality constraints.\u003c/p\u003e","function_template":"function [x,fmin,u,v]=alm(f,g,h,x0,MaxIter)\r\nif isempty(g), g=@(x)0; end\r\nif isempty(h), h=@(x)0; end\r\nif nargin\u003c5, MaxIter=20; end","test_suite":"%% Haftka \u0026 Gurdal, Figure 5.7.1 example\r\nf = @(x) 0.5*x;\r\ng = @(x) 2-x;\r\nx0 = 0;\r\n[xmin,fmin,u]=alm(f,g,[],x0) %#ok\u003c*NOPTS\u003e\r\nxcorrect=2;\r\nucorrect=0.5;\r\nassert(norm(xmin-xcorrect)\u003c1e-3)\r\nassert(abs(fmin-f(xcorrect))\u003c1e-3)\r\nassert(abs(u-ucorrect)\u003c1e-3)\r\n\r\n%% Haftka \u0026 Gurdal, Example 5.7.1\r\nf = @(x) x(1).^2 + 10*x(2).^2;\r\nh = @(x) sum(x)-4;\r\nx0 = [0; 0];\r\n[xmin,fmin,~,v]=alm(f,[],h,x0)\r\nxcorrect=[40; 4]/11;\r\nvcorrect=-7.2727;\r\nassert(norm(xmin-xcorrect)\u003c1e-3)\r\nassert(abs(fmin-f(xcorrect))\u003c1e-4)\r\nassert(abs(v-vcorrect)\u003c1e-2)\r\n\r\n%% Vanderplaats, Figure 5-4 example\r\nf = @(x) sum(x);\r\ng = @(x) [x(1) - 2*x(2) - 2\r\n          8 - 6*x(1) + x(1).^2 - x(2)];\r\nx0 = [0; 0];\r\n[xmin,fmin,u]=alm(f,g,[],x0)\r\nxcorrect=[2; 0];\r\nucorrect=[0.2; 
0.6];\r\nassert(norm(xmin-xcorrect)\u003c1e-4)\r\nassert(abs(fmin-f(xcorrect))\u003c1e-4)\r\nassert(norm(u-ucorrect)\u003c1e-2)","published":true,"deleted":false,"likes_count":0,"comments_count":0,"created_by":279,"edited_by":null,"edited_at":null,"deleted_by":null,"deleted_at":null,"solvers_count":6,"test_suite_updated_at":"2012-03-25T02:49:42.000Z","rescore_all_solutions":false,"group_id":1,"created_at":"2012-03-25T02:46:55.000Z","updated_at":"2025-09-01T07:55:40.000Z","published_at":"2012-03-25T02:49:42.000Z","restored_at":null,"restored_by":null,"spam":false,"simulink":false,"admin_reviewed":false,"description_opc":"{\"relationships\":[{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/document\",\"targetMode\":\"\",\"relationshipId\":\"rId1\",\"target\":\"/matlab/document.xml\"},{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/output\",\"targetMode\":\"\",\"relationshipId\":\"rId2\",\"target\":\"/matlab/output.xml\"}],\"parts\":[{\"partUri\":\"/matlab/document.xml\",\"relationship\":[],\"contentType\":\"application/vnd.mathworks.matlab.code.document+xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?\u003e\\n\u003cw:document xmlns:w=\\\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\\\"\u003e\u003cw:body\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"text\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003eWrite a function to find the values of a design variable vector,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, that minimizes a scalar objective function,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e 
\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e (\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e ), given a function handle to\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, and a starting guess,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex0\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, subject to inequality and equality constraints with function handles\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eg\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e\u0026lt;=0 and\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eh\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e=0, respectively. 
Use the Augmented Lagrangian Multiplier Method (a.k.a., the Method of Multipliers) and return the estimate of Lagrange multipliers\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eu\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e for inequalities, and\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ev\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e for equality constraints.\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003c/w:body\u003e\u003c/w:document\u003e\"},{\"partUri\":\"/matlab/output.xml\",\"contentType\":\"text/xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\" standalone=\\\"no\\\" ?\u003e\u003cembeddedOutputs\u003e\u003cmetaData\u003e\u003cevaluationState\u003emanual\u003c/evaluationState\u003e\u003clayoutState\u003ecode\u003c/layoutState\u003e\u003coutputStatus\u003eready\u003c/outputStatus\u003e\u003c/metaData\u003e\u003coutputArray type=\\\"array\\\"/\u003e\u003cregionArray type=\\\"array\\\"/\u003e\u003c/embeddedOutputs\u003e\"}]}"}],"problem_search":{"errors":[],"problems":[{"id":524,"title":"Sequential Unconstrained Minimization (SUMT) using Interior Penalty","description":"Write a function to find the values of a design variable vector, _x_, that minimizes a scalar objective function, _f_ ( _x_ ), given a function handle to _f_, and a starting guess, _x0_, subject to inequality constraints _g_ ( _x_ )\u003c=0 with function handle _g_. Use a logarithmic interior penalty for the sequential unconstrained minimization technique (SUMT) with an optional input vector of increasing penalty parameter values. 
That is, the penalty (barrier) function, _P_, is\r\n\r\n P(x,r) = -sum(log(-g(x)))/r\r\n\r\nwhere _r_ is the penalty parameter.","description_html":"\u003cp\u003eWrite a function to find the values of a design variable vector, \u003ci\u003ex\u003c/i\u003e, that minimizes a scalar objective function, \u003ci\u003ef\u003c/i\u003e ( \u003ci\u003ex\u003c/i\u003e ), given a function handle to \u003ci\u003ef\u003c/i\u003e, and a starting guess, \u003ci\u003ex0\u003c/i\u003e, subject to inequality constraints \u003ci\u003eg\u003c/i\u003e ( \u003ci\u003ex\u003c/i\u003e )\u0026lt;=0 with function handle \u003ci\u003eg\u003c/i\u003e. Use a logarithmic interior penalty for the sequential unconstrained minimization technique (SUMT) with an optional input vector of increasing penalty parameter values. That is, the penalty (barrier) function, \u003ci\u003eP\u003c/i\u003e, is\u003c/p\u003e\u003cpre\u003e P(x,r) = -sum(log(-g(x)))/r\u003c/pre\u003e\u003cp\u003ewhere \u003ci\u003er\u003c/i\u003e is the penalty parameter.\u003c/p\u003e","function_template":"function [x,fmin]=sumt_interior(f,g,x0,penalty_parameter)\r\nif nargin\u003c4 || isempty(penalty_parameter)\r\n   penalty_parameter=? 
% initialize penalty parameter values for SUMT loop\r\nend\r\n% You may find that fminsearch is not accurate enough for the unconstrained minimization","test_suite":"%% Haftka \u0026 Gurdal, Figure 5.7.1 example\r\nf = @(x) 0.5*x;\r\ng = @(x) 2-x;\r\nx0 = 5;\r\n[xmin,fmin]=sumt_interior(f,g,x0) %#ok\u003c*NOPTS\u003e\r\nxcorrect=2;\r\nassert(norm(xmin-xcorrect)\u003c2e-3)\r\nassert(abs(fmin-f(xcorrect))\u003c1e-3)\r\n\r\n%%\r\nf = @(x) 0.5*x;\r\ng = @(x) 2-x;\r\nx0 = 5;\r\n[xmin,fmin]=sumt_interior(f,g,x0,1) % 1 iteration for unit penalty value\r\nxr1=4;\r\nassert(norm(xmin-xr1)\u003c1e-4)\r\nassert(abs(fmin-f(xr1))\u003c1e-4)\r\n\r\n%% Vanderplaats, Figure 5-4 example\r\nf = @(x) sum(x);\r\ng = @(x) [x(1) - 2*x(2) - 2\r\n          8 - 6*x(1) + x(1).^2 - x(2)];\r\nx0 = [3; 3];\r\n[xmin,fmin]=sumt_interior(f,g,x0,1) % 1 iteration\r\nxr2=[1.8686\r\n     2.1221];\r\nassert(norm(xmin-xr2)\u003c1e-2)\r\nassert(abs(fmin-f(xr2))\u003c1e-2)\r\n\r\n%%\r\nf = @(x) sum(x);\r\ng = @(x) [x(1) - 2*x(2) - 2\r\n          8 - 6*x(1) + x(1).^2 - x(2)];\r\nx0 = [2.1; 0.1];\r\nr  = 2^12;\r\n[xmin,fmin]=sumt_interior(f,g,x0,r) % Final iteration\r\nxcorrect=[2; 
0];\r\nassert(norm(xmin-xcorrect)\u003c5e-3)\r\nassert(abs(fmin-f(xcorrect))\u003c2e-3)","published":true,"deleted":false,"likes_count":1,"comments_count":1,"created_by":279,"edited_by":null,"edited_at":null,"deleted_by":null,"deleted_at":null,"solvers_count":10,"test_suite_updated_at":null,"rescore_all_solutions":false,"group_id":1,"created_at":"2012-03-24T03:23:26.000Z","updated_at":"2025-12-10T16:18:46.000Z","published_at":"2012-03-24T18:41:49.000Z","restored_at":null,"restored_by":null,"spam":false,"simulink":false,"admin_reviewed":false,"description_opc":"{\"relationships\":[{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/document\",\"targetMode\":\"\",\"relationshipId\":\"rId1\",\"target\":\"/matlab/document.xml\"},{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/output\",\"targetMode\":\"\",\"relationshipId\":\"rId2\",\"target\":\"/matlab/output.xml\"}],\"parts\":[{\"partUri\":\"/matlab/document.xml\",\"relationship\":[],\"contentType\":\"application/vnd.mathworks.matlab.code.document+xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?\u003e\\n\u003cw:document xmlns:w=\\\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\\\"\u003e\u003cw:body\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"text\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003eWrite a function to find the values of a design variable vector,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, that minimizes a scalar objective function,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e 
(\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e ), given a function handle to\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, and a starting guess,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex0\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, subject to inequality constraints\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eg\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e (\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e )\u0026lt;=0 with function handle\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eg\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e. Use a logarithmic interior penalty for the sequential unconstrained minimization technique (SUMT) with an optional input vector of increasing penalty parameter values. 
That is, the penalty (barrier) function,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eP\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, is\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"code\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003e\u003c![CDATA[ P(x,r) = -sum(log(-g(x)))/r]]\u003e\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"text\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003ewhere\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003er\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e is the penalty parameter.\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003c/w:body\u003e\u003c/w:document\u003e\"},{\"partUri\":\"/matlab/output.xml\",\"contentType\":\"text/xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\" standalone=\\\"no\\\" ?\u003e\u003cembeddedOutputs\u003e\u003cmetaData\u003e\u003cevaluationState\u003emanual\u003c/evaluationState\u003e\u003clayoutState\u003ecode\u003c/layoutState\u003e\u003coutputStatus\u003eready\u003c/outputStatus\u003e\u003c/metaData\u003e\u003coutputArray type=\\\"array\\\"/\u003e\u003cregionArray type=\\\"array\\\"/\u003e\u003c/embeddedOutputs\u003e\"}]}"},{"id":481,"title":"Rosenbrock's Banana Function and its derivatives","description":"Write a function to return the value of \u003chttp://en.wikipedia.org/wiki/Rosenbrock_function Rosenbrock's two-dimensional banana function\u003e, as well as its gradient (column) vector and Hessian matrix, given a vector of its two independent variables _x1_ and _x2_. 
Here's the Rosenbrock function: \r\n\r\n 100*[(x2-x1^2)^2] + (1-x1)^2","description_html":"\u003cp\u003eWrite a function to return the value of \u003ca href=\"http://en.wikipedia.org/wiki/Rosenbrock_function\"\u003eRosenbrock's two-dimensional banana function\u003c/a\u003e, as well as its gradient (column) vector and Hessian matrix, given a vector of its two independent variables \u003ci\u003ex1\u003c/i\u003e and \u003ci\u003ex2\u003c/i\u003e. Here's the Rosenbrock function:\u003c/p\u003e\u003cpre\u003e 100*[(x2-x1^2)^2] + (1-x1)^2\u003c/pre\u003e","function_template":"function [value,gradient,Hessian] = Rosenbrock_banana(x)\r\n  value = x;\r\n  gradient = [0; 0];\r\n  Hessian = [0, 0; 0, 0]\r\nend","test_suite":"%%\r\nassert(isempty(regexp(fileread('Rosenbrock_banana.m'),'assert.m')))\r\n%%\r\nx = [0; 0];\r\nassert(isequal(Rosenbrock_banana(x),1))\r\n%%\r\nx = [1; 1];\r\nassert(isequal(Rosenbrock_banana(x),0))\r\n%%\r\nx = [1; -1];\r\nassert(isequal(Rosenbrock_banana(x),400))\r\n%%\r\nx = [-1; 0.5];\r\nassert(isequal(Rosenbrock_banana(x),29))\r\n%%\r\nx = [0; 0];\r\n[~,grad]=Rosenbrock_banana(x);\r\nassert(isequal(grad,[-2; 0]))\r\n%%\r\nx = [0; 0];\r\n[~,~,Hess]=Rosenbrock_banana(x);\r\nassert(isequal(Hess,diag([2, 200])))\r\n%%\r\nx = [1; 1];\r\n[~,grad]=Rosenbrock_banana(x);\r\nassert(isequal(grad,[0; 0]))\r\n%%\r\nx = [1; 1];\r\n[~,~,Hess]=Rosenbrock_banana(x);\r\nassert(isequal(Hess,[802, -400; -400, 200]))\r\n%%\r\nx = [-1.9; 2];\r\ncorrect_value = 267.6200;\r\ncorrect_grad = -1e3*[1.2294; 0.3220];\r\ncorrect_Hess = [3534, 760; 760, 
200];\r\n[val,grad,Hess]=Rosenbrock_banana(x);\r\nassert(isequal(str2num(num2str(val)),correct_value))\r\nassert(isequal(str2num(num2str(grad)),correct_grad))\r\nassert(all(max(abs(Hess-correct_Hess)\u003c1e-8)))","published":true,"deleted":false,"likes_count":2,"comments_count":2,"created_by":279,"edited_by":null,"edited_at":null,"deleted_by":null,"deleted_at":null,"solvers_count":166,"test_suite_updated_at":"2013-08-25T01:30:52.000Z","rescore_all_solutions":false,"group_id":25,"created_at":"2012-03-12T05:01:12.000Z","updated_at":"2026-03-29T19:32:52.000Z","published_at":"2012-03-12T21:14:57.000Z","restored_at":null,"restored_by":null,"spam":false,"simulink":false,"admin_reviewed":false,"description_opc":"{\"relationships\":[{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/document\",\"targetMode\":\"\",\"relationshipId\":\"rId1\",\"target\":\"/matlab/document.xml\"},{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/output\",\"targetMode\":\"\",\"relationshipId\":\"rId2\",\"target\":\"/matlab/output.xml\"}],\"parts\":[{\"partUri\":\"/matlab/document.xml\",\"relationship\":[],\"contentType\":\"application/vnd.mathworks.matlab.code.document+xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?\u003e\\n\u003cw:document xmlns:w=\\\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\\\"\u003e\u003cw:body\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"text\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003eWrite a function to return the value of\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:hyperlink w:docLocation=\\\"http://en.wikipedia.org/wiki/Rosenbrock_function\\\"\u003e\u003cw:r\u003e\u003cw:t\u003eRosenbrock's two-dimensional banana function\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:hyperlink\u003e\u003cw:r\u003e\u003cw:t\u003e, as well as it's gradient (column) vector and Hessian matrix, 
given a vector of its two independent variables\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex1\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e and\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex2\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e. Here's the Rosenbrock function:\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"code\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003e\u003c![CDATA[ 100*[(x2-x1^2)^2] + (1-x1)^2]]\u003e\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003c/w:body\u003e\u003c/w:document\u003e\"},{\"partUri\":\"/matlab/output.xml\",\"contentType\":\"text/xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\" standalone=\\\"no\\\" ?\u003e\u003cembeddedOutputs\u003e\u003cmetaData\u003e\u003cevaluationState\u003emanual\u003c/evaluationState\u003e\u003clayoutState\u003ecode\u003c/layoutState\u003e\u003coutputStatus\u003eready\u003c/outputStatus\u003e\u003c/metaData\u003e\u003coutputArray type=\\\"array\\\"/\u003e\u003cregionArray type=\\\"array\\\"/\u003e\u003c/embeddedOutputs\u003e\"}]}"},{"id":484,"title":"Steepest Descent Method","description":"Write a function to find the values of a design variable vector, _x_, that minimizes an unconstrained scalar objective function, _f_, given a function handle to _f_ and its gradient, a starting guess, _x0_, a gradient tolerance, _TolGrad_, and a maximum number of iterations, _MaxIter_, using the Steepest Descent Method.","description_html":"\u003cp\u003eWrite a function to find the values of a design variable vector, \u003ci\u003ex\u003c/i\u003e, that minimizes an unconstrained scalar objective function, \u003ci\u003ef\u003c/i\u003e, 
given a function handle to \u003ci\u003ef\u003c/i\u003e and its gradient, a starting guess, \u003ci\u003ex0\u003c/i\u003e, a gradient tolerance, \u003ci\u003eTolGrad\u003c/i\u003e, and a maximum number of iterations, \u003ci\u003eMaxIter\u003c/i\u003e, using the Steepest Descent Method.\u003c/p\u003e","function_template":"function [xmin,fmin]=SteepestDescent(F,gradF,x0,TolGrad,MaxIter)\r\n%\r\n%  Input\r\n%  F........ function handle for objective function F(x) with input argument, x\r\n%  gradF.... function handle for gradient of objective function with input argument, x\r\n%  x0....... design variable vector initial guess for starting point\r\n%  TolGrad.. Tolerance for norm of objective gradient to be zero\r\n%  MaxIter.. Maximum number of iterations\r\n%\r\n%  Output\r\n%  xmin..... final design variable vector found to minimize objective function, F(x)\r\n%  fmin..... final objective function minimum value\r\n\r\nif nargin\u003c4 || isempty(TolGrad), TolGrad=1e-2; end\r\nif nargin\u003c5 || isempty(MaxIter), MaxIter=20; end\r\n\r\n% Initialize loop parameters\r\niter = 0;\r\nf0   = F(x0);\r\nc    = gradF(x0);\r\nConverged = norm(c) \u003c TolGrad;\r\n\r\n%% Search direction and line search iterations\r\nwhile iter\u003cMaxIter \u0026\u0026 ~Converged\r\n   iter = iter + 1;\r\n   xmin = 0;\r\n   fmin = 0;\r\nend\r\nend","test_suite":"%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [-1.9; 2.0];\r\nx1=[\r\n   -1.4478\r\n    2.1184];\r\nx2=[\r\n    1.7064\r\n    2.9446];\r\nf1=6.0419;\r\nf2=0.6068;\r\n[xmin,fmin]=SteepestDescent(F,gradF,x0,0.01,1)\r\nassert(norm(xmin-x1)\u003c0.2||norm(xmin-x2)\u003c0.2)\r\nassert( abs(fmin-f1)\u003c0.5|| abs(fmin-f2)\u003c0.5) % 2 local min\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 
100*(2*x(2)-2*x(1).^2)];\r\nx0 = [0; 0];\r\nxcorrect=[1;1];\r\nfcorrect=0;\r\n[xmin,fmin]=SteepestDescent(F,gradF,x0) % 20 iterations default\r\nassert(norm((xmin-xcorrect),inf)\u003c1)\r\nassert(abs(fmin-fcorrect)\u003c0.8);\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [1.1; 0.9];\r\nxcorrect=[1;1];\r\nfcorrect=0;\r\n[xmin,fmin]=SteepestDescent(F,gradF,x0,1e-2,2000)\r\nassert(isequal(round(xmin),xcorrect))\r\nassert(isequal(round(fmin),fcorrect))","published":true,"deleted":false,"likes_count":2,"comments_count":5,"created_by":279,"edited_by":null,"edited_at":null,"deleted_by":null,"deleted_at":null,"solvers_count":32,"test_suite_updated_at":"2012-03-17T00:51:37.000Z","rescore_all_solutions":false,"group_id":1,"created_at":"2012-03-12T07:04:30.000Z","updated_at":"2025-12-10T14:07:35.000Z","published_at":"2012-03-17T01:01:14.000Z","restored_at":null,"restored_by":null,"spam":false,"simulink":false,"admin_reviewed":false,"description_opc":"{\"relationships\":[{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/document\",\"targetMode\":\"\",\"relationshipId\":\"rId1\",\"target\":\"/matlab/document.xml\"},{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/output\",\"targetMode\":\"\",\"relationshipId\":\"rId2\",\"target\":\"/matlab/output.xml\"}],\"parts\":[{\"partUri\":\"/matlab/document.xml\",\"relationship\":[],\"contentType\":\"application/vnd.mathworks.matlab.code.document+xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?\u003e\\n\u003cw:document xmlns:w=\\\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\\\"\u003e\u003cw:body\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"text\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003eWrite a function to find the values of a design variable 
vector,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, that minimizes an unconstrained scalar objective function,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, given a function handle to\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e and its gradient, a starting guess,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex0\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, a gradient tolerance,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eTolGrad\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, and a maximum number of iterations,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eMaxIter\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, using the Steepest Descent Method.\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003c/w:body\u003e\u003c/w:document\u003e\"},{\"partUri\":\"/matlab/output.xml\",\"contentType\":\"text/xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\" standalone=\\\"no\\\" 
?\u003e\u003cembeddedOutputs\u003e\u003cmetaData\u003e\u003cevaluationState\u003emanual\u003c/evaluationState\u003e\u003clayoutState\u003ecode\u003c/layoutState\u003e\u003coutputStatus\u003eready\u003c/outputStatus\u003e\u003c/metaData\u003e\u003coutputArray type=\\\"array\\\"/\u003e\u003cregionArray type=\\\"array\\\"/\u003e\u003c/embeddedOutputs\u003e\"}]}"},{"id":523,"title":"Sequential Unconstrained Minimization (SUMT) using Exterior Penalty","description":"Write a function to find the values of a design variable vector, _x_, that minimizes a scalar objective function, _f_, given a function handle to _f_, a starting guess, _x0_, subject to inequality and equality constraints with function handles _g_\u003c=0 and _h_=0, respectively. Use a quadratic exterior penalty for the sequential unconstrained minimization technique (SUMT) with an optional input vector of increasing penalty parameter values.","description_html":"\u003cp\u003eWrite a function to find the values of a design variable vector, \u003ci\u003ex\u003c/i\u003e, that minimizes a scalar objective function, \u003ci\u003ef\u003c/i\u003e, given a function handle to \u003ci\u003ef\u003c/i\u003e, a starting guess, \u003ci\u003ex0\u003c/i\u003e, subject to inequality and equality constraints with function handles \u003ci\u003eg\u003c/i\u003e\u0026lt;=0 and \u003ci\u003eh\u003c/i\u003e=0, respectively. Use a quadratic exterior penalty for the sequential unconstrained minimization technique (SUMT) with an optional input vector of increasing penalty parameter values.\u003c/p\u003e","function_template":"function [x,fmin]=sumt_exterior(f,g,h,x0,penalty_parameter)\r\nif isempty(g), g=@(x)[]; end\r\nif isempty(h), h=@(x)[]; end\r\nif nargin\u003c5 || isempty(penalty_parameter)\r\n   penalty_parameter=? 
% initialize penalty parameter values for SUMT loop\r\nend\r\n% You may use fminsearch for the unconstrained minimization","test_suite":"%% Haftka \u0026 Gurdal, Figure 5.7.1 example\r\nf = @(x) 0.5*x;\r\ng = @(x) 2-x;\r\nx0 = 0;\r\n[xmin,fmin]=sumt_exterior(f,g,[],x0) %#ok\u003c*NOPTS\u003e\r\nxcorrect=2;\r\nassert(norm(xmin-xcorrect)\u003c1e-3)\r\nassert(abs(fmin-f(xcorrect))\u003c1e-3)\r\n\r\n%%\r\nf = @(x) 0.5*x;\r\ng = @(x) 2-x;\r\nx0 = 0;\r\n[xmin,fmin]=sumt_exterior(f,g,[],x0,1) % 1 iteration for unit penalty value\r\nxr1=1.75;\r\nassert(norm(xmin-xr1)\u003c1e-4)\r\nassert(abs(fmin-f(xr1))\u003c1e-4)\r\n\r\n%% Haftka \u0026 Gurdal, Example 5.7.1\r\nf = @(x) x(1).^2 + 10*x(2).^2;\r\nh = @(x) sum(x)-4;\r\nx0 = [0; 0];\r\n[xmin,fmin]=sumt_exterior(f,[],h,x0)\r\nxcorrect=[40; 4]/11;\r\nassert(norm(xmin-xcorrect)\u003c1e-3)\r\nassert(abs(fmin-f(xcorrect))\u003c1e-4)\r\n\r\n%%\r\nf = @(x) x(1).^2 + 10*x(2).^2;\r\nh = @(x) sum(x)-4;\r\nx0 = [0; 0];\r\nr  = [1, 5];\r\n[xmin,fmin]=sumt_exterior(f,[],h,x0,r) % 2 iterations\r\nxr2=[3.0769\r\n     0.3077];\r\nassert(norm(xmin-xr2,inf)\u003c1e-4)\r\nassert(abs(fmin-f(xr2))\u003c1e-4)\r\n\r\n%% Vanderplaats, Figure 5-4 example\r\nf = @(x) sum(x);\r\ng = @(x) [x(1) - 2*x(2) - 2\r\n          8 - 6*x(1) + x(1).^2 - x(2)];\r\nx0 = [0; 0];\r\n[xmin,fmin]=sumt_exterior(f,g,[],x0)\r\nxcorrect=[2; 0];\r\nassert(norm(xmin-xcorrect)\u003c1e-4)\r\nassert(abs(fmin-f(xcorrect))\u003c1e-4)\r\n\r\n%%\r\nf = @(x) sum(x);\r\ng = @(x) [x(1) - 2*x(2) - 2\r\n          8 - 6*x(1) + x(1).^2 - x(2)];\r\nx0 = [5; 5];\r\nr  = [1, 2];\r\n[xmin,fmin]=sumt_exterior(f,g,[],x0,r) % 2 iterations\r\nxr2=[1.9536\r\n    
-0.0496];\r\nassert(norm(xmin-xr2)\u003c1e-4)\r\nassert(abs(fmin-f(xr2))\u003c1e-4)","published":true,"deleted":false,"likes_count":1,"comments_count":3,"created_by":279,"edited_by":null,"edited_at":null,"deleted_by":null,"deleted_at":null,"solvers_count":12,"test_suite_updated_at":null,"rescore_all_solutions":false,"group_id":1,"created_at":"2012-03-24T03:18:31.000Z","updated_at":"2025-12-10T14:27:22.000Z","published_at":"2012-03-24T03:18:36.000Z","restored_at":null,"restored_by":null,"spam":false,"simulink":false,"admin_reviewed":false,"description_opc":"{\"relationships\":[{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/document\",\"targetMode\":\"\",\"relationshipId\":\"rId1\",\"target\":\"/matlab/document.xml\"},{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/output\",\"targetMode\":\"\",\"relationshipId\":\"rId2\",\"target\":\"/matlab/output.xml\"}],\"parts\":[{\"partUri\":\"/matlab/document.xml\",\"relationship\":[],\"contentType\":\"application/vnd.mathworks.matlab.code.document+xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?\u003e\\n\u003cw:document xmlns:w=\\\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\\\"\u003e\u003cw:body\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"text\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003eWrite a function to find the values of a design variable vector,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, that minimizes a scalar objective function,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, given a function 
handle to\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, a starting guess,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex0\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, subject to inequality and equality constraints with function handles\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eg\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e\u0026lt;=0 and\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eh\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e=0, respectively. 
Use a quadratic exterior penalty for the sequential unconstrained minimization technique (SUMT) with an optional input vector of increasing penalty parameter values.\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003c/w:body\u003e\u003c/w:document\u003e\"},{\"partUri\":\"/matlab/output.xml\",\"contentType\":\"text/xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\" standalone=\\\"no\\\" ?\u003e\u003cembeddedOutputs\u003e\u003cmetaData\u003e\u003cevaluationState\u003emanual\u003c/evaluationState\u003e\u003clayoutState\u003ecode\u003c/layoutState\u003e\u003coutputStatus\u003eready\u003c/outputStatus\u003e\u003c/metaData\u003e\u003coutputArray type=\\\"array\\\"/\u003e\u003cregionArray type=\\\"array\\\"/\u003e\u003c/embeddedOutputs\u003e\"}]}"},{"id":493,"title":"Quasi-Newton Method for Unconstrained Minimization using BFGS Update","description":"Write a function to find the values of a design variable vector, _x_, that minimizes an unconstrained scalar objective function, _f_, given a function handle to _f_ and its gradient, a starting guess, _x0_, a gradient tolerance, _TolGrad_, and a maximum number of iterations, _MaxIter_, using the Quasi-Newton (Secant) Method. Initialize the Hessian approximation as an identity matrix. Update the Hessian matrix approximation using the BFGS update formula.","description_html":"\u003cp\u003eWrite a function to find the values of a design variable vector, \u003ci\u003ex\u003c/i\u003e, that minimizes an unconstrained scalar objective function, \u003ci\u003ef\u003c/i\u003e, given a function handle to \u003ci\u003ef\u003c/i\u003e and its gradient, a starting guess, \u003ci\u003ex0\u003c/i\u003e, a gradient tolerance, \u003ci\u003eTolGrad\u003c/i\u003e, and a maximum number of iterations, \u003ci\u003eMaxIter\u003c/i\u003e, using the Quasi-Newton (Secant) Method. Initialize the Hessian approximation as an identity matrix. 
Update the Hessian matrix approximation using the BFGS update formula.\u003c/p\u003e","function_template":"function [x,f1]=BFGS_Quasi_Newton(F,gradF,x0,TolGrad,MaxIter)\r\n%\r\n%  Input\r\n%  F........ function handle for objective function F(x) with input argument, x\r\n%  gradF.... function handle for gradient of objective function with input argument, x\r\n%  x0....... design variable vector initial guess for starting point\r\n%  TolGrad.. Tolerance for norm of objective gradient to be zero\r\n%  MaxIter.. Maximum number of iterations\r\n%\r\n%  Output\r\n%  x........ final design variable vector found to minimize objective function, F(x)\r\n%  f1....... final objective function minimum value\r\n\r\nif nargin\u003c4 || isempty(TolGrad), TolGrad=1e-3; end\r\nif nargin\u003c5 || isempty(MaxIter), MaxIter=100;  end\r\n\r\n% Change following steepest descent algorithm to Quasi-Newton Method \r\n% using BFGS Hessian updates.\r\n\r\n%% Initialize loop parameters\r\niter = 0;\r\nf0 = F(x0);\r\nc0 = gradF(x0);\r\nc  = c0;\r\nConverged = norm(c) \u003c TolGrad;\r\n\r\n%% Search direction and line search iterations\r\ndisp('iter alpha f(alpha)  norm(c)')\r\nfprintf('%4.0f %6.6f %8.4f %8.4f\\n',[iter, 0, f0, norm(c)])\r\nwhile iter\u003cMaxIter \u0026\u0026 ~Converged\r\n\titer = iter + 1;\r\n\td = -c;\r\n\tf = @(alpha) F(x0+alpha*d);\r\n\t[alpha,f1] = fminsearch( f, 0 );\r\n\tx = x0 + alpha*d;\r\n\tc = gradF(x); % update gradient so convergence test can be met\r\n\r\n   Converged = norm(c) \u003c TolGrad;\r\n   x0 = x;\r\n   c0 = c;\r\nend\r\nfprintf('%4.0f %6.4f %8.4f %8.4f\\n',[iter, alpha, f1, norm(c)])\r\nend","test_suite":"%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [-1.9; 2.0];\r\nx1=[\r\n   -1.4478\r\n    2.1184];\r\nx2=[\r\n    1.7064\r\n    2.9446];\r\nf1=6.0419;\r\nf2=0.6068;\r\n[xmin,fmin]=BFGS_Quasi_Newton(F,gradF,x0,0.01,1) % single steepest 
descent\r\nassert(norm(xmin-x1)\u003c0.2||norm(xmin-x2)\u003c0.2)\r\nassert( abs(fmin-f1)\u003c0.5|| abs(fmin-f2)\u003c0.5) % 2 local min\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [0; 0];\r\nxcorrect=[\r\n    0.2927\r\n    0.0506];\r\nfcorrect=0.63;\r\n[xmin,fmin]=BFGS_Quasi_Newton(F,gradF,x0,1e-2,2) % two iterations\r\nassert(norm(xmin-xcorrect)\u003c0.1)\r\nassert( abs(fmin-fcorrect)\u003c0.01)\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [0;0];\r\nxcorrect = [1;1;];\r\nfcorrect = 0;\r\n[xmin,fmin]=BFGS_Quasi_Newton(F,gradF,x0)\r\nassert(norm(xmin-xcorrect)\u003c0.01)\r\nassert(abs(fmin-fcorrect)\u003c0.01);\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nHessF=@(x) 200*[4*x(1).^2-2*(x(2)-x(1).^2)+1/100, -2*x(1);\r\n                -2*x(1), 1];\r\nxcorrect = [1;1];\r\nfcorrect = 0;\r\nx0 = [-1.9; 
2];\r\n[xmin,fmin]=BFGS_Quasi_Newton(F,gradF,x0,1e-4)\r\nassert(isequal(round(xmin),xcorrect))\r\nassert(isequal(round(fmin),fcorrect))","published":true,"deleted":false,"likes_count":1,"comments_count":0,"created_by":279,"edited_by":null,"edited_at":null,"deleted_by":null,"deleted_at":null,"solvers_count":22,"test_suite_updated_at":"2012-03-20T03:17:25.000Z","rescore_all_solutions":false,"group_id":1,"created_at":"2012-03-13T17:04:07.000Z","updated_at":"2025-12-10T14:21:49.000Z","published_at":"2012-03-20T03:21:19.000Z","restored_at":null,"restored_by":null,"spam":false,"simulink":false,"admin_reviewed":false,"description_opc":"{\"relationships\":[{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/document\",\"targetMode\":\"\",\"relationshipId\":\"rId1\",\"target\":\"/matlab/document.xml\"},{\"relationshipType\":\"http://schemas.mathworks.com/matlab/code/2013/relationships/output\",\"targetMode\":\"\",\"relationshipId\":\"rId2\",\"target\":\"/matlab/output.xml\"}],\"parts\":[{\"partUri\":\"/matlab/document.xml\",\"relationship\":[],\"contentType\":\"application/vnd.mathworks.matlab.code.document+xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?\u003e\\n\u003cw:document xmlns:w=\\\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\\\"\u003e\u003cw:body\u003e\u003cw:p\u003e\u003cw:pPr\u003e\u003cw:pStyle w:val=\\\"text\\\"/\u003e\u003c/w:pPr\u003e\u003cw:r\u003e\u003cw:t\u003eWrite a function to find the values of a design variable vector,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, that minimizes an unconstrained scalar objective function,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e 
\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, given a function handle to\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ef\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e and its gradient, a starting guess,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003ex0\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, a gradient tolerance,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eTolGrad\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, and a maximum number of iterations,\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e \u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:rPr\u003e\u003cw:i/\u003e\u003c/w:rPr\u003e\u003cw:t\u003eMaxIter\u003c/w:t\u003e\u003c/w:r\u003e\u003cw:r\u003e\u003cw:t\u003e, using the Quasi-Newton (Secant) Method. Initialize the Hessian approximation as an identity matrix. 
Update the Hessian matrix approximation using the BFGS update formula.\u003c/w:t\u003e\u003c/w:r\u003e\u003c/w:p\u003e\u003c/w:body\u003e\u003c/w:document\u003e\"},{\"partUri\":\"/matlab/output.xml\",\"contentType\":\"text/xml\",\"content\":\"\u003c?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\" standalone=\\\"no\\\" ?\u003e\u003cembeddedOutputs\u003e\u003cmetaData\u003e\u003cevaluationState\u003emanual\u003c/evaluationState\u003e\u003clayoutState\u003ecode\u003c/layoutState\u003e\u003coutputStatus\u003eready\u003c/outputStatus\u003e\u003c/metaData\u003e\u003coutputArray type=\\\"array\\\"/\u003e\u003cregionArray type=\\\"array\\\"/\u003e\u003c/embeddedOutputs\u003e\"}]}"},{"id":485,"title":"Fletcher-Reeves Conjugate Gradient Method","description":"Write a function to find the values of a design variable vector, _x_, that minimizes an unconstrained scalar objective function, _f_, given a function handle to _f_ and its gradient, a starting guess, _x0_, a gradient tolerance, _TolGrad_, and a maximum number of iterations, _MaxIter_, using Fletcher-Reeves Conjugate Gradient Method.","description_html":"\u003cp\u003eWrite a function to find the values of a design variable vector, \u003ci\u003ex\u003c/i\u003e, that minimizes an unconstrained scalar objective function, \u003ci\u003ef\u003c/i\u003e, given a function handle to \u003ci\u003ef\u003c/i\u003e and its gradient, a starting guess, \u003ci\u003ex0\u003c/i\u003e, a gradient tolerance, \u003ci\u003eTolGrad\u003c/i\u003e, and a maximum number of iterations, \u003ci\u003eMaxIter\u003c/i\u003e, using Fletcher-Reeves Conjugate Gradient Method.\u003c/p\u003e","function_template":"function [x,f1]=ConjGrad(F,gradF,x0,TolGrad,MaxIter)\r\n%\r\n%  Input\r\n%  F........ function handle for objective function F(x) with input argument, x\r\n%  gradF... function handle for gradient of objective function with input argument, x\r\n%  x0....... design variable vector initial guess for starting point\r\n%  TolGrad.. 
Tolerance for norm of objective gradient to be zero\r\n%  MaxIter.. Maximum number of iterations\r\n%\r\n%  Output\r\n%  x........ final design variable vector found to minimize objective function, F(x)\r\n%  f1....... final objective function minimum value\r\n\r\nif nargin\u003c4 || isempty(TolGrad), TolGrad=1e-2; end\r\nif nargin\u003c5 || isempty(MaxIter), MaxIter=20; end\r\n\r\n%% Change following steepest descent algorithm to Conjugate Gradient Method\r\n\r\n% Initialize loop parameters\r\niter = 0;\r\nf0   = F(x0);\r\nc    = gradF(x0);\r\nConverged = norm(c) \u003c TolGrad;\r\ndisp('   iter alpha f(alpha)  norm(c)')\r\nfprintf(' %6.0f %5.3f %8.4f %8.4f\\n',[iter, 0, f0, norm(c)])\r\nalpha = 1e-6*norm(c);\r\n\r\n%% Search direction and line search iterations\r\nwhile iter\u003cMaxIter \u0026\u0026 ~Converged\r\n\titer = iter + 1;\r\n   d = -c;\r\n   f = @(alpha) F(x0+alpha*d);\r\n   alphaUpper = bracket( f, 0, 0.1*alpha );\r\n   [alpha,f1] = fminbnd( f, 0, alphaUpper );\r\n   x = x0 + alpha*d;\r\n   c = gradF(x);\r\n   Converged = (norm(c) \u003c TolGrad);\r\n   x0 = x;\r\nend\r\nfprintf(' %6.0f %5.3f %8.4f %8.4f\\n',[iter, alpha, f1, norm(c)])\r\nend\r\n\r\n%% Bracket interval for 1-D line search\r\n   function [alphaUpper,alphaLower,nfunc] = bracket( f, alpha0, delta, MaxIter )\r\n\t% usage: [alphaUpper,alphaLower,nfunc] = bracket( f, alpha0, delta, MaxIter )\r\n\t%  Golden section search to bracket unimodal, univariate minimum\r\n\t%--input\r\n\t%  f = function handle to univariate function of alpha\r\n\t%  alpha0... Starting point lower bound on bracket\r\n\t%  delta.... Guess for upper bound on bracket on unimodal min\r\n\t%  MaxIter.. Maximum number of iterations\r\n\t%--output\r\n\t%  alphaUpper... Upper bound on alpha to bracket min of f(alpha)\r\n\t%  alphaLower... Lower bound on alpha to bracket min of f(alpha)\r\n\t%  nfunc........ 
Number of function evaluations\r\n\r\n      if nargin\u003c3, delta = 1.e-3; end\r\n      if nargin\u003c4, MaxIter=1e3; end\r\n      \r\n      %--Local variables\r\n      %  expand... Expansion factor for extending Upper Bound\r\n      %  phi...... Golden section ratio = 1.618...\r\n      phi=(1+sqrt(5))/2;\r\n      expand = phi; % Set=1 to use equal interval search\r\n      \r\n      % Initialize variables\r\n      alphaLower=alpha0;\r\n      alphaLast=alpha0;\r\n      alphaNext=delta;\r\n      fLast=f(alphaLast);\r\n      fNext=f(alphaNext);\r\n      iter=1;\r\n      while fNext\u003cfLast \u0026\u0026 iter\u003c=MaxIter\r\n         iter = iter + 1;\r\n         delta = expand*delta;\r\n         alphaLower = alphaLast;\r\n         alphaLast = alphaNext;\r\n         alphaNext = alphaNext + delta;\r\n         fLast = fNext;\r\n         fNext = f(alphaNext);\r\n      end\r\n      alphaUpper=alphaNext;\r\n      nfunc=iter+1;\r\n   end","test_suite":"%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [-1.9; 2.0];\r\nx1=[\r\n   -1.4478\r\n    2.1184];\r\nx2=[\r\n    1.7064\r\n    2.9446];\r\nf1=6.0419;\r\nf2=0.6068;\r\n[xmin,fmin]=ConjGrad(F,gradF,x0,0.01,1) % single steepest descent\r\nassert(norm(xmin-x1)\u003c0.2||norm(xmin-x2)\u003c0.2)\r\nassert( abs(fmin-f1)\u003c0.5|| abs(fmin-f2)\u003c0.5) % 2 local min\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];\r\nx0 = [0; 0];\r\nxcorrect=[\r\n    0.2926\r\n    0.0505];\r\nfcorrect=0.6238;\r\n[xmin,fmin]=ConjGrad(F,gradF,x0,1e-2,2) % two iterations\r\nassert(norm(xmin-xcorrect)\u003c0.1)\r\nassert( abs(fmin-fcorrect)\u003c0.01)\r\n\r\n%%\r\n% Rosenbrock's banana function\r\nF=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;\r\ngradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 
Fletcher-Reeves Conjugate Gradient Method: test suite (continued)

 %% Rosenbrock's banana function
 F=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;
 gradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];
 x0 = [1.1; 0.9];
 xcorrect = [1; 1];
 fcorrect = 0;
 [xmin,fmin]=ConjGrad(F,gradF,x0) % default 20 iterations
 assert(norm(xmin-xcorrect)<0.1)
 assert(abs(fmin-fcorrect)<0.01);

 %% Rosenbrock's banana function
 F=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;
 gradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];
 x0 = [0; 0];
 xcorrect = [1; 1];
 fcorrect = 0;
 [xmin,fmin]=ConjGrad(F,gradF,x0,0.01,100) % Convergence before 100 iterations
 assert(norm(xmin-xcorrect)<0.1)
 assert(abs(fmin-fcorrect)<0.01);

 %% Rosenbrock's banana function
 F=@(x) 100*(x(2)-x(1).^2).^2 + (1-x(1)).^2;
 gradF=@(x) [100*(4*x(1).^3-4*x(1).*x(2))+2*x(1)-2; 100*(2*x(2)-2*x(1).^2)];
 x0 = [-1.9; 2];
 xcorrect = [1; 1];
 fcorrect = 0;
 [xmin,fmin]=ConjGrad(F,gradF,x0,1e-3,200)
 assert(isequal(round(xmin),xcorrect))
 assert(isequal(round(fmin),fcorrect))

Problem statement: Write a function to find the values of a design variable vector, _x_, that minimizes an unconstrained scalar objective function, _f_, given a function handle to _f_ and its gradient, a starting guess, _x0_, a gradient tolerance, _TolGrad_, and a maximum number of iterations, _MaxIter_, using the Fletcher-Reeves Conjugate Gradient Method.

Augmented Lagrange Multiplier (ALM) Method

Write a function to find the values of a design variable vector, _x_, that minimizes a scalar objective function, _f_ ( _x_ ), given a function handle to _f_, and a starting guess, _x0_, subject to inequality and equality constraints with function handles _g_<=0 and _h_=0, respectively.
Use the Augmented Lagrangian Multiplier Method (a.k.a. the Method of Multipliers) and return the estimates of the Lagrange multipliers: _u_ for the inequality constraints and _v_ for the equality constraints.

Function template:

 function [x,fmin,u,v]=alm(f,g,h,x0,MaxIter)
 if isempty(g), g=@(x)0; end
 if isempty(h), h=@(x)0; end
 if nargin<5, MaxIter=20; end

Test suite:

 %% Haftka & Gurdal, Figure 5.7.1 example
 f = @(x) 0.5*x;
 g = @(x) 2-x;
 x0 = 0;
 [xmin,fmin,u]=alm(f,g,[],x0) %#ok<*NOPTS>
 xcorrect=2;
 ucorrect=0.5;
 assert(norm(xmin-xcorrect)<1e-3)
 assert(abs(fmin-f(xcorrect))<1e-3)
 assert(abs(u-ucorrect)<1e-3)

 %% Haftka & Gurdal, Example 5.7.1
 f = @(x) x(1).^2 + 10*x(2).^2;
 h = @(x) sum(x)-4;
 x0 = [0; 0];
 [xmin,fmin,~,v]=alm(f,[],h,x0)
 xcorrect=[40; 4]/11;
 vcorrect=-7.2727;
 assert(norm(xmin-xcorrect)<1e-3)
 assert(abs(fmin-f(xcorrect))<1e-4)
 assert(abs(v-vcorrect)<1e-2)

 %% Vanderplaats, Figure 5-4 example
 f = @(x) sum(x);
 g = @(x) [x(1) - 2*x(2) - 2
           8 - 6*x(1) + x(1).^2 - x(2)];
 x0 = [0; 0];
 [xmin,fmin,u]=alm(f,g,[],x0)
 xcorrect=[2; 0];
 ucorrect=[0.2; 0.6];
 assert(norm(xmin-xcorrect)<1e-4)
 assert(abs(fmin-f(xcorrect))<1e-4)
 assert(norm(u-ucorrect)<1e-2)
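The ALM problem leaves the method itself to the solver. As a rough cross-check of the first (scalar) Haftka & Gurdal test case, here is a minimal pure-Python sketch of the method-of-multipliers outer loop. The Rockafellar form of the augmented Lagrangian for inequalities, the golden-section inner minimizer, the fixed bracket, and the names `alm_1d`/`golden_min` are all illustrative assumptions, not part of the problem (a Cody solution would be MATLAB, e.g. using `fminsearch` for the inner minimization).

```python
import math

def golden_min(phi, a, b, tol=1e-10):
    """Golden-section search for the minimizer of a unimodal phi on [a, b]."""
    gr = (math.sqrt(5.0) - 1.0) / 2.0
    while abs(b - a) > tol:
        c = b - gr * (b - a)
        d = a + gr * (b - a)
        if phi(c) < phi(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

def alm_1d(f, g, bracket=(-10.0, 10.0), r=1.0, outer_iters=20):
    """Method of multipliers for scalar x with one inequality g(x) <= 0."""
    u, x = 0.0, 0.0
    for _ in range(outer_iters):
        def A(x):
            # Augmented Lagrangian, Rockafellar form for an inequality:
            # psi switches between the constraint and the inactive branch.
            psi = max(g(x), -u / (2.0 * r))
            return f(x) + u * psi + r * psi * psi
        x = golden_min(A, *bracket)
        # Multiplier update; the max() inside keeps u nonnegative.
        u = u + 2.0 * r * max(g(x), -u / (2.0 * r))
    return x, u

# Haftka & Gurdal, Fig. 5.7.1: minimize 0.5*x subject to 2 - x <= 0
x, u = alm_1d(lambda x: 0.5 * x, lambda x: 2.0 - x)
```

With these choices the iterates settle on the constrained minimizer x = 2 with multiplier u = 0.5, matching the first test's `xcorrect` and `ucorrect`.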
type=\\\"array\\\"/\u003e\u003c/embeddedOutputs\u003e\"}]}"}],"term":"tag:\"aoe4084\"","current_player_id":null,"fields":[{"name":"page","type":"integer","callback":null,"default":1,"directive":null,"facet":null,"facet_method":"and","operator":null,"param":null,"static":null,"prepend":true},{"name":"per_page","type":"integer","callback":null,"default":50,"directive":null,"facet":null,"facet_method":"and","operator":null,"param":null,"static":null,"prepend":true},{"name":"sort","type":"string","callback":null,"default":null,"directive":null,"facet":null,"facet_method":"and","operator":null,"param":null,"static":null,"prepend":true},{"name":"body","type":"text","callback":null,"default":"*:*","directive":null,"facet":null,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":false},{"name":"group","type":"string","callback":null,"default":null,"directive":"group","facet":true,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true},{"name":"difficulty_rating_bin","type":"string","callback":null,"default":null,"directive":"difficulty_rating_bin","facet":true,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true},{"name":"id","type":"integer","callback":null,"default":null,"directive":"id","facet":null,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true},{"name":"tag","type":"string","callback":null,"default":null,"directive":"tag","facet":null,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true},{"name":"product","type":"string","callback":null,"default":null,"directive":"product","facet":null,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true},{"name":"created_at","type":"timeframe","callback":{},"default":null,"directive":"created_at","facet":null,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true},{"name":"profile_id","type":"integer","callback":null,"default":null,"directive":
"author_id","facet":null,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true},{"name":"created_by","type":"string","callback":null,"default":null,"directive":"author","facet":null,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true},{"name":"player_id","type":"integer","callback":null,"default":null,"directive":"solver_id","facet":null,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true},{"name":"player","type":"string","callback":null,"default":null,"directive":"solver","facet":null,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true},{"name":"solvers_count","type":"integer","callback":null,"default":null,"directive":"solvers_count","facet":null,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true},{"name":"comments_count","type":"integer","callback":null,"default":null,"directive":"comments_count","facet":null,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true},{"name":"likes_count","type":"integer","callback":null,"default":null,"directive":"likes_count","facet":null,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true},{"name":"leader_id","type":"integer","callback":null,"default":null,"directive":"leader_id","facet":null,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true},{"name":"leading_solution","type":"integer","callback":null,"default":null,"directive":"leading_solution","facet":null,"facet_method":"and","operator":null,"param":"term","static":null,"prepend":true}],"filters":[{"name":"asset_type","type":"string","callback":null,"default":null,"directive":null,"facet":null,"facet_method":"and","operator":null,"param":null,"static":"\"cody:problem\"","prepend":true},{"name":"profile_id","type":"integer","callback":{},"default":null,"directive":null,"facet":null,"facet_method":"and","operator":null,"param":"author_id","static":null
,"prepend":true}],"query":{"params":{"per_page":50,"term":"tag:\"aoe4084\"","current_player":null,"sort":"map(difficulty_value,0,0,999) asc"},"parser":"MathWorks::Search::Solr::QueryParser","directives":{"term":{"directives":{"tag":[["tag:\"aoe4084\"","","\"","aoe4084","\""]]}}},"facets":{"#\u003cMathWorks::Search::Field:0x00007f196978bf48\u003e":null,"#\u003cMathWorks::Search::Field:0x00007f196978be08\u003e":null},"filters":{"#\u003cMathWorks::Search::Field:0x00007f1969789428\u003e":"\"cody:problem\""},"fields":{"#\u003cMathWorks::Search::Field:0x00007f196978c588\u003e":1,"#\u003cMathWorks::Search::Field:0x00007f196978c4e8\u003e":50,"#\u003cMathWorks::Search::Field:0x00007f196978c128\u003e":"map(difficulty_value,0,0,999) asc","#\u003cMathWorks::Search::Field:0x00007f196978bfe8\u003e":"tag:\"aoe4084\""},"user_query":{"#\u003cMathWorks::Search::Field:0x00007f196978bfe8\u003e":"tag:\"aoe4084\""},"queried_facets":{}},"query_backend":{"connection":{"configuration":{"index_url":"http://index-op-v2/solr/","query_url":"http://search-op-v2/solr/","direct_access_index_urls":["http://index-op-v2/solr/"],"direct_access_query_urls":["http://search-op-v2/solr/"],"timeout":10,"vhost":"search","exchange":"search.topic","heartbeat":30,"pre_index_mode":false,"host":"rabbitmq-eks","port":5672,"username":"search","password":"J3bGPZzQ7asjJcCk","virtual_host":"search","indexer":"amqp","http_logging":"true","core":"cody"},"query_connection":{"uri":"http://search-op-v2/solr/cody/","proxy":null,"connection":{"parallel_manager":null,"headers":{"User-Agent":"Faraday 
v1.0.1"},"params":{},"options":{"params_encoder":"Faraday::FlatParamsEncoder","proxy":null,"bind":null,"timeout":null,"open_timeout":null,"read_timeout":null,"write_timeout":null,"boundary":null,"oauth":null,"context":null,"on_data":null},"ssl":{"verify":true,"ca_file":null,"ca_path":null,"verify_mode":null,"cert_store":null,"client_cert":null,"client_key":null,"certificate":null,"private_key":null,"verify_depth":null,"version":null,"min_version":null,"max_version":null},"default_parallel_manager":null,"builder":{"adapter":{"name":"Faraday::Adapter::NetHttp","args":[],"block":null},"handlers":[{"name":"Faraday::Response::RaiseError","args":[],"block":null}],"app":{"app":{"ssl_cert_store":{"verify_callback":null,"error":null,"error_string":null,"chain":null,"time":null},"app":{},"connection_options":{},"config_block":null}}},"url_prefix":"http://search-op-v2/solr/cody/","manual_proxy":false,"proxy":null},"update_format":"RSolr::JSON::Generator","update_path":"update","options":{"url":"http://search-op-v2/solr/cody"}}},"query":{"params":{"per_page":50,"term":"tag:\"aoe4084\"","current_player":null,"sort":"map(difficulty_value,0,0,999) asc"},"parser":"MathWorks::Search::Solr::QueryParser","directives":{"term":{"directives":{"tag":[["tag:\"aoe4084\"","","\"","aoe4084","\""]]}}},"facets":{"#\u003cMathWorks::Search::Field:0x00007f196978bf48\u003e":null,"#\u003cMathWorks::Search::Field:0x00007f196978be08\u003e":null},"filters":{"#\u003cMathWorks::Search::Field:0x00007f1969789428\u003e":"\"cody:problem\""},"fields":{"#\u003cMathWorks::Search::Field:0x00007f196978c588\u003e":1,"#\u003cMathWorks::Search::Field:0x00007f196978c4e8\u003e":50,"#\u003cMathWorks::Search::Field:0x00007f196978c128\u003e":"map(difficulty_value,0,0,999) asc","#\u003cMathWorks::Search::Field:0x00007f196978bfe8\u003e":"tag:\"aoe4084\""},"user_query":{"#\u003cMathWorks::Search::Field:0x00007f196978bfe8\u003e":"tag:\"aoe4084\""},"queried_facets":{}},"options":{"fields":["id","difficulty_rating"]},"join":" 
"},"results":[{"id":524,"difficulty_rating":"easy-medium"},{"id":481,"difficulty_rating":"easy-medium"},{"id":484,"difficulty_rating":"easy-medium"},{"id":523,"difficulty_rating":"easy-medium"},{"id":493,"difficulty_rating":"easy-medium"},{"id":485,"difficulty_rating":"medium"},{"id":527,"difficulty_rating":"unrated"}]}}
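For the `ConjGrad` test suite at the top of this excerpt, here is a minimal pure-Python sketch of Fletcher-Reeves conjugate gradient. The backtracking (Armijo) line search and the periodic restart every 2n iterations are illustrative assumptions; the original problem leaves those choices to the solver, and a Cody solution would be MATLAB.

```python
import math

def conj_grad_fr(F, gradF, x0, TolGrad=1e-3, MaxIter=200):
    """Fletcher-Reeves CG with a simple backtracking (Armijo) line search."""
    x = list(x0)
    g = gradF(x)
    n = len(x)
    d = [-gi for gi in g]  # first direction: steepest descent
    for k in range(MaxIter):
        gnorm2 = sum(gi * gi for gi in g)
        if math.sqrt(gnorm2) < TolGrad:
            break
        slope = sum(gi * di for gi, di in zip(g, d))
        # Restart periodically, or whenever d is not a descent direction.
        if slope >= 0.0 or k % (2 * n) == 0:
            d = [-gi for gi in g]
            slope = -gnorm2
        # Backtrack until the Armijo sufficient-decrease condition holds.
        t, fx = 1.0, F(x)
        while (F([xi + t * di for xi, di in zip(x, d)])
               > fx + 1e-4 * t * slope and t > 1e-14):
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = gradF(x)
        beta = sum(gi * gi for gi in g_new) / gnorm2  # Fletcher-Reeves ratio
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x, F(x)

# Rosenbrock's banana function and its gradient, as in the test suite
F = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
gradF = lambda x: [100.0 * (4.0 * x[0] ** 3 - 4.0 * x[0] * x[1]) + 2.0 * x[0] - 2.0,
                   100.0 * (2.0 * x[1] - 2.0 * x[0] ** 2)]
xmin, fmin = conj_grad_fr(F, gradF, [1.1, 0.9], 1e-3, 2000)
```

From the near-optimum start [1.1, 0.9] this drives the objective well below the test suite's 0.01 tolerance; a crude line search like this is exactly why the far start [-1.9, 2] in the last test only asserts on rounded values.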