I understand that you want to use 'trainlm' as the backpropagation (training) algorithm in an LSTM model.
Case 1: Multilayer Shallow Neural Networks
For multilayer shallow neural networks, the training function is chosen through the trainFcn argument when you create the network:
[x,t] = simplefit_dataset;
net = fitnet(10,'trainbr'); % here trainFcn is set to 'trainbr'
view(net); % view the untrained network
net = train(net,x,t); % train the network on the dataset
view(net); % view the trained network
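The same pattern applies with Levenberg-Marquardt; a minimal sketch using 'trainlm' (which is also the default trainFcn for fitnet):
[x,t] = simplefit_dataset;
net = fitnet(10,'trainlm'); % trainFcn set to Levenberg-Marquardt
net = train(net,x,t);
y = net(x);
perf = perform(net,t,y) % mean squared error on the training data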
Case 2: LSTM Networks
When you create an LSTM, it is a recurrent neural network, and it is trained with trainNetwork rather than train.
Example:
Define Network Architecture
Define the network architecture. Create an LSTM network that consists of an LSTM layer with 200 hidden units, followed by a fully connected layer of size 50 and a dropout layer with dropout probability 0.5.
numResponses = size(YTrain{1},1);
layers = [ ...
    sequenceInputLayer(size(XTrain{1},1)) % number of features per time step
    lstmLayer(200,'OutputMode','sequence') % 200 hidden units
    fullyConnectedLayer(50)
    dropoutLayer(0.5)
    fullyConnectedLayer(numResponses)
    regressionLayer];
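As an optional sanity check (assuming R2018a or later is available), analyzeNetwork can visualize the layer array and flag size mismatches before training:
analyzeNetwork(layers) % opens the Network Analyzer on the layer array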
maxEpochs = 60; miniBatchSize = 20; % example values
options = trainingOptions('adam', ...
    'MaxEpochs',maxEpochs, ...
    'MiniBatchSize',miniBatchSize, ...
    'InitialLearnRate',0.01, ...
    'GradientThreshold',1, ... % clip gradients with norm above 1
    'Plots','training-progress', ...
    'Verbose',0);
Train the Network
Train the network using trainNetwork.
net = trainNetwork(XTrain,YTrain,layers,options);
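Once trained, predictions follow the same sequence format; a minimal sketch, assuming a test set XTest prepared like XTrain:
YPred = predict(net,XTest,'MiniBatchSize',miniBatchSize); % sequence predictions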
in "trainingOptions" you can't set "solverName" = "trainlm". 
Solver for training network, specified as one of the following:
- 'sgdm' — Use the stochastic gradient descent with momentum (SGDM) optimizer. You can specify the momentum value using the Momentum training option.
- 'rmsprop' — Use the RMSProp optimizer. You can specify the decay rate of the squared gradient moving average using the SquaredGradientDecayFactor training option.
- 'adam' — Use the Adam optimizer. You can specify the decay rates of the gradient and squared gradient moving averages using the GradientDecayFactor and SquaredGradientDecayFactor training options, respectively.
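Switching solvers only changes the first argument to trainingOptions; for example, a sketch with 'sgdm' and an illustrative Momentum value:
options = trainingOptions('sgdm', ...
    'Momentum',0.9, ... % illustrative value; contribution of the previous update step
    'InitialLearnRate',0.01, ...
    'MaxEpochs',maxEpochs);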