Hi,
I understand that you want to know about the data calibration techniques used in ANNs. The following are common calibration (or data preprocessing) techniques used with ANNs:
Normalization: This technique scales the data to fit within a certain range, usually 0 to 1 or -1 to 1. You can use the "mapminmax" function to normalize your data. You can refer to the following documentation link for more information on "mapminmax":
However, if you are using deep learning workflows, you can normalize the data in the input layer itself (e.g., "sequenceInputLayer", "imageInputLayer", "featureInputLayer") using the "Normalization" name-value argument instead of "mapminmax", as shown in the sketch below.
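As a minimal sketch (the data and variable names below are placeholders, not taken from your question), you could normalize a feature matrix with "mapminmax" as follows:

x = rand(3, 100) * 50;                                  % example data: 3 features x 100 samples
[xNorm, ps] = mapminmax(x, 0, 1);                       % scale each row to the range [0, 1]
xNewNorm  = mapminmax('apply', rand(3, 10) * 50, ps);   % reuse the same settings on new data
xRestored = mapminmax('reverse', xNorm, ps);            % undo the scaling when needed

In a deep learning workflow, the corresponding input-layer option would be, for example, featureInputLayer(3, Normalization="rescale-zero-one"), so the rescaling happens inside the network rather than as a separate preprocessing step.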
Standardization: This technique transforms the data to have zero mean and unit variance, which is useful when the data follows a Gaussian distribution. You can use the "mapstd" function to standardize the data. You can refer to the following documentation link for more information on "mapstd":
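A minimal sketch of standardization with "mapstd" (again using placeholder data, not from your question):

x = randn(3, 100) * 5 + 10;                             % example data with nonzero mean and spread
[xStd, ps] = mapstd(x);                                 % each row now has mean 0 and standard deviation 1
xNewStd = mapstd('apply', randn(3, 10) * 5 + 10, ps);   % apply the stored settings to new data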
Apart from the above two techniques, if your data is categorical, you can use the "onehotencode" function to create a binary column for each category, with a 1 marking the corresponding category. You can refer to the following documentation link for more information on "onehotencode":
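For example, a short sketch of "onehotencode" on an illustrative label vector (the labels here are made up):

labels  = categorical(["red"; "green"; "blue"; "red"]);   % example categorical labels
encoded = onehotencode(labels, 2);                        % 4-by-3 matrix, one column per category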
You have also mentioned that you used "nntraintool" for neural network training. Since this function is deprecated, you can use the "train" function instead to train shallow neural networks:
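A minimal sketch of training a shallow network with "train" (the network size and data here are illustrative, not from your question):

x = rand(2, 200);                                       % inputs: 2 features x 200 samples
t = sum(x, 1);                                          % targets: a simple function of the inputs
net = feedforwardnet(10);                               % one hidden layer with 10 neurons
net = train(net, x, t);                                 % trains the network and shows training progress
y = net(x);                                             % evaluate the trained network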
Hope this helps!