The basic structure of a Mamdani fuzzy inference system is a model that maps input characteristics to input membership functions, input membership functions to rules, rules to a set of output characteristics, output characteristics to output membership functions, and the output membership functions to a single-valued output or a decision associated with the output. So far, you have used only fixed membership functions that were chosen arbitrarily, and you have applied fuzzy inference only to modeling systems whose rule structure is essentially predetermined by the user's interpretation of the characteristics of the variables in the model.
The anfis function and the Neuro-Fuzzy Designer apply fuzzy inference techniques to data modeling. As you have seen from the other fuzzy inference GUIs, the shape of a membership function depends on parameters, and changing these parameters changes the shape of the membership function. Instead of choosing the membership function parameters just by looking at the data, you can use these Fuzzy Logic Toolbox™ applications to choose them automatically.
Suppose you want to apply fuzzy inference to a system for which you already have a collection of input/output data that you would like to use for modeling, model-following, or some similar scenario. You do not necessarily have a predetermined model structure based on characteristics of variables in your system.
In some modeling situations, you cannot discern what the membership functions should look like simply from looking at data. Rather than choosing the parameters associated with a given membership function arbitrarily, you can choose these parameters so as to tailor the membership functions to the input/output data, accounting for these types of variations in the data values. In such cases, you can use the Fuzzy Logic Toolbox neuro-adaptive learning techniques incorporated in the anfis command.
The neuro-adaptive learning method works similarly to that of neural networks. Neuro-adaptive learning techniques provide a method for the fuzzy modeling procedure to learn information about a data set. Fuzzy Logic Toolbox software computes the membership function parameters that best allow the associated fuzzy inference system to track the given input/output data. The Fuzzy Logic Toolbox function that accomplishes this membership function parameter adjustment is called anfis. The anfis function can be accessed either from the command line or through the Neuro-Fuzzy Designer. Because the functionality of the command line function anfis and the Neuro-Fuzzy Designer is similar, they are used somewhat interchangeably in this discussion, except when specifically describing the GUI.
The acronym ANFIS derives from adaptive neuro-fuzzy inference system. Using a given input/output data set, the toolbox function anfis constructs a fuzzy inference system (FIS) whose membership function parameters are tuned (adjusted) using either a backpropagation algorithm alone, or backpropagation in combination with a least squares type of method. This adjustment allows your fuzzy systems to learn from the data they are modeling.
A network-type structure similar to that of a neural network, which maps inputs through input membership functions and associated parameters, and then through output membership functions and associated parameters to outputs, can be used to interpret the input/output map.
The parameters associated with the membership functions change through the learning process. The computation of these parameters (or their adjustment) is facilitated by a gradient vector. This gradient vector provides a measure of how well the fuzzy inference system is modeling the input/output data for a given set of parameters. When the gradient vector is obtained, any of several optimization routines can be applied to adjust the parameters and reduce some error measure. This error measure is usually defined by the sum of the squared differences between actual and desired outputs. anfis uses either backpropagation or a combination of least squares estimation and backpropagation for membership function parameter estimation.
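As a sketch of this error measure at the command line, the following computes the sum of squared errors and the root mean squared error for a FIS against a training array (the variable names trnData and fismat are placeholders for your own data and FIS, and the older evalfis(input, fis) argument order used elsewhere in this chapter is assumed):

```matlab
% Evaluate the FIS on the input columns and compare against the
% desired outputs in the last column of the training array.
actual  = evalfis(trnData(:, 1:end-1), fismat);
desired = trnData(:, end);

sse  = sum((desired - actual).^2);          % error measure being minimized
rmse = sqrt(mean((desired - actual).^2));   % the error anfis reports per epoch
```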
The modeling approach used by anfis is similar to many system identification techniques. First, you hypothesize a parameterized model structure (relating inputs to membership functions to rules to outputs to membership functions, and so on). Next, you collect input/output data in a form that will be usable by anfis for training. You can then use anfis to train the FIS model to emulate the training data presented to it by modifying the membership function parameters according to a chosen error criterion.
In general, this type of modeling works well if the training data presented to anfis for training (estimating) membership function parameters is fully representative of the features of the data that the trained FIS is intended to model. In some cases however, data is collected using noisy measurements, and the training data cannot be representative of all the features of the data that will be presented to the model. In such situations, model validation is helpful.
Model Validation Using Testing and Checking Data Sets. Model validation is the process by which input vectors from input/output data sets on which the FIS was not trained are presented to the trained FIS model, to see how well the FIS model predicts the corresponding data set output values.
One problem with model validation for models constructed using adaptive techniques is selecting a data set that is both representative of the data the trained model is intended to emulate, yet sufficiently distinct from the training data set so as not to render the validation process trivial.
If you have collected a large amount of data, hopefully this data contains all the necessary representative features, so the process of selecting a data set for checking or testing purposes is made easier. However, if you expect to be presenting noisy measurements to your model, it is possible the training data set does not include all of the representative features you want to model.
The testing data set lets you check the generalization capability of the resulting fuzzy inference system. The idea behind using a checking data set for model validation is that, after a certain point in the training, the model begins overfitting the training data set. In principle, the model error for the checking data set tends to decrease as training takes place up to the point where overfitting begins, after which the model error for the checking data suddenly increases. Overfitting is accounted for by testing the FIS trained on the training data against the checking data, and choosing the membership function parameters associated with the minimum checking error when these errors indicate model overfitting.
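At the command line, this minimum-checking-error selection can be sketched as follows. The variable names trnData, chkData, and initFis are placeholders, not names from this example:

```matlab
% Train with checking data. anfis returns both the final FIS (fismat1)
% and the FIS snapshot taken at the epoch with the minimum checking
% error (fismat2), which guards against overfitting.
[fismat1, trnError, ss, fismat2, chkError] = ...
    anfis(trnData, initFis, 40, [], chkData);

% Locate the epoch where the checking error bottomed out.
[minChkErr, bestEpoch] = min(chkError);
```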
Usually, these training and checking data sets are collected based on observations of the target system and are then stored in separate files.
In the first example, two similar data sets are used for checking and training, but the checking data set is corrupted by a small amount of noise. This example illustrates the use of the Neuro-Fuzzy Designer with checking data to reduce the effect of model overfitting. In the second example, the training data set presented to anfis is sufficiently different from the applied checking data set. By examining the checking error sequence over the training period, it is clear that the checking data set is not good for model validation purposes. This example illustrates the use of the Neuro-Fuzzy Designer to compare data sets.
You can create, train, and test Sugeno-type fuzzy systems using the Neuro-Fuzzy Designer.
To start the GUI, type the following command at the MATLAB® prompt:
neuroFuzzyDesigner
The Neuro-Fuzzy Designer window shown in the following figure includes four distinct areas to support a typical workflow. The designer lets you perform the following tasks:
Access the online help topics by clicking Help in the Neuro-Fuzzy Designer.
To train a FIS, you must begin by loading a Training data set that contains the desired input/output data of the system to be modeled. Any data set you load must be an array with the data arranged as column vectors, and the output data in the last column.
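A minimal sketch of this layout, assuming a hypothetical two-input system (the variable names and the target function are purely illustrative):

```matlab
% Each row is one input/output pair: inputs in the leading columns,
% the single output in the last column.
x1 = (0:0.1:1)';
x2 = (1:-0.1:0)';
y  = x1.^2 + x2;        % hypothetical target output

trnData = [x1 x2 y];    % 11-by-3 array: 2 input columns + 1 output column
```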
You can also load Testing and Checking data in the designer. For more information on testing and checking data sets, see Model Validation Using Testing and Checking Data Sets.
To load a data set using the Load data portion of the designer:
Specify the data Type.
Select the data from a file or the MATLAB workspace.
Click Load Data.
After you load the data, it displays in the plot. The training, testing, and checking data are annotated in the plot as circles, diamonds, and plus signs, respectively.
To clear a specific data set from the designer:
In the Load data area, select the data Type.
Click Clear Data.
This action also removes the corresponding data from the plot.
Before you start the FIS training, you must specify an initial FIS model structure. To specify the model structure, perform one of the following tasks:
Load a previously saved Sugeno-type FIS structure from a file or the MATLAB workspace.
Generate the initial FIS model by choosing one of the following partitioning techniques:
Grid partition — Generates a single-output Sugeno-type FIS by using grid partitioning on the data.
Sub. clustering — Generates an initial model for ANFIS training by first applying subtractive clustering on the data.
To view a graphical representation of the initial FIS model structure, click Structure.
After loading the training data and generating the initial FIS structure, you can start training the FIS.
Tip: If you want to save the training error generated during ANFIS training to the MATLAB workspace, see Save Training Error Data to MATLAB Workspace.
The following steps show you how to train the FIS.
In Optim. Method, choose hybrid or backpropa as the optimization method.
The optimization methods train the membership function parameters to emulate the training data.
Enter the number of training Epochs and the training Error Tolerance to set the stopping criteria for training.
The training process stops whenever the maximum epoch number is reached or the training error goal is achieved.
Click Train Now to train the FIS.
This action adjusts the membership function parameters and displays the error plots.
Examine the error plots to determine overfitting during the training. If you notice the checking error increasing over iterations, it indicates model overfitting. For examples on model overfitting, see Checking Data Helps Model Validation and Checking Data Does Not Validate Model.
After the FIS is trained, validate the model using a testing or checking data set that differs from the one you used to train the FIS. To validate the trained FIS:
Select the validation data set and click Load Data.
Click Test Now.
This action plots the test data against the FIS output (shown in red) in the plot.
For more information on the use of testing data and checking data for model validation, see Model Validation Using Testing and Checking Data Sets.
This section presents an example that loads similar training and checking data sets, where the checking data set is corrupted by a small amount of noise.
Loading Data. To work both of the following examples, you load the training data sets (fuzex1trnData and fuzex2trnData) and the checking data sets (fuzex1chkData and fuzex2chkData), into the Neuro-Fuzzy Designer from the workspace. You may also substitute your own data sets.
To load the data sets from the workspace into the Neuro-Fuzzy Designer:
Type the following commands at the MATLAB command line to load the data sets from the folder fuzzydemos into the MATLAB workspace:
load fuzex1trnData.dat
load fuzex2trnData.dat
load fuzex1chkData.dat
load fuzex2chkData.dat
Open the Neuro-Fuzzy Designer by typing neuroFuzzyDesigner in the MATLAB command line.
To load the training data set from the workspace:
In the Load data portion of the designer, select the following options:
Type: Training
From: worksp.
Click Load Data to open the Load from workspace dialog box.
Type fuzex1trnData as shown in the following figure, and click OK.
The training data set is used to train a fuzzy system by adjusting the membership function parameters that best model this data, and appears in the plot in the center of the GUI as a set of circles.
The horizontal axis is marked data set index. This index indicates the row from which that input data value was obtained (whether the input is a vector or a scalar).
To load the checking data set from the workspace:
In the Load data portion of the GUI, select Checking in the Type column.
Click Load Data to open the Load from workspace dialog box.
Type fuzex1chkData as the variable name and click OK.
The checking data appears in the GUI plot as pluses superimposed on the training data.
The next step is to specify an initial fuzzy inference system for anfis to train.
Initializing and Generating Your FIS. You can either initialize the FIS parameters to your own preference, or if you do not have any preference for how you want the initial membership functions to be parameterized, you can let anfis initialize the parameters for you, as described in the following sections:
Automatic FIS Structure Generation. To initialize your FIS using anfis:
Choose Grid partition, the default partitioning method. The two partition methods, grid partitioning and subtractive clustering, are described later in Fuzzy C-Means Clustering, and in Subtractive Clustering.
Click on the Generate FIS button. Clicking this button displays a menu from which you can choose the number of membership functions, MFs, and the type of input and output membership functions. There are only two choices for the output membership function: constant and linear. This limitation of output membership function choices is because anfis only operates on Sugeno-type systems.
Fill in the entries as shown in the following figure, and click OK.
You can also implement this FIS generation from the command line using the command genfis1 (for grid partitioning) or genfis2 (for subtractive clustering).
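A sketch of both command-line equivalents, assuming trnData is a training array laid out as described above (inputs first, output in the last column):

```matlab
% Grid partitioning: 4 generalized bell membership functions per input,
% linear output membership functions.
fisGrid = genfis1(trnData, 4, 'gbellmf', 'linear');

% Subtractive clustering: inputs and outputs are passed separately;
% 0.5 is an illustrative cluster-influence radius.
fisSub = genfis2(trnData(:, 1:end-1), trnData(:, end), 0.5);
```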
Specifying Your Own Membership Functions for ANFIS. You can choose your own preferred membership functions with specific parameters to be used by anfis as an initial FIS for training.
To define your own FIS structure and parameters:
Add your desired membership functions (the custom membership option will be disabled for anfis). The output membership functions must either be all constant or all linear. For carrying out this and the following step, see The Fuzzy Logic Designer and The Membership Function Editor.
Select the Rules menu item in the Edit menu, and use the Rule Editor to generate the rules (see The Rule Editor).
Select the FIS Properties menu item from the Edit menu. Name your FIS, and save it to either the workspace or to file.
Click the Close button to return to the Neuro-Fuzzy Designer to train the FIS.
To load an existing FIS for ANFIS initialization, in the Generate FIS portion of the designer, click Load from worksp. or Load from file. You load your FIS from a file if you have saved a FIS previously that you would like to use. Otherwise you load your FIS from the workspace.
Viewing Your FIS Structure. After you generate the FIS, you can view the model structure by clicking the Structure button in the middle of the right side of the editor. A new editor appears, as follows.
The branches in this graph are color coded. The color coding of the branches characterizes the rules and indicates whether and, not, or or are used in the rules. The input is represented by the left-most node and the output by the right-most node. The node represents a normalization factor for the rules. Clicking a node displays information about the structure.
You can view the membership functions or the rules by opening either the Membership Function Editor, or the Rule Editor from the Edit menu.
ANFIS Training. The two anfis parameter optimization method options available for FIS training are hybrid (the default, mixed least squares and backpropagation) and backpropa (backpropagation). Error Tolerance is used to create a training stopping criterion, which is related to the error size. Training stops when the training data error falls within this tolerance. This value is best left set to 0 if you are unsure how your training error may behave.
Note: If you want to save the training error data generated during ANFIS training to the MATLAB workspace, you must train the FIS at the command line. For an example, see Save Training Error Data to MATLAB Workspace.
To start the training:
Leave the optimization method at hybrid.
Set the number of training epochs to 40, under the Epochs listing on the GUI (the default value is 3).
Select Train Now.
The following window appears on your screen.
The plot shows the checking error as diamonds (♦) on top and the training error as asterisks (*) on the bottom. The checking error decreases up to a certain point in the training, and then it increases. This increase represents the point of model overfitting. anfis chooses the model parameters associated with the minimum checking error (just prior to this jump point). This example shows why the checking data option of anfis is useful.
Testing Your Data Against the Trained FIS. To test your FIS against the checking data, select Checking data in the Test FIS portion of the Neuro-Fuzzy Designer, and click Test Now. When you test the checking data against the FIS, it looks satisfactory.
Loading More Data with anfis. If you load data into anfis after clearing previously loaded data, you must make sure that the newly loaded data sets have the same number of inputs as the previously loaded ones did. Otherwise, you must start a new neuroFuzzyDesigner session from the command line.
Checking Data Option and Clearing Data. If you do not want to use the checking data option of anfis, then do not load any checking data before you train the FIS. If you decide to retrain your FIS with no checking data, you can unload the checking data in one of two ways:
Select the Checking option button in the Load data portion of the Neuro-Fuzzy Designer, and then click Clear Data to unload the checking data.
Close the Neuro-Fuzzy Designer, and go to the MATLAB command line, and retype neuroFuzzyDesigner. In this case you must reload the training data.
After clearing the data, you must regenerate your FIS. After the FIS is generated, you can use your first training experience to decide on the number of training epochs you want for the second round of training.
This example examines what happens when the training and checking data sets are sufficiently different. To see how the Neuro-Fuzzy Designer can be used to learn something about data sets and how they differ:
Clear the Neuro-Fuzzy Designer:
Clear both the training and checking data.
(optional) Click the Clear Plot button on the right.
Load fuzex2trnData and fuzex2chkData (respectively, the training data and checking data) from the MATLAB workspace just as you did in the previous example.
You should see a plot similar to the one in the following figure. The training data appears as circles superimposed with the checking data, appearing as pluses.
Train the FIS for this system exactly as you did in the previous example, except now choose 60 Epochs before training. You should get the following plot, showing the checking error as diamonds (♦) on top and the training error as asterisks (*) on the bottom.
In this case, the checking error is quite large. It appears that the minimum checking error occurs within the first epoch. Using the checking data option with anfis automatically sets the FIS parameters to be those associated with the minimum checking error. Clearly this set of membership functions is not the best choice for modeling the training data.
This example illustrates the problem discussed earlier wherein the checking data set presented to anfis for training was sufficiently different from the training data set. As a result, the trained FIS did not capture the features of this data set very well. It is important to know the features of your data set well when you select your training and checking data. When you do not know the features of your data, you can analyze the checking error plots to see whether or not the checking data performed sufficiently well with the trained model.
In this example, the checking error is sufficiently large to indicate that either you need to select more data for training or modify your membership function choices (both the number of membership functions and the type). Otherwise, the system can be retrained without the checking data, if you think the training data sufficiently captures the features you are trying to represent.
To complete this example, test the trained FIS model against the checking data. To do so, select Checking data in the Test FIS portion of the GUI, and click Test Now. The following plot in the GUI indicates that there is quite a discrepancy between the checking data output and the FIS output.
This example shows how to use the command line features of anfis on a chaotic time-series prediction example.
Generating a FIS using the Neuro-Fuzzy Designer is quite simple. However, you need to be cautious about using the checking data validation feature of anfis. You must check that the checking data error does what it is supposed to do. Otherwise, you need to retrain the FIS.
Using anfis for Chaotic Time-Series Prediction. The example mgtsdemo uses anfis to predict a time series generated by the following Mackey-Glass (MG) time-delay differential equation:

dx(t)/dt = 0.2 x(t − τ) / (1 + x^10(t − τ)) − 0.1 x(t)
This time series is chaotic, and so there is no clearly defined period. The series does not converge or diverge, and the trajectory is highly sensitive to initial conditions. This benchmark problem is used in the neural network and fuzzy modeling research communities.
To obtain the time series value at integer points, we applied the fourth-order Runge-Kutta method to find the numerical solution to the previous MG equation; the result was saved in the file mgdata.dat. Assume x(0) = 1.2, τ = 17, and x(t) = 0 for t < 0. Plot the MG time series.
load mgdata.dat
time = mgdata(:, 1);
x = mgdata(:, 2);
figure(1), plot(time, x);
title('Mackey-Glass Chaotic Time Series')
xlabel('Time (sec)')
In time-series prediction, you need to use known values of the time series up to some point in time, say, t, to predict the value at some point in the future, say, t+P. The standard method for this type of prediction is to create a mapping from D sample data points, sampled every Δ units in time, (x(t−(D−1)Δ), ..., x(t−Δ), x(t)), to a predicted future value x(t+P). Following the conventional settings for predicting the MG time series, set D = 4 and Δ = P = 6. For each t, the input training data for anfis is a four-dimensional vector of the form w(t) = [x(t−18), x(t−12), x(t−6), x(t)].
The output training data corresponds to the trajectory prediction, s(t) = x(t+6).
For each t, ranging in values from 118 to 1117, the training input/output data is a structure whose first component is the four-dimensional input w, and whose second component is the output s. There are 1000 input/output data values. You use the first 500 data values for the anfis training (these become the training data set), while the others are used as checking data for validating the identified fuzzy model. This division of data values results in two 500-point data structures, trnData and chkData.
The following code generates this data:
for t = 118:1117
    Data(t-117,:) = [x(t-18) x(t-12) x(t-6) x(t) x(t+6)];
end
trnData = Data(1:500, :);
chkData = Data(501:end, :);
To start the training, you need a FIS structure that specifies the structure and initial parameters of the FIS for learning. The genfis1 function handles this specification.
fismat = genfis1(trnData);
Because you did not specify numbers and types of membership functions used in the FIS, default values are assumed. These defaults provide two generalized bell membership functions on each of the four inputs, eight altogether. The generated FIS structure contains 16 fuzzy rules with 104 parameters. To achieve good generalization capability, it is important that the number of training data points be several times larger than the number of parameters being estimated. In this case, the ratio between data and parameters is about five (500/104).
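The 104-parameter count can be verified by a short calculation (each generalized bell membership function has three parameters, and each linear output membership function has one coefficient per input plus a constant):

```matlab
nInputs = 4;  nMFs = 2;                 % two gbellmf functions per input
premise = nInputs * nMFs * 3;           % 4 * 2 * 3 = 24 premise parameters
nRules = nMFs^nInputs;                  % 2^4 = 16 rules
consequent = nRules * (nInputs + 1);    % 16 * 5 = 80 consequent parameters
total = premise + consequent            % displays 104
```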
The function genfis1 generates initial membership functions that are equally spaced and cover the whole input space. Plot the input membership functions.
figure(2)
subplot(2,2,1)
plotmf(fismat, 'input', 1)
subplot(2,2,2)
plotmf(fismat, 'input', 2)
subplot(2,2,3)
plotmf(fismat, 'input', 3)
subplot(2,2,4)
plotmf(fismat, 'input', 4)
Start the training.
[fismat1,error1,ss,fismat2,error2] = ...
anfis(trnData,fismat,[],[0 0 0 0],chkData);
Because the checking data option of anfis is invoked, the final FIS you choose is the one associated with the minimum checking error. This result is stored in fismat2. Plot these new membership functions.
figure(3)
subplot(2,2,1)
plotmf(fismat2, 'input', 1)
subplot(2,2,2)
plotmf(fismat2, 'input', 2)
subplot(2,2,3)
plotmf(fismat2, 'input', 3)
subplot(2,2,4)
plotmf(fismat2, 'input', 4)
Plot the error signals.
figure(4)
plot([error1 error2]);
hold on;
plot([error1 error2], 'o');
legend('error1','error2');
xlabel('Epochs');
ylabel('RMSE (Root Mean Squared Error)');
title('Error Curves');
In addition to these error plots, you may want to plot the FIS output versus the training or checking data. To compare the original MG time series and the fuzzy prediction side by side, try:
figure(5)
anfis_output = evalfis([trnData(:,1:4); chkData(:,1:4)], fismat2);
index = 125:1124;
subplot(211), plot(time(index), [x(index) anfis_output]);
xlabel('Time (sec)');
title('MG Time Series and ANFIS Prediction');
subplot(212), plot(time(index), x(index) - anfis_output);
xlabel('Time (sec)');
title('Prediction Errors');
The difference between the original MG time series and the values estimated using anfis is very small. Thus, you can only see one curve in the first plot. The prediction error appears in the second plot with a much finer scale. You trained for only 10 epochs. If you apply more extensive training, you get better performance.
When working in the Neuro-Fuzzy Designer, you can export your initial FIS structure to the MATLAB workspace and then save the ANFIS training error values in the workspace.
The following example shows how to save the training error generated during ANFIS training to the MATLAB workspace:
Load the training and checking data in the MATLAB workspace by typing the following commands at the MATLAB prompt:
load fuzex1trnData.dat
load fuzex1chkData.dat
Open the Neuro-Fuzzy Designer by typing the following command:
neuroFuzzyDesigner
The Neuro-Fuzzy Designer opens, as shown in the next figure.
Load the training data from the MATLAB workspace into the Neuro-Fuzzy Designer:
In the Load data panel of the Neuro-Fuzzy Designer, verify that Training is selected in the Type column.
Select worksp. in the From column.
Click Load Data to open the Load from workspace dialog box.
Type fuzex1trnData, and click OK.
The Neuro-Fuzzy Designer displays the training data in the plot as a set of circles (○).
Load the checking data from the MATLAB workspace into the Neuro-Fuzzy Designer:
In the Load data panel of the Neuro-Fuzzy Designer, select Checking in the Type column.
Click Load Data to open the Load from workspace dialog box.
Type fuzex1chkData as the variable name, and click OK.
The Neuro-Fuzzy Designer displays the checking data as plus signs (+) superimposed on the training data.
Generate an initial FIS:
In the Generate FIS panel, verify that Grid partition option is selected.
Click Generate FIS.
This action opens a dialog box where you specify the structure of the FIS.
In the dialog box, specify the following:
Enter 4 in the Number of MFs field.
Select gbellmf as the Membership Type for the input.
Select linear as the Membership Type for the output.
Click OK to generate the FIS and close the dialog box.
Export the initial FIS to the MATLAB workspace:
In the Neuro-Fuzzy Designer, select File > Export > To Workspace.
This action opens a dialog box where you specify the MATLAB variable name.
Enter initfis in the Workspace variable field.
Click OK to close the dialog box.
A variable named initfis now appears in the MATLAB workspace.
Train the FIS for 40 epochs by typing the following command at the MATLAB prompt:
figure; hold on;
fismat = initfis;
for ct = 1:40
    [fismat,error] = anfis(fuzex1trnData,fismat,...
        2,NaN,fuzex1chkData,1);
    plot(ct,error(1),'b*');
end
To improve accuracy when you train the FIS, the code uses the results of the current iteration returned by the anfis command as the initial conditions for the next iteration. The output argument error contains the root mean squared errors representing the training data error. For more information, see the anfis reference page.
The plot of the training error versus the number of epochs appears in the next figure.
This topic discusses the arguments and range components of the command line function anfis and the analogous functionality of the Neuro-Fuzzy Designer.
The command anfis takes at least two and at most six input arguments. The general format is
[fismat1,trnError,ss,fismat2,chkError] = ...
    anfis(trnData,fismat,trnOpt,dispOpt,chkData,method);
where trnOpt (training options), dispOpt (display options), chkData (checking data), and method (training method) are optional. All output arguments are also optional.
When you open the Neuro-Fuzzy Designer using neuroFuzzyDesigner, only the training data set must exist prior to implementing anfis. In addition, the step-size is fixed when the adaptive neuro-fuzzy system is trained using this GUI tool.
The training data, trnData, is a required argument to anfis, as well as to the Neuro-Fuzzy Designer. Each row of trnData is a desired input/output pair of the target system you want to model. Each row starts with an input vector and is followed by an output value. Therefore, the number of rows of trnData is equal to the number of training data pairs, and, because there is only one output, the number of columns of trnData is equal to the number of inputs plus one.
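These dimension relationships can be checked directly; the following sketch uses a hypothetical random array named myData standing in for real training data:

```matlab
% For N training pairs and k inputs, the training array is N-by-(k+1).
myData = rand(100, 3);              % e.g., 100 pairs, 2 inputs + 1 output

numPairs  = size(myData, 1);        % number of training data pairs (100)
numInputs = size(myData, 2) - 1;    % all columns but the last (2)
```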
You can obtain the input FIS structure, fismat, from any of the fuzzy editors:
The Fuzzy Logic Designer
The Membership Function Editor
The Rule Editor from the Neuro-Fuzzy Designer (which allows a FIS structure to be loaded from a file or the MATLAB workspace)
The command line function, genfis1 (for which you only need to give numbers and types of membership functions)
The FIS structure contains both the model structure (which specifies items such as the number of rules in the FIS and the number of membership functions for each input) and the parameters (which specify the shapes of the membership functions).
There are two methods that anfis learning employs for updating membership function parameters:
Backpropagation for all parameters (a steepest descent method)
A hybrid method consisting of backpropagation for the parameters associated with the input membership functions, and least squares estimation for the parameters associated with the output membership functions
As a result, the training error decreases, at least locally, throughout the learning process. Therefore, the more the initial membership functions resemble the optimal ones, the easier it will be for the model parameter training to converge. Human expertise about the target system to be modeled may aid in setting up these initial membership function parameters in the FIS structure.
The genfis1 function produces a FIS structure based on a fixed number of membership functions. This structure suffers from the so-called curse of dimensionality: the number of rules grows excessively when the number of inputs is moderately large, that is, more than four or five. Fuzzy Logic Toolbox software offers a method that provides for some dimension reduction in the fuzzy inference system: you can generate a FIS structure using the clustering algorithm discussed in Subtractive Clustering. To use the clustering algorithm, you must select the Sub. Clustering option in the Generate FIS portion of the Neuro-Fuzzy Designer before the FIS is generated. This subtractive clustering method partitions the data into groups called clusters, and generates a FIS with the minimum number of rules required to distinguish the fuzzy qualities associated with each of the clusters.
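From the command line, the analogous clustering-based structure can be sketched with genfis2, which implements subtractive clustering; the two-input sample data and the cluster influence range of 0.5 are illustrative assumptions:

```matlab
% Hypothetical two-input data set.
[x1,x2] = meshgrid(0:0.5:5,0:0.5:5);
Xin = [x1(:) x2(:)];
Xout = sin(x1(:)).*cos(x2(:));

% Generate a FIS via subtractive clustering; 0.5 is the range of
% influence of each cluster center.
fismat = genfis2(Xin,Xout,0.5);
```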
The Neuro-Fuzzy Designer tool allows you to choose your desired error tolerance and number of training epochs.
Training option trnOpt for the command line anfis is a vector that specifies the stopping criteria and the step-size adaptation strategy:
trnOpt(1): number of training epochs, default = 10
trnOpt(2): error tolerance, default = 0
trnOpt(3): initial step-size, default = 0.01
trnOpt(4): step-size decrease rate, default = 0.9
trnOpt(5): step-size increase rate, default = 1.1
If any element of trnOpt is NaN or missing, then the default value is used. The training process stops if the designated epoch number is reached or the error goal is achieved, whichever comes first.
Usually, the step-size profile is a curve that increases initially, reaches some maximum, and then decreases for the remainder of the training. You achieve this ideal step-size profile by adjusting the initial step-size and the increase and decrease rates (trnOpt(3) - trnOpt(5)). The default values are set up to cover a wide range of learning tasks. For any specific application, you may want to modify these step-size options in order to optimize the training. However, there are no user-specified step-size options for training the adaptive neuro-fuzzy inference system generated using the Neuro-Fuzzy Designer.
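For example, the five training options above could be set explicitly as follows; this sketch assumes a training data matrix trnData and an initial FIS structure fismat have been created as described earlier, and the specific values are illustrative:

```matlab
% trnOpt: [epochs, error goal, initial step-size,
%          step-size decrease rate, step-size increase rate].
% NaN entries fall back to the defaults.
trnOpt = [40 0 0.01 0.9 1.1];
fismat1 = anfis(trnData,fismat,trnOpt);
```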
Display options apply only to the command-line function anfis.
For the command line anfis, the display options argument, dispOpt, is a vector of 1s and 0s that specifies what information to display (print in the MATLAB Command Window) before, during, and after the training process. A 1 denotes print this option, and a 0 denotes do not print it:
dispOpt(1): display ANFIS information, default = 1
dispOpt(2): display error (each epoch), default = 1
dispOpt(3): display step-size (each epoch), default = 1
dispOpt(4): display final results, default = 1
The default mode displays all available information. If any element of dispOpt is NaN or missing, the default value is used.
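As a sketch, the per-epoch printouts could be suppressed while keeping the ANFIS information and final results; this assumes trnData and fismat exist as described earlier, and passes an empty matrix for trnOpt to keep the default training options:

```matlab
% dispOpt: [ANFIS info, per-epoch error, per-epoch step-size,
%           final results].
dispOpt = [1 0 0 1];
fismat1 = anfis(trnData,fismat,[],dispOpt);
```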
Both the Neuro-Fuzzy Designer and the command line anfis apply either a backpropagation form of the steepest descent method for membership function parameter estimation, or a combination of backpropagation and the least-squares method to estimate membership function parameters. The choices for this argument are hybrid or backpropagation. These choices are designated in the command line function anfis by 1 and 0, respectively.
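In a command line call, the training method is passed as the sixth argument; this sketch assumes trnData and fismat exist as described earlier, and passes empty matrices to keep the defaults for the unused options:

```matlab
% Select the training method: 1 = hybrid, 0 = backpropagation.
method = 1;
fismat1 = anfis(trnData,fismat,[],[],[],method);
```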
fismat1 is the output FIS structure corresponding to a minimal training error. This FIS structure is the one that you use to represent the fuzzy system when no checking data is used for model cross-validation. This structure also represents the FIS that the Neuro-Fuzzy Designer saves when the checking data option is not used.
When you use the checking data option, the output saved is that associated with the minimum checking error.
The training error is the difference between the training data output value and the output of the fuzzy inference system corresponding to the same training data input value (the one associated with that training data output value). The training error trnError records the root mean squared error (RMSE) of the training data set at each epoch. fismat1 is the snapshot of the FIS structure when the training error measure is at its minimum. The Neuro-Fuzzy Designer plots the training error versus epochs curve as the system is trained.
You cannot control the step-size options with the Neuro-Fuzzy Designer. Using the command line anfis, the step-size array ss records the step-size during the training. Plotting ss gives the step-size profile, which serves as a reference for adjusting the initial step-size and the corresponding decrease and increase rates. The step-size (ss) for the command-line function anfis is updated according to the following guidelines:
If the error undergoes four consecutive reductions, increase the step-size by multiplying it by a constant (ssinc) greater than one.
If the error undergoes two consecutive combinations of one increase and one reduction, decrease the step-size by multiplying it by a constant (ssdec) less than one.
The default value for the initial step-size is 0.01; the default values for ssinc and ssdec are 1.1 and 0.9, respectively. All the default values can be changed via the training option for the command line anfis.
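To inspect the step-size profile, you can capture the ss output and plot it; this sketch assumes trnData and fismat exist as described earlier, and the 40-epoch training length is an illustrative choice:

```matlab
% Train and record the step-size at each epoch.
[fismat1,trnError,ss] = anfis(trnData,fismat,40);

% Plot the step-size profile as a reference for tuning
% the initial step-size and the increase/decrease rates.
plot(ss)
xlabel('Epoch')
ylabel('Step-size')
```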
The checking data, chkData, is used for testing the generalization capability of the fuzzy inference system at each epoch. The checking data has the same format as that of the training data, and its elements are generally distinct from those of the training data.
The checking data is important for learning tasks for which the input number is large, and/or the data itself is noisy. A fuzzy inference system needs to track a given input/output data set well. Because the model structure used for anfis is fixed, there is a tendency for the model to overfit the data on which it is trained, especially for a large number of training epochs. If overfitting does occur, the fuzzy inference system may not respond well to other independent data sets, especially if they are corrupted by noise. A validation or checking data set can be useful for these situations. This data set is used to cross-validate the fuzzy inference model. This cross-validation requires applying the checking data to the model and then seeing how well the model responds to this data.
When the checking data option is used with anfis, either via the command line, or using the Neuro-Fuzzy Designer, the checking data is applied to the model at each training epoch. When the command line anfis is invoked, the model parameters that correspond to the minimum checking error are returned via the output argument fismat2. The FIS membership function parameters computed using the Neuro-Fuzzy Designer when both training and checking data are loaded are associated with the training epoch that has a minimum checking error.
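A command line call with checking data can be sketched as follows; the hypothetical data set and the 50/50 interleaved split are illustrative assumptions:

```matlab
% Hypothetical data set; alternate rows form the training
% and checking sets.
x = (0.1:0.1:10)';
data = [x sin(x)./x];
trnData = data(1:2:end,:);
chkData = data(2:2:end,:);

% Generate an initial FIS and train with checking data;
% fismat2 holds the parameters at the minimum checking error.
fismat = genfis1(trnData,5,'gbellmf');
[fismat1,trnError,ss,fismat2,chkError] = ...
    anfis(trnData,fismat,[],[],chkData);
```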
The use of the minimum checking data error epoch to set the membership function parameters assumes
The checking data is similar enough to the training data that the checking data error decreases as the training begins.
The checking data error increases at some point in the training, after the data overfitting occurs.
Depending on the behavior of the checking data error, the resulting FIS may or may not be the one you need to use. Refer to Checking Data Does Not Validate Model.
The output of the command line anfis, fismat2, is the output FIS structure with the minimum checking error. This FIS structure is the one that you should use for further calculation if checking data is used for cross-validation.
The checking error is the difference between the checking data output value and the output of the fuzzy inference system corresponding to the same checking data input value (the one associated with that checking data output value). The checking error chkError records the RMSE for the checking data at each epoch. fismat2 is the snapshot of the FIS structure when the checking error is at its minimum. The Neuro-Fuzzy Designer plots the checking error versus epochs curve as the system is trained.
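From the command line, the two error curves can be compared directly; this sketch assumes trnError and chkError were returned by an anfis call that included checking data:

```matlab
% Compare training and checking RMSE curves; a checking error
% that starts rising while the training error keeps falling
% indicates the onset of overfitting.
plot(trnError)
hold on
plot(chkError)
hold off
legend('Training error (RMSE)','Checking error (RMSE)')
xlabel('Epoch')
ylabel('RMSE')
```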