Linear discriminant analysis (LDA)55 is one of the most common linear classifiers for separating two or more classes and is also a widely used dimensionality reduction technique. It is a supervised learning technique that separates groups by maximizing the difference between them, under the assumption that all classes share the same variance. It computes two scatter matrices: the within-class scatter matrix Sw, which represents the expected covariance of each class, and the between-class scatter matrix Sb, which is the covariance of the class mean vectors. The between-class scatter should be maximized while the within-class scatter is minimized, so the ratio of Sb to Sw is taken, and maximizing this ratio reduces to finding the eigenvectors of Sw^-1 Sb. To improve the performance of LDA, outliers should be removed, since they distort the mean and standard deviation, and the data should therefore be standardized. LDA is a simple, non-iterative method and requires little training time.
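The steps above (standardize, fit, project onto the discriminant axes) can be sketched with scikit-learn, which solves the Sw^-1 Sb eigenproblem internally. The synthetic two-class data here is purely illustrative, not real ECG features:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

# Synthetic two-class data (placeholder for real feature vectors).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 4)),
               rng.normal(2.0, 1.0, size=(100, 4))])
y = np.array([0] * 100 + [1] * 100)

# Standardize first: LDA's mean/covariance estimates are sensitive
# to feature scale and to outliers.
X_std = StandardScaler().fit_transform(X)

# Fitting maximizes the ratio of between-class to within-class scatter
# via the eigenvectors of Sw^-1 Sb.
lda = LinearDiscriminantAnalysis()
lda.fit(X_std, y)

# With two classes, LDA projects onto a single discriminant axis.
X_proj = lda.transform(X_std)
print(X_proj.shape)  # (200, 1)
print(lda.score(X_std, y))
```

Because there are only two classes, the projection has a single component (at most n_classes - 1 discriminant axes exist).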
Support vector machine (SVM)56 is one of the most widely used classifiers for VT and VF classification. It is a supervised learning algorithm that is efficient on high-dimensional datasets and has high generalization ability. It uses a non-linear mapping to project the input data into a high-dimensional space, so SVM can also deal with non-linear problems, and then separates the classes with a hyperplane in that space. The hyperplane is drawn so that the margin between the classes is maximized. SVM is a kernel-based learning technique; many different kernel functions exist, and the choice depends on the type of problem. SVM cannot handle class imbalance on its own, so additional effort is required to balance the data.
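A minimal sketch of this setup with scikit-learn follows. The RBF kernel performs the implicit high-dimensional mapping, and one common (though not the only) way to address class imbalance is to reweight the misclassification penalty per class. The imbalanced synthetic data is an illustrative assumption, not real VT/VF features:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler

# Synthetic, deliberately imbalanced two-class data
# (180 majority vs 20 minority samples).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(180, 6)),
               rng.normal(1.5, 1.0, size=(20, 6))])
y = np.array([0] * 180 + [1] * 20)
X_std = StandardScaler().fit_transform(X)

# RBF kernel: implicit non-linear mapping to a high-dimensional space.
# class_weight='balanced' scales C inversely to class frequency,
# one simple mitigation for the imbalance problem noted above.
clf = SVC(kernel="rbf", C=1.0, class_weight="balanced")
clf.fit(X_std, y)
print(clf.score(X_std, y))
```

Alternatives to per-class weighting include resampling the training set (e.g. oversampling the minority class) before fitting.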
4.5.3 Neural network
The artificial neural network (ANN) is among the most popular and effective techniques for VF and VT classification. It uses neurons to separate classes and consists of an input layer, one or more hidden layers, and an output layer. Each neuron multiplies its inputs by weights and passes the result through a basis function, and the output of one layer becomes the input of the next. Different functions can be used to train the network; the most common is the sigmoidal function. There are different ANN algorithms, among them the multi-layer perceptron, backpropagation, and feedforward propagation. Another ANN algorithm combines a neural network with fuzzy membership and is known as the neural network with fuzzy weighted membership (NEWFM). It is composed of three layers: the input layer, the hyperbox layer, and the class layer.
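The feedforward/backpropagation setup described above can be sketched as a small multi-layer perceptron in scikit-learn, using the sigmoidal (logistic) activation mentioned in the text. The hidden-layer size and the synthetic data are illustrative assumptions; NEWFM itself has no standard scikit-learn implementation and is not shown:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Synthetic two-class data standing in for real feature vectors.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 5)),
               rng.normal(2.0, 1.0, size=(100, 5))])
y = np.array([0] * 100 + [1] * 100)
X_std = StandardScaler().fit_transform(X)

# One hidden layer of 10 neurons with the sigmoidal activation;
# weights are fitted by backpropagation over the feedforward network.
mlp = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                    max_iter=2000, random_state=0)
mlp.fit(X_std, y)
print(mlp.score(X_std, y))
```

Each layer's output (inputs times weights, passed through the activation) feeds the next layer, exactly the layer-to-layer flow described above.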