Kernel methods for data analysis
Linear learning machines, such as the perceptron and the support vector machine, are powerful tools for pattern recognition. They separate classes with a hyperplane, but in many real-life problems such linear separation is not possible. The kernel method (or kernel trick) adds nonlinearity to these machines, and choosing an appropriate kernel increases their computational power. Kernel methods implicitly project the data into a higher-dimensional space. In this thesis, we present the properties of kernel methods. We run experiments with the standard kernels that are commonly available and compare their results on several datasets. We also experiment with a probabilistic kernel based on the Kullback-Leibler divergence, which operates on sets of vectors instead of single feature vectors. Finally, we propose a new kernel, the Mutual Information kernel, and evaluate it experimentally, obtaining promising results. We use a Support Vector Machine (SVM) for the classification task.
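The kernel trick mentioned above can be illustrated with a minimal sketch (an illustrative example, not code from the thesis): a degree-2 polynomial kernel on 2-D inputs equals an ordinary inner product in a 3-D feature space, so the higher-dimensional projection never has to be computed explicitly.

```python
import math

def poly_kernel(x, y):
    # Homogeneous polynomial kernel of degree 2: k(x, y) = (x . y)^2,
    # evaluated directly in the original 2-D input space.
    return sum(a * b for a, b in zip(x, y)) ** 2

def phi(x):
    # Explicit feature map into 3-D space whose standard inner product
    # reproduces poly_kernel: phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2).
    x1, x2 = x
    return [x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2]

x, y = [1.0, 2.0], [3.0, 0.5]
k_implicit = poly_kernel(x, y)                            # kernel trick: no projection
k_explicit = sum(a * b for a, b in zip(phi(x), phi(y)))   # inner product after projection
print(k_implicit, k_explicit)                             # both equal 16.0
```

An SVM only needs these kernel values (the Gram matrix), which is why swapping in a kernel such as the RBF, a KL-divergence kernel, or the proposed Mutual Information kernel changes the decision boundary without changing the underlying linear machine.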
- M Tech Dissertations