Please use this identifier to cite or link to this item: http://hdl.handle.net/2122/14406
Authors: Langer, Horst
Falsaperla, Susanna
Hammer, Conny
Editors: Spichak, Viacheslav 
Title: Supervised learning
Publisher: Elsevier B.V.
Issue Date: Jan-2020
URL: https://www.sciencedirect.com/science/article/pii/B9780128118429000029?via%3Dihub
https://doi.org/10.1016/B978-0-12-811842-9.00002-9
ISBN: 9780128118429
Keywords: pattern recognition
supervised learning
Support Vector Machines
Multilayer Perceptrons
Hidden Markov Models
Bayesian Networks
Subject Classification: 04.04. Geology 
04.06. Seismology 
04.07. Tectonophysics 
04.08. Volcanology 
05.04. Instrumentation and techniques of general interest 
Abstract: In supervised classification, we search for criteria that allow us to decide whether a sample belongs to a certain class of patterns. The identification of such decision functions is based on examples whose class membership is known a priori. The distinction of seismic signals produced by earthquakes and nuclear explosions is a classical problem of discrimination using supervised classification. We start from observed data, i.e., signals originating from known earthquakes and nuclear tests, and search for criteria on how to assign a class to a signal of unknown origin. We begin with Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (FLDA), identifying a linear element that best separates the groups. PCA, FLDA, and likelihood-based approaches make use of the statistical properties of the groups. Considering only the number of misclassified samples as a cost, we may prefer alternatives, such as Multilayer Perceptrons (MLPs). Support Vector Machines (SVMs) use a modified cost function, combining the criterion of the minimum number of misclassified samples with the requirement of separating the hulls of the groups by a margin as wide as possible. Both SVMs and MLPs overcome the limits of linear discrimination. A famous example of the advantages of the two techniques is the eXclusive OR (XOR) problem, where we wish to form classes of objects having the same parity: even, e.g., (0,0) and (1,1), or odd, e.g., (0,1) and (1,0). MLPs and SVMs offer effective methods for the identification of nonlinear decision functions, allowing us to resolve classification problems of any complexity provided the data set used during learning is sufficiently large. In Hidden Markov Models (HMMs), we consider observations whose meaning depends on their context. Observations form a causal chain generated by a hidden process. In Bayesian Networks (BNs), we represent conditional (in)dependencies between a set of random variables by a graphical model. In both HMMs and BNs, we aim at identifying models and parameters that explain the observations with the highest possible degree of probability.
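
As a minimal sketch of the XOR example mentioned in the abstract (assuming Python with NumPy and scikit-learn, which the chapter itself does not prescribe), the following shows that a linear discriminant fails on the four parity patterns while a kernel SVM classifies them all correctly:

import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# The four XOR patterns: class 0 = even parity (0,0), (1,1);
# class 1 = odd parity (0,1), (1,0).
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]])
y = np.array([0, 0, 1, 1])

# Fisher's linear discriminant: the two class means coincide at (0.5, 0.5),
# so no straight line separates the classes and samples are misclassified.
flda = LinearDiscriminantAnalysis()
flda.fit(X, y)
print("FLDA accuracy:", flda.score(X, y))  # at most 0.5

# An SVM with a nonlinear (RBF) kernel finds a nonlinear decision function
# that assigns all four patterns to the correct parity class.
svm = SVC(kernel="rbf", gamma=1.0, C=10.0)
svm.fit(X, y)
print("SVM accuracy:", svm.score(X, y))  # 1.0

The kernel implicitly maps the patterns into a space where the two parity classes become linearly separable; a multilayer perceptron with one hidden layer achieves the same effect through its nonlinear hidden units.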
Appears in Collections: Book chapters

Files in This Item:
File: Chapter 2.pdf
Description: abstract
Size: 290.2 kB
Format: Adobe PDF

Page view(s): 49 (checked on Apr 24, 2024)
Download(s): 17 (checked on Apr 24, 2024)
