Please use this identifier to cite or link to this item: http://hdl.handle.net/2122/15324
Authors: Münchmeyer, Jannes* 
Woollam, Jack* 
Rietbrock, Andreas* 
Tilmann, Frederik* 
Lange, Dietrich* 
Bornstein, Thomas* 
Diehl, Tobias* 
Giunchi, Carlo* 
Haslinger, Florian* 
Jozinović, Dario* 
Michelini, Alberto* 
Saul, Joachim* 
Soto, Hugo* 
Title: Which Picker Fits My Data? A Quantitative Evaluation of Deep Learning Based Seismic Pickers
Journal: Journal of Geophysical Research: Solid Earth 
Series/Report no.: 1/127 (2022)
Publisher: Wiley-AGU
Issue Date: 6-Jan-2022
DOI: 10.1029/2021JB023499
Keywords: seismic phase recognition
deep learning
Subject Classification: 04.06. Seismology
Abstract: Seismic event detection and phase picking are the basis of many seismological workflows. In recent years, several publications demonstrated that deep learning approaches significantly outperform classical approaches, achieving human-like performance under certain circumstances. However, as studies differ in the datasets and evaluation tasks, it is unclear how the different approaches compare to each other. Furthermore, there are no systematic studies about model performance in cross-domain scenarios, that is, when applied to data with different characteristics. Here, we address these questions by conducting a large-scale benchmark. We compare six previously published deep learning models on eight datasets covering local to teleseismic distances and on three tasks: event detection, phase identification, and onset time picking. Furthermore, we compare the results to a classical Baer-Kradolfer picker. Overall, we observe the best performance for EQTransformer, GPD, and PhaseNet, with a small advantage for EQTransformer on teleseismic data. Furthermore, we conduct a cross-domain study, analyzing model performance on datasets they were not trained on. We show that trained models can be transferred between regions with only mild performance degradation, but models trained on regional data do not transfer well to teleseismic data. As deep learning for detection and picking is a rapidly evolving field, we ensured extensibility of our benchmark by building our code on standardized frameworks and making it openly accessible. This allows model developers to easily evaluate new models or performance on new datasets. Furthermore, we make all trained models available through the SeisBench framework, giving end-users an easy way to apply these models.
Appears in Collections:Article published / in press

Files in This Item:
File: 2110.13671.pdf — Open Access accepted article, 3.3 MB, Adobe PDF

Page view(s): 157 (checked on Apr 17, 2024)
Download(s): 10 (checked on Apr 17, 2024)
