Earth-prints repository (DSpace)


Authors: Marzocchi, W.
Jordan, T. H.
Title: Testing for ontological errors in probabilistic forecasting models of natural systems
Title of journal: Proceedings of the National Academy of Sciences
Series/Report no.: 111 (2014)
Issue Date: 2014
DOI: 10.1073/pnas.1410183111
Keywords: Bayesian statistics; testing hazard models
Abstract: Probabilistic forecasting models describe the aleatory variability of natural systems as well as our epistemic uncertainty about how the systems work. Testing a model against observations exposes ontological errors in the representation of a system and its uncertainties. We clarify several conceptual issues regarding the testing of probabilistic forecasting models for ontological errors: the ambiguity of the aleatory/epistemic dichotomy, the quantification of uncertainties as degrees of belief, the interplay between Bayesian and frequentist methods, and the scientific pathway for capturing predictability. We show that testability of the ontological null hypothesis derives from an experimental concept, external to the model, that identifies collections of data, observed and not yet observed, that are judged to be exchangeable when conditioned on a set of explanatory variables. These conditional exchangeability judgments specify observations with well-defined frequencies. Any model predicting these behaviors can thus be tested for ontological error by frequentist methods; e.g., using P values. In the forecasting problem, prior predictive model checking, rather than posterior predictive checking, is desirable because it provides more severe tests. We illustrate experimental concepts using examples from probabilistic seismic hazard analysis. Severe testing of a model under an appropriate set of experimental concepts is the key to model validation, in which we seek to know whether a model replicates the data-generating process well enough to be sufficiently reliable for some useful purpose, such as long-term seismic forecasting. Pessimistic views of system predictability fail to recognize the power of this methodology in separating predictable behaviors from those that are not.
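
As an illustration of the prior predictive checking and frequentist P-value testing described in the abstract, the following short Python sketch simulates a forecast's prior predictive distribution of event counts and computes a one-sided P value for a hypothetical observation. It is not taken from the paper: the Poisson observation model, the Gamma prior on the rate, and all numerical values are assumptions chosen only for illustration.

    # Minimal sketch of a prior predictive check for a count-based forecast.
    # Assumed (hypothetical) model: Poisson aleatory variability whose rate
    # carries epistemic uncertainty expressed as a Gamma degree-of-belief prior.
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical epistemic prior on the event rate (prior mean = 5 events).
    shape, scale = 20.0, 0.25
    n_sims = 100_000

    # Prior predictive distribution: integrate the aleatory (Poisson)
    # variability over the epistemic (Gamma) uncertainty by Monte Carlo.
    rates = rng.gamma(shape, scale, size=n_sims)
    simulated_counts = rng.poisson(rates)

    observed_count = 11  # hypothetical observation for this example

    # One-sided P value: probability, under the prior predictive distribution,
    # of a count at least as extreme as the one observed.
    p_value = np.mean(simulated_counts >= observed_count)
    print(f"prior predictive P value: {p_value:.3f}")

A small P value here would flag a possible ontological error in the assumed forecast, in the spirit of the severe-testing approach the abstract describes.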
Appears in Collections:05.01.04. Statistical analysis
Papers Published / Papers in press

Files in This Item:

File: PNAS_Marzocchi_Jordan_2014.pdf | Size: 1.05 MB | Format: Adobe PDF | Visibility: Not available

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
