Now showing 1 - 10 of 30
  • Publication
    Open Access
    Validation data for manuscript "Novel statistical emulator construction for volcanic ash transport model Ash3d with physically-motivated measures"
    This database contains the testing and validation data presented in the manuscript titled "Novel statistical emulator construction for volcanic ash transport model Ash3d with physically-motivated measures", submitted to the Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. Please refer to the readme file for more details about the data.
  • Publication
    Open Access
    Probabilistic forecasting of plausible debris flows from Nevado de Colima (Mexico) using data from the Atenquique debris flow, 1955
    We detail a new prediction-oriented procedure aimed at volcanic hazard assessment based on geophysical mass flow models constrained with heterogeneous and poorly defined data. Our method relies on an itemized application of the empirical falsification principle over an arbitrarily wide envelope of possible input conditions. We thus provide a first step toward an objective and partially automated experimental design construction. In particular, instead of fully calibrating model inputs on past observations, we create and explore more general requirements of consistency, and then we separately use each piece of empirical data to remove those input values that are not compatible with it. Hence, partial solutions to the inverse problem are defined. This has several advantages compared to a traditionally posed inverse problem: (i) the potentially nonempty inverse images of partial solutions of multiple possible forward models characterize the solutions to the inverse problem; (ii) the partial solutions can provide hazard estimates under weaker constraints, potentially including extreme cases that are important for hazard analysis; (iii) if multiple models are applicable, specific performance scores against each piece of empirical information can be calculated. We apply our procedure to the case study of the Atenquique volcaniclastic debris flow, which occurred on the flanks of Nevado de Colima volcano (Mexico) in 1955. We adopt and compare three depth-averaged models currently implemented in the TITAN2D solver, available from https://vhub.org (Version 4.0.0 – last access: 23 June 2016). The associated inverse problem is not well posed if approached in a traditional way. We show that our procedure can extract valuable information for hazard assessment, allowing the exploration of the impact of synthetic flows that are similar to those that occurred in the past but differ in plausible ways. The implementation of multiple models is thus a crucial aspect of our approach, as they allow additional plausible flows to be covered. We also observe that model selection is inherently linked to the inversion problem.
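    A rough illustration of this itemized falsification idea is sketched below in Python; the two-parameter input envelope, the toy runout model standing in for TITAN2D, and the two consistency checks are all invented for illustration. Each piece of empirical data independently removes the input values incompatible with it, yielding partial solutions whose survival fractions act as simple per-datum performance scores.

```python
# Minimal sketch of itemized falsification over an input envelope.
# All ranges, the runout model, and the data constraints are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Envelope of possible inputs: (flow volume [m^3], basal friction angle [deg]).
samples = np.column_stack([
    rng.uniform(1e5, 1e7, 5000),
    rng.uniform(5.0, 35.0, 5000),
])

def runout_km(x):
    """Stand-in for a mass-flow model: longer runout for large, low-friction flows."""
    volume, friction = x[:, 0], x[:, 1]
    return 2.0 * np.log10(volume) - 0.3 * friction + rng.normal(0, 0.2, len(x))

outputs = runout_km(samples)

# Each datum D_i defines a partial solution: the inputs it does NOT falsify.
partial_solutions = {
    "D1: deposit reached the town (runout >= 8 km)": outputs >= 8.0,
    "D2: flow stopped before the river (runout <= 12 km)": outputs <= 12.0,
}

consistent = np.ones(len(samples), dtype=bool)
for name, mask in partial_solutions.items():
    print(f"{name}: {mask.mean():.1%} of inputs survive")  # per-datum score
    consistent &= mask

print(f"Inputs consistent with all data: {consistent.mean():.1%}")
```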
  • Publication
    Open Access
    Dynamic Probabilistic Hazard Mapping in the Long Valley Volcanic Region CA: Integrating Vent Opening Maps and Statistical Surrogates of Physical Models of Pyroclastic Density Currents
    Ideally, probabilistic hazard assessments combine available knowledge about physical mechanisms of the hazard, data on past hazards, and any precursor information. Systematically assessing the probability of rare yet catastrophic hazards adds a layer of difficulty due to limited observation data. Via computer models, one can exercise potentially dangerous scenarios that may not have happened in the past but are probabilistically consistent with the aleatoric nature of previous volcanic behavior in the record. Traditional Monte Carlo-based methods to calculate such hazard probabilities suffer from two issues: they are computationally expensive, and they are static. In light of new information, newly available data, signs of unrest, or new probabilistic analysis describing uncertainty about scenarios, the Monte Carlo calculation would need to be redone under the same computational constraints. Here we present an alternative approach utilizing statistical emulators that provides an efficient way to overcome the computational bottleneck of typical Monte Carlo approaches. Moreover, this approach is independent of an aleatoric scenario model and yet can be applied rapidly to any scenario model, making it dynamic. We present and apply this emulator-based approach to create multiple probabilistic hazard maps for inundation of pyroclastic density currents in the Long Valley Volcanic Region. Further, we illustrate how this approach enables an exploration of the impact of epistemic uncertainties on these probabilistic hazard forecasts. In particular, we focus on the uncertainty of vent opening models and how that uncertainty, both aleatoric and epistemic, impacts the resulting probabilistic hazard maps of pyroclastic density current inundation.
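    A minimal sketch of the emulator idea, assuming a hypothetical one-location simulator and invented scenario models rather than the actual TITAN2D/PDC setup: a Gaussian-process surrogate is fitted once on a handful of expensive runs, after which hazard probabilities can be recomputed almost instantly under any aleatoric scenario model.

```python
# Emulator-based Monte Carlo sketch; simulator, inputs, and scenario models
# are all illustrative stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def expensive_simulator(x):
    """Stand-in for a PDC simulation: flow depth [m] at one map location."""
    volume, direction = x[:, 0], x[:, 1]
    return np.maximum(0.0, volume * np.cos(direction) - 0.5)

# Design: a small number of simulator runs over (scaled volume, direction).
X_train = rng.uniform([0.0, -np.pi], [2.0, np.pi], size=(40, 2))
y_train = expensive_simulator(X_train)

emulator = GaussianProcessRegressor(kernel=RBF(length_scale=[0.5, 1.0]))
emulator.fit(X_train, y_train)

def hazard_probability(scenario_sampler, n=100_000, threshold=0.2):
    """P(flow depth > threshold) under a given aleatoric scenario model."""
    X = scenario_sampler(n)
    return float(np.mean(emulator.predict(X) > threshold))

# The emulator is independent of the scenario model: swapping in a new one
# (e.g., after signs of unrest shift the expected volume) is nearly free.
quiet = lambda n: rng.uniform([0.0, -np.pi], [1.0, np.pi], size=(n, 2))
unrest = lambda n: rng.uniform([0.5, -np.pi], [2.0, np.pi], size=(n, 2))
print("P(inundation), quiet model: ", hazard_probability(quiet))
print("P(inundation), unrest model:", hazard_probability(unrest))
```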
  • Publication
    Open Access
    A probabilistic hazard mapping tool for the Long Valley volcanic region (CA, USA)
    Probabilistic hazard maps are used to graphically represent forecasts of potentially hazardous volcanic processes associated with an eruption. The construction of a probabilistic hazard map requires the characterization of all possible scenarios (aleatoric variability) that might lead to an event of interest. These scenarios must then be “fed in” to a physical model of the geophysical process, which is typically computationally expensive to exercise. We present a hazard-mapping tool for the Long Valley region of California. This tool utilizes statistical surrogates of the physical model (in this demonstration, TITAN2D simulations of pyroclastic density currents) to perform rapid hazard assessment. It effectively replaces simulations that take O(min)-O(hours) with function evaluations that take a fraction of a second to exercise. This speed-up enables tremendous flexibility in scenario modeling, as we can quickly construct and compare probabilistic hazard maps under a variety of scenario models. Furthermore, we can quickly update a probabilistic hazard map as new data or emergent situations arise.
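    The speed-up can be pictured as follows; the grid, the per-cell surrogate, and the scenario model below are illustrative stand-ins for the tool's actual components, not its implementation.

```python
# Rapid probabilistic hazard mapping with surrogate evaluations in place of
# O(min)-O(hours) simulations. Everything here is a toy stand-in.
import numpy as np

rng = np.random.default_rng(2)
grid_x, grid_y = np.meshgrid(np.linspace(-5, 5, 50), np.linspace(-5, 5, 50))

def surrogate_depth(volume, direction):
    """Stand-in for fitted surrogates: flow depth at every grid cell at once."""
    dist = np.hypot(grid_x[None] - np.cos(direction)[:, None, None] * 3,
                    grid_y[None] - np.sin(direction)[:, None, None] * 3)
    return np.maximum(0.0, volume[:, None, None] - dist)

def hazard_map(n=2000):
    volume = rng.lognormal(mean=0.5, sigma=0.5, size=n)   # scenario model
    direction = rng.uniform(-np.pi, np.pi, size=n)
    depths = surrogate_depth(volume, direction)           # (n, 50, 50)
    return (depths > 0.1).mean(axis=0)                    # P(inundation) per cell

p_map = hazard_map()  # recomputed in seconds whenever the scenario model changes
print("max inundation probability on grid:", p_map.max())
```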
  • Publication
    Open Access
    Comparative Analysis of the Structures and Outcomes of Geophysical Flow Models and Modeling Assumptions Using Uncertainty Quantification
    We advocate here a methodology for characterizing models of geophysical flows and the modeling assumptions they represent, using a statistical approach over the full range of applicability of the models. Such a characterization may then be used to decide whether a model and its modeling assumptions are appropriate for a given use. We present our method by comparing three different models arising from different rheology assumptions, and the output data show unambiguously the performance of the models across a wide range of possible flow regimes. This comparison is facilitated by the recent release of our TITAN2D mass flow code, which allows a choice among multiple rheologies. The quantitative and probabilistic analysis of contributions from different modeling assumptions is particularly illustrative of the impact of those assumptions. Knowledge of which assumptions dominate, and by how much, is illustrated on the topography of the SW slope of Volcán de Colima (Mexico). A simple model performance evaluation completes the presentation.
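    A toy version of such a comparison might look like the sketch below; the three closures are invented stand-ins, not TITAN2D's actual rheology implementations, and the "observed" reference is synthetic. The point is only to show how per-regime deviations quantify the contribution of each modeling assumption across sampled flow regimes.

```python
# Comparing toy rheology stand-ins over a sampled range of flow regimes.
import numpy as np

rng = np.random.default_rng(3)
regimes = rng.uniform([1e5, 5.0], [1e7, 35.0], size=(1000, 2))  # (volume, slope)

models = {
    "model_A": lambda x: 1.8 * np.log10(x[:, 0]) - 0.25 * x[:, 1],
    "model_B": lambda x: 2.2 * np.log10(x[:, 0]) - 0.35 * x[:, 1],
    "model_C": lambda x: 2.0 * np.log10(x[:, 0]) - 0.30 * x[:, 1] + 1.0,
}

reference = 2.0 * np.log10(regimes[:, 0]) - 0.30 * regimes[:, 1]  # synthetic "truth"

for name, model in models.items():
    contribution = model(regimes) - reference  # per-regime impact of assumptions
    print(f"{name}: mean dev {contribution.mean():+.2f}, "
          f"RMS {np.sqrt((contribution**2).mean()):.2f}")
```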
  • Publication
    Open Access
    Refining the input space of plausible future debris flows using noisy data and multiple models of the physics
    Forecasts of future geophysical mass flows, fundamental in hazard assessment, usually rely on the reconstruction of past flows that occurred in the region of interest, using models of the physics that have been successful in hindcasting. The available pieces of data are commonly related to the properties of the deposit left by the flows and to historical documentation. Nevertheless, this information can be fragmentary and affected by relevant sources of uncertainty (e.g., erosion and remobilization, superposition of subsequent events, unknown duration and source). Moreover, different past flows may have had significantly different physical properties, and even a single flow may change its physics with respect to time and location, making the application of a single model inappropriate. In a probabilistic framework, for each model M we define (M, P_M), where P_M is a probability measure over the parameter space of M. While the support of P_M can be restricted to a single value by solving an inverse problem for the optimal reconstruction of a particular flow, the inverse problem is not always well posed; that is, no input values may be able to produce outputs consistent with all observed information. Choices based on limited data using classical calibration techniques (i.e., optimized data inversion) are often misleading, since they do not reflect all potential event characteristics and can be error-prone due to an incorrectly limited event space. Sometimes the strict replication of a past flow may lead to overconstraining the model, especially if we are interested in the general predictive capabilities of a model over a whole range of possible future events. In this study, we use a multi-model ensemble and a plausible region approach to provide a more prediction-oriented probabilistic framework for input space characterization in hazard analysis. In other words, we generalize a poorly constrained inverse problem, decomposing it into a hierarchy of simpler problems. We apply our procedure to the case study of the Atenquique volcaniclastic debris flow, which occurred on the flanks of Nevado de Colima volcano (Mexico) in 1955. We adopt and compare three depth-averaged models. Input spaces are explored by Monte Carlo simulation based on Latin hypercube sampling. The three models are incorporated in our large-scale mass flow simulation framework TITAN2D. Our meta-modeling framework is fully described in Fig. 1 with a Venn diagram of input and output sets, and in Fig. 2 with a flowchart of the algorithm; the full study provides more details. Our approach is characterized by three steps. (STEP 1) We assume that each model Mj is represented by an operator f_Mj linking input values to output values in R^d, where d is a dimensional parameter that is independent of the model chosen and characterizes a common output space. We thus define the global set of feasible inputs. This puts all the models in a natural meta-modeling framework, requiring only essential properties of feasibility in the models, namely the existence of the numerical output and the realism of the underlying physics. (STEP 2) After a preliminary screening, we characterize the codomain of plausible outputs, the target of our simulations: it includes all the outputs consistent with the observed data, plus additional outputs that differ in arbitrary but plausible ways, for instance a robust numerical simulation without spurious effects, with meaningful flow dynamics, and/or the capability to inundate a designated region. The specialized input space is then defined as the inverse image of the plausible outputs. (STEP 3) Finally, through more detailed testing, we define the subspaces of inputs that are consistent with each piece of empirical data D_i; for this reason, those sets are called partial solutions to the inverse problem. In our case study, model selection appears to be inherently linked to the inversion problem. That is, the partial inverse problems enable us to identify the appropriate models depending on the event characteristics and spatial location.
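    The three steps might look as follows in code; the forward operator, the feasibility checks, and the data constraints are all invented placeholders for the TITAN2D setup described above.

```python
# Sketch of the three-step input-space refinement with a toy model.
import numpy as np
from scipy.stats import qmc

# STEP 1 -- global feasible inputs, explored by Latin hypercube sampling.
sampler = qmc.LatinHypercube(d=2, seed=4)
X = qmc.scale(sampler.random(4000), [1e5, 5.0], [1e7, 35.0])  # (volume, friction)

def f_M(x):
    """Toy operator f_M mapping inputs into a common output space R^d
    (here d = 2: runout [km], peak flow depth [m])."""
    runout = 2.0 * np.log10(x[:, 0]) - 0.3 * x[:, 1]
    depth = 1e-6 * x[:, 0] / np.maximum(runout, 0.1)
    return np.column_stack([runout, depth])

Y = f_M(X)
feasible = np.isfinite(Y).all(axis=1) & (Y[:, 0] > 0)  # output exists, physics realistic

# STEP 2 -- plausible outputs: consistent with data, or plausibly different.
plausible = feasible & (Y[:, 0] >= 2.0) & (Y[:, 0] <= 20.0)

# STEP 3 -- partial solutions: inputs consistent with each datum D_i.
D = {
    "D1 (deposit thickness)": plausible & (Y[:, 1] >= 0.5),
    "D2 (documented runout)": plausible & (Y[:, 0] >= 8.0),
}
for name, mask in D.items():
    print(f"{name}: {mask.sum()} of {len(X)} inputs remain")
```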
  • Publication
    Open Access
    Long Valley volcanic region long-term vent opening maps
    The high potential of probabilistic approaches for hazard assessment was identified by the VIMESEA group during the first meeting. In this context, the INGV group has developed a novel approach that was first applied to the Phlegraean Fields in Italy. This presentation presented further applications to other volcanic areas in the world, addressing the probabilities of volcanic vent reactivation. Though the message delivered by such simulations to civil authorities in charge of volcanic crisis management may not be straightforward, the VIMESEA participants concluded that probabilistic approaches are essential for hazard assessment.
  • Publication
    Open Access
    Multi-model probability assessments in the Long-Valley volcanic region (CA)
    The Long Valley volcanic region is an active volcanic area situated at the eastern base of the Sierra Nevada escarpment, dominated by a 32-km-wide resurgent caldera formed ~760 ka ago. Eruptions during the last 180 ka have been localized at Mammoth Mountain on the western rim of the caldera and along the Mono-Inyo Craters volcanic chain, stretching about 45 km northward. The past eruption record is characterized by significant acceleration during the last 6 ka. In 1325 - 1350 AD there was a ~1 km3 eruption along a 25 km section of the Mono-Inyo Craters chain. The most recent eruption, in ~1700 AD, created Paoha Island in Mono Lake. The last eruption in the southern part of the system was ~10 ka (Red Cones), but continuous CO2 degassing, potential precursory signals, and recent geophysical studies suggest that the Mammoth Mountain area could be active again. Multiple spatial probability models were developed based on past vent locations. One of the models couples this information with pre-existing faults, incorporating proximity to fault outcrop sites as a parameter in the vent location forecast. Similarly, different Poisson-type models have been developed for modeling the temporal sequence of eruptions and making estimates of the current volcanic intensity of the system (i.e., the expected rate of eruptions per year). The models implement various self-excitement features, assuming that the expected volcanic intensity is increased by past events and decreased by prolonged periods of quiescence. All the available models can be considered as different “experts”, which has significant analogies with “Structured Expert Judgment” problems. “Bayesian Model Averaging” is presented as a flexible technique for combining the results of multiple models, relying on their performance in hindcasting the past record. The analysis is set up in a doubly stochastic framework, enabling us to incorporate some of the main sources of epistemic uncertainty; these include the effects of the unknown relevance of the Mammoth Mountain area, the incompleteness of the past record and mapped faults, and the uncertain age (and location) of past events. Our findings provide a rational basis for hazard mapping of the next eruption in the Long Valley volcanic region, suggesting that the hazard associated with Mammoth Mountain volcanism should be carefully reevaluated.
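    A compact sketch of the Bayesian Model Averaging step, weighting each "expert" by its hindcast likelihood under a uniform model prior; the eruption counts and rate models below are invented stand-ins for the Poisson-type, self-exciting formulations described above.

```python
# Bayesian Model Averaging over toy temporal eruption-rate models.
import numpy as np
from scipy.stats import poisson

# Observed eruption counts in consecutive windows of a hypothetical record.
counts = np.array([0, 1, 0, 2, 1, 3])

# Each "expert" model predicts an expected rate per window.
model_rates = {
    "stationary":   np.full(6, 1.0),
    "accelerating": np.linspace(0.2, 2.2, 6),
    "self-excited": np.array([0.3, 0.5, 0.9, 1.1, 1.6, 2.4]),
}

# BMA weight of model M is proportional to prior(M) * P(record | M).
log_liks = {m: poisson.logpmf(counts, r).sum() for m, r in model_rates.items()}
liks = np.exp(np.array(list(log_liks.values())) - max(log_liks.values()))
weights = liks / liks.sum()

# Combined forecast of current intensity (expected eruptions per window).
next_rates = {"stationary": 1.0, "accelerating": 2.6, "self-excited": 3.0}
combined = sum(w * next_rates[m] for m, w in zip(model_rates, weights))
print(dict(zip(model_rates, np.round(weights, 3))), "->", round(combined, 2))
```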
  • Publication
    Open Access
    Probabilistic forecasting of plausible debris flows using data and multiple models of the physics
    Hazard assessment of geophysical mass flows, such as landslides or pyroclastic flows, usually relies on the reconstruction of past flows that occurred in the region of interest, using models of the physics that have been successful in hindcasting. While physical models relate inputs and outputs of the dynamical system of the mass flow (Gilbert, 1991; Patra et al., 2018a), this relation depends on the choice of model and parameters, which is usually difficult for future events. Choices based on limited data using classical inversion are often misleading, since they do not reflect all potential event characteristics and, even in a probabilistic setting, can be error-prone due to an incorrectly limited event space. In this work, we use a multi-model ensemble and a plausible region approach to provide a more prediction-oriented probabilistic framework for hazard analysis.
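    The contrast between classical inversion and the plausible-region idea can be made concrete with a toy one-parameter example (all numbers and the forward "model" are illustrative): optimizing a misfit collapses the input space to a single point, while keeping every input within a tolerance preserves plausible-but-different events.

```python
# Classical best-fit calibration vs. a plausible region of inputs.
import numpy as np

rng = np.random.default_rng(5)
volumes = rng.uniform(1e5, 1e7, 10_000)          # candidate inputs
runout = 2.0 * np.log10(volumes)                 # toy forward model [km]
observed = 12.0                                  # noisy deposit-based estimate

best = volumes[np.argmin(abs(runout - observed))]      # classical inversion
plausible = volumes[abs(runout - observed) <= 1.5]     # plausible region

print(f"single calibrated volume: {best:.3g} m^3")
print(f"plausible volumes span:   {plausible.min():.3g} - {plausible.max():.3g} m^3")
```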
  • Publication
    Open Access
    The Failure Forecast Method applied to the GPS and seismic data collected in the Campi Flegrei caldera (Italy) in 2011-2020.
    Episodes of slow uplift and subsidence of the ground, called bradyseism, characterize the recent dynamics of the Campi Flegrei caldera (Italy). In the last decades two major bradyseismic crises occurred, in 1969/1972 and in 1982/1984, with ground uplifts of 1.70 m and 1.85 m, respectively. Thousands of earthquakes, with a maximum magnitude of 4.2, caused the partial evacuation of the town of Pozzuoli in October 1983. This was followed by about 20 years of overall subsidence, about 1 m in total, until 2005. After 2005 the Campi Flegrei caldera has been rising again, at a slower rate, with a total maximum vertical displacement in the central area of ca. 70 cm. The two signals of ground deformation and background seismicity have been found to share similar accelerating trends. The failure forecast method can provide a first assessment of failure time from present-day unrest signals at the Campi Flegrei caldera, based on the monitoring data collected in [2011, 2020] and under the assumption that their trend extrapolates into the future. In this study, we apply a probabilistic approach that enhances the well-established method by incorporating stochastic perturbations in the linearized equations. The stochastic formulation enables the processing of decade-long time windows of data, including the effects of the variable dynamics that characterize the unrest. We provide temporal forecasts with uncertainty quantification, potentially indicative of eruption dates. The basis of the failure forecast method is a fundamental law for failing materials: ẇ^(−α) ẅ = A, where ẇ is the rate of the precursor signal, and α, A are model parameters that we fit on the data. The solution when α > 1 is a power law with exponent 1/(1 − α), diverging at a time T_f called the failure time. In our case study, T_f is the time at which the accelerating signals collected at Campi Flegrei would diverge if their trend were extrapolated. The interpretation of T_f as the onset of a volcanic eruption is speculative. It is important to note that future variations of the monitoring data could either slow down the increase observed so far, or suddenly accelerate it, leading to shorter failure times than those reported here. Data from observations at all locations in the region were also aggregated to reinforce the computation of T_f, reducing the impact of observation errors.
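    For the special case α = 2, the law reduces to the classic inverse-rate form: 1/ẇ decays linearly in time, and T_f is the zero crossing of the fitted line. The sketch below demonstrates this on synthetic data; the paper's stochastic, decade-long treatment is considerably richer than this deterministic fit.

```python
# Failure forecast method in its inverse-rate form (alpha = 2), on synthetic data.
# For alpha = 2, w_dot^(-alpha) * w_ddot = A implies 1/w_dot is linear in time:
# slope = -A, and the zero crossing gives the failure time T_f.
import numpy as np

rng = np.random.default_rng(6)

# Synthetic accelerating precursor rate with true T_f = 10 (arbitrary units):
# w_dot(t) = 1 / (A * (T_f - t)) with A = 0.5, plus multiplicative noise.
t = np.linspace(0.0, 8.0, 200)
w_dot = 1.0 / (0.5 * (10.0 - t)) * np.exp(rng.normal(0, 0.05, t.size))

# Fit a line to the inverse rate 1/w_dot vs t, then extrapolate to zero.
slope, intercept = np.polyfit(t, 1.0 / w_dot, 1)
T_f = -intercept / slope
print(f"A ~ {-slope:.3f}, forecast failure time T_f ~ {T_f:.2f}")
```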