  • Publication
    Open Access
    Probabilistic forecasting of plausible debris flows from Nevado de Colima (Mexico) using data from the Atenquique debris flow, 1955
    We detail a new prediction-oriented procedure aimed at volcanic hazard assessment based on geophysical mass flow models constrained with heterogeneous and poorly defined data. Our method relies on an itemized application of the empirical falsification principle over an arbitrarily wide envelope of possible input conditions. We thus provide a first step towards an objective and partially automated experimental design construction. In particular, instead of fully calibrating model inputs on past observations, we create and explore more general requirements of consistency, and then we separately use each piece of empirical data to remove those input values that are not compatible with it. Hence, partial solutions are defined to the inverse problem. This has several advantages compared to a traditionally posed inverse problem: (i) the potentially nonempty inverse images of partial solutions of multiple possible forward models characterize the solutions to the inverse problem; (ii) the partial solutions can provide hazard estimates under weaker constraints, potentially including extreme cases that are important for hazard analysis; (iii) if multiple models are applicable, specific performance scores against each piece of empirical information can be calculated. We apply our procedure to the case study of the Atenquique volcaniclastic debris flow, which occurred on the flanks of Nevado de Colima volcano (Mexico) in 1955. We adopt and compare three depth-averaged models currently implemented in the TITAN2D solver, available from https://vhub.org (Version 4.0.0 – last access: 23 June 2016). The associated inverse problem is not well-posed if approached in a traditional way. We show that our procedure can extract valuable information for hazard assessment, allowing the exploration of the impact of synthetic flows that are similar to those that occurred in the past but different in plausible ways.
The implementation of multiple models is thus a crucial aspect of our approach, as they extend the coverage to other plausible flows. We also observe that model selection is inherently linked to the inversion problem.
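The itemized falsification idea above can be sketched in a few lines: sample an arbitrarily wide envelope of inputs, push each sample through a forward model, and let each piece of empirical data separately falsify the inputs that are incompatible with it. The forward model, the parameter ranges, and the two data constraints below are hypothetical stand-ins for illustration, not the models or data used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a depth-averaged flow model: maps an input
# (basal friction angle [deg], volume [10^6 m^3]) to an output summary
# (runout distance, km). Real runs would call a solver such as TITAN2D.
def forward_model(phi_bed, volume):
    return 8.0 * np.sqrt(volume) / np.tan(np.radians(phi_bed))

# Arbitrarily wide envelope of possible inputs (assumed ranges).
phi = rng.uniform(5.0, 35.0, 5000)   # basal friction angle [deg]
vol = rng.uniform(0.1, 5.0, 5000)    # flow volume [10^6 m^3]
runout = forward_model(phi, vol)

# Each piece of empirical data D_i removes the inputs incompatible
# with it, yielding a partial solution to the inverse problem.
D1 = runout > 10.0   # e.g. the flow reached a distal site
D2 = runout < 25.0   # e.g. no deposit observed beyond 25 km

# The intersection of the partial solutions characterizes the
# solutions of the full (possibly ill-posed) inverse problem.
partial_solutions = [D1, D2]
inverse_solution = np.logical_and.reduce(partial_solutions)
print(inverse_solution.sum(), "of", len(phi), "inputs survive all constraints")
```

Note that even when the intersection is empty (the traditional inverse problem has no solution), each partial solution still supports a hazard estimate under its weaker constraint.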
  • Publication
    Open Access
    Refining the input space of plausible future debris flows using noisy data and multiple models of the physics
    Forecasts of future geophysical mass flows, fundamental in hazard assessment, usually rely on the reconstruction of past flows that occurred in the region of interest using models of physics that have been successful in hindcasting. The available pieces of data are commonly related to the properties of the deposit left by the flows and to historical documentation. Nevertheless, this information can be fragmentary and affected by relevant sources of uncertainty (e.g., erosion and remobilization, superposition of subsequent events, unknown duration, and source). Moreover, different past flows may have had significantly different physical properties, and even a single flow may change its physics with respect to time and location, making the application of a single model inappropriate. In a probabilistic framework, for each model M we define (M, P_M), where P_M is a probability measure over the parameter space of M. While the support of P_M can be restricted to a single value by solving an inverse problem for the optimal reconstruction of a particular flow, the inverse problem is not always well posed. That is, no input values are able to produce outputs consistent with all observed information. Choices based on limited data using classical calibration techniques (i.e., optimized data inversion) are often misleading, since they do not reflect all potential event characteristics and can be error-prone due to an incorrectly limited event space. Sometimes the strict replication of a past flow may lead to overconstraining the model, especially if we are interested in the general predictive capabilities of a model over a whole range of possible future events. In this study, we use a multi-model ensemble and a plausible region approach to provide a more prediction-oriented probabilistic framework for input space characterization in hazard analysis. In other words, we generalize a poorly constrained inverse problem, decomposing it into a hierarchy of simpler problems.
We apply our procedure to the case study of the Atenquique volcaniclastic debris flow, which occurred on the flanks of Nevado de Colima volcano (Mexico) in 1955. We adopt and compare three depth-averaged models. Input spaces are explored by Monte Carlo simulation based on Latin hypercube sampling. The three models are incorporated in our large-scale mass flow simulation framework TITAN2D. Our meta-modeling framework is fully described in Fig. 1 with a Venn diagram of input and output sets, and in Fig. 2 with a flowchart of the algorithm. See also for more details on the study. Our approach is characterized by three steps: (STEP 1) Let us assume that each model Mj is represented by an operator f_Mj in R^d, where d is a dimensional parameter which is independent of the model chosen and characterizes a common output space. This operator simply links the input values to the related output values in R^d. Thus we define the global set of feasible inputs. This puts all the models in a natural meta-modeling framework, only requiring essential properties of feasibility in the models, namely the existence of the numerical output and the realism of the underlying physics. (STEP 2) After a preliminary screening, we characterize the codomain of plausible outputs: that is, the target of our simulations. It includes all the outputs consistent with the observed data, plus additional outputs which differ in arbitrary but plausible ways. Example requirements are a robust numerical simulation without spurious effects, meaningful flow dynamics, and/or the capability to inundate a designated region. The specialized input space is then defined as the inverse image of the plausible outputs. (STEP 3) Furthermore, through more detailed testing, we can define the subspace of the inputs that are consistent with a piece of empirical data Di. For this reason, those sets are called partial solutions to the inverse problem.
In our case study, model selection appears to be inherently linked to the inversion problem. That is, the partial inverse problems enable us to select models depending on the flow characteristics and spatial location.
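The three-step hierarchy above can be sketched with Latin hypercube sampling as the exploration engine. The one-output operator f_M, the parameter ranges, the plausibility bounds, and the data tolerance below are all assumptions made for illustration; a real study would replace f_M with full TITAN2D runs.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical stand-in f_Mj for one model: inputs are
# (basal friction angle [deg], internal friction angle [deg]); the
# output lives in the common space R^d (here d = 1: runout, km).
def f_M(x):
    phi_bed, phi_int = x
    return 30.0 * np.exp(-0.08 * phi_bed) * (1.0 + 0.01 * phi_int)

# STEP 1: global set of feasible inputs, explored by Monte Carlo
# simulation based on Latin hypercube sampling.
sampler = qmc.LatinHypercube(d=2, seed=42)
X = qmc.scale(sampler.random(n=2000), [5.0, 15.0], [35.0, 45.0])
Y = np.array([f_M(x) for x in X])

# STEP 2: codomain of plausible outputs -- flows long enough to be
# meaningful yet not unrealistically mobile (assumed bounds). The
# specialized input space is the inverse image of this set.
plausible = (Y > 2.0) & (Y < 40.0)

# STEP 3: a partial solution -- the inputs consistent with one piece
# of empirical data D_i, e.g. a deposit observed about 10 km
# downstream, within an assumed tolerance.
D_i = plausible & (np.abs(Y - 10.0) < 2.0)
print(D_i.sum(), "inputs form the partial solution for this data piece")
```

Repeating STEP 3 per data piece and per model yields the family of partial solutions that the study compares across models.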
  • Publication
    Open Access
    Probabilistic forecasting of plausible debris flows using data and multiple models of the physics
    Hazard assessment of geophysical mass flows, such as landslides or pyroclastic flows, usually relies on the reconstruction of past flows that occurred in the region of interest using models of physics that have been successful in hindcasting. While physical models relate inputs and outputs of the dynamical system of the mass flow (Gilbert, 1991; Patra et al., 2018a), this relation depends on the choice of model and parameters, which is usually difficult for future events. Choices based on limited data using classical inversion are often misleading, since they do not reflect all potential event characteristics and, even in a probabilistic setting, can be error-prone due to an incorrectly limited event space. In this work, we use a multi-model ensemble and a plausible region approach to provide a more prediction-oriented probabilistic framework for hazard analysis.
  • Publication
    Open Access
    A prediction-oriented hazard assessment procedure based on the empirical falsification principle, application to the Atenquique debris flow, 1955, México
    In this study, we detail a new prediction-oriented procedure aimed at volcanic hazard assessment based on geophysical mass flow models with heterogeneous and poorly constrained output information. Our method is based on an itemized application of the empirical falsification principle over an arbitrarily wide envelope of possible input conditions. In particular, instead of fully calibrating input data on past observations, we create and explore input values under more general requirements of consistency, and then we separately use each piece of empirical data to remove those input values that are not compatible with it, hence defining partial solutions to the inversion problem. This has several advantages compared to a traditionally posed inverse problem: (i) the potentially non-empty intersection of the input spaces of partial solutions fully contains solutions to the inverse problem; (ii) the partial solutions can provide hazard estimates under weaker constraints, potentially including extreme cases that are important for hazard analysis; (iii) if multiple models are applicable, specific performance scores against each piece of empirical information can be calculated. We apply our procedure to the case study of the Atenquique volcaniclastic debris flow, which occurred in the State of Jalisco (Mexico) in 1955. We adopt and compare three depth-averaged models currently implemented in the TITAN2D solver, available from vhub.org. The associated inverse problem is not well-posed if approached in a traditional way. However, we show that our procedure can extract valuable information for hazard assessment, allowing the exploration of the impact of model flows that are similar to those which occurred in the past, but differ in plausible ways.
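Advantage (iii) above, per-model performance scores against each piece of empirical information, can be sketched as a compatibility fraction per model and per datum. The model names follow the rheologies available in TITAN2D (Mohr-Coulomb, Pouliquen-Forterre, Voellmy-Salm), but the output distributions and the two data pieces below are synthetic assumptions for illustration only.

```python
import numpy as np

# Hypothetical per-model, per-datum score: the fraction of a model's
# sampled outputs that are compatible with a data piece D_i, stated
# here as an interval on runout distance [km].
def score(outputs, lo, hi):
    outputs = np.asarray(outputs)
    return np.mean((outputs > lo) & (outputs < hi))

# Synthetic runout samples standing in for the three depth-averaged
# models' Monte Carlo outputs (assumed distributions, not real results).
models = {
    "Mohr-Coulomb": np.random.default_rng(1).normal(12.0, 3.0, 1000),
    "Pouliquen-Forterre": np.random.default_rng(2).normal(9.0, 2.0, 1000),
    "Voellmy-Salm": np.random.default_rng(3).normal(15.0, 4.0, 1000),
}

# Two illustrative data pieces: a deposit between 8 and 12 km (D_1)
# and no deposit beyond 20 km (D_2, scored as outputs below 20 km).
for name, y in models.items():
    print(f"{name}: D1 score {score(y, 8, 12):.2f}, D2 score {score(y, 0, 20):.2f}")
```

A model can score well against one datum and poorly against another, which is precisely why the scores are kept itemized rather than collapsed into a single calibration objective.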