DSpace Collection:
http://hdl.handle.net/2122/56
2016-10-01T10:37:35Z

Fault Directivity and Seismic Hazard
http://hdl.handle.net/2122/10121
Title: Fault Directivity and Seismic Hazard
Authors: Spagnuolo, E.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Roma1, Roma, Italia
Abstract: In planning the design of structures in a region of potential seismic activity, a specification of the “strength” of the earthquake ground motion, or of the most likely ground-motion level, is needed. The occurrence of an earthquake, and of its effects, is described as a stochastic process; its realization is therefore linked to state variables defined over a known space through a continuous function. The Ground Motion Predictive Equation (GMPE) realizes this function and, despite its shortcomings as an effective design tool to control damage (Priestley, 2003), it is still the most widely used representation of earthquake ground motion in engineering practice. As a consequence, the majority of hazard estimations are based on GMPEs, which provide a ground-motion specification as a function of a certain number of variables.
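The role of a GMPE as a function of a few predictor variables can be illustrated with a minimal sketch; the functional form and the coefficients below are purely hypothetical, for illustration only, and are not taken from any published model:

```python
import math

def gmpe_ln_pga(mag, r_km, c0=-1.5, c1=0.9, c2=-1.6, h=6.0):
    """Toy GMPE: ln(PGA) = c0 + c1*M + c2*ln(sqrt(R^2 + h^2)).

    All coefficients are hypothetical. Real models add site,
    style-of-faulting and (sometimes) directivity terms, plus a
    term for aleatory variability."""
    return c0 + c1 * mag + c2 * math.log(math.hypot(r_km, h))

# With only magnitude and distance, the prediction is isotropic around
# the hypocenter: the same magnitude at growing distance simply decays.
pga_10 = math.exp(gmpe_ln_pga(6.3, 10.0))
pga_50 = math.exp(gmpe_ln_pga(6.3, 50.0))
assert pga_10 > pga_50  # attenuation with distance
```

Directivity-aware formulations modify such a base form with azimuth- and rupture-geometry-dependent terms, which is what breaks the isotropy of the predicted isoseismals.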
In fact, in many situations there are not enough data to allow a direct empirical specification of ground motion. Only a few regions, e.g. Japan, have strong-motion networks and data banks sufficient to carry out seismic hazard assessment without the benefit of a regionally derived ground-motion predictive model. The central role GMPEs hold in hazard assessment motivates the recent efforts to better synthesize all available regional information and general knowledge about earthquakes. The representation of ground motion through a GMPE is simple compared to the complexity of the physical process involved. If only magnitude and distance are taken into account, GMPEs predict isoseismal curves that are isotropic around the hypocenter and uniform if no other effects (e.g. site effects) are considered. Instead, the presence of a fault plane, across which a process of failure in shear develops, makes this general formulation diverge from the observations in a specific case. In fact, the dynamic propagation of rupture results in anisotropy effects not included in the predictions, although back-analyses of ground motions from past earthquakes have shown that such effects have a strong influence on the spatial distribution of ground motion. Although the anisotropy effects resulting from rupture propagation have been generally recognized and finally incorporated into predictions, their effect has not yet been tested in a hazard context. On the contrary, all the aforementioned issues motivate an in-depth analysis of their contribution to the present tools of seismic hazard assessment. This work is mainly addressed to conducting such an analysis, guided by the following questions: Does directivity improve the performance of ground-motion prediction in real-time applications? Is directivity still effective in a PSHA framework?
What can a deterministic hazard model tell us about directivity?

2010-03-31T22:00:00Z

Evoluzione della trazione dinamica sulla faglia durante i forti terremoti (Evolution of dynamic traction on the fault during large earthquakes)
http://hdl.handle.net/2122/10120
Title: Evoluzione della trazione dinamica sulla faglia durante i forti terremoti (Evolution of dynamic traction on the fault during large earthquakes)
Authors: Spagnuolo, E.
Abstract: In this work, the dynamics of seismogenic processes was studied through an innovative method based on a solution of the elastodynamic equation that expresses the shear stress acting on the fault plane as a function of the slip velocity and its evolution in time. The input to the numerical procedure is therefore the time evolution of the slip velocity at each point of the fault plane. This method makes it possible to constrain the evolution of traction as a function of time and of position on the fault, and thus allows the estimation of the main dynamic parameters for real earthquakes. The advantage of this approach is that no constitutive law is imposed a priori. The numerical procedure was applied to real large earthquakes for which kinematic models describing the propagation of the coseismic rupture are available, in order to study the mechanical behavior of seismogenic structures and the mechanisms responsible for the release of energy. The application of the method produced original and interesting results: the traction histories at each point of the fault plane, both as a function of time and as a function of slip, show the behavior expected from the theoretical interpretation of the coseismic rupture propagation process, namely a clear dynamic-weakening behavior. The dynamic parameters are well constrained, although dependent on the resolution of the kinematic models, and show a heterogeneous distribution over the fault plane. A very important parameter obtained from the dynamic traction histories is the breakdown work which, as defined by Tinti et al. (2005), provides an estimate of the energy spent to make the rupture front propagate.
The values obtained for the different earthquakes agree with those recently published in the literature (Rice et al., 2005; Tinti et al., 2005) and show that the breakdown work is a contribution commensurable with estimates of the radiated energy.

2006-03-31T22:00:00Z

Imaging of crustal scatterers using multiple seismic arrays
http://hdl.handle.net/2122/10115
Title: Imaging of crustal scatterers using multiple seismic arrays
Authors: Roselli, P.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Roma1, Roma, Italia
Abstract: Array seismology is a useful tool for performing a detailed investigation of the Earth’s interior. By exploiting the coherence properties of the wavefield, seismic arrays can extract directivity information and increase the ratio of the coherent signal amplitude to the amplitude of incoherent noise. The Double Beam Method (DBM), developed by Krüger et al. (1993, 1996), is one possible application of seismic arrays to a refined seismic investigation of the crust and mantle. The DBM is based on a combination of source and receiver arrays, leading to a further improvement of the signal-to-noise ratio and a reduction of the error in the location of coherent phases. Previous DBM work addressed mantle and core/mantle resolution (Krüger et al., 1993; Scherbaum et al., 1997; Krüger et al., 2001). An implementation of the DBM is presented at large 2D scale (Italian data set for the Mw=9.3 Sumatra earthquake) and at 3D crustal scale as proposed by Rietbrock & Scherbaum (1999), applying a revised version of the Source Scanning Algorithm (SSA; Kao & Shan, 2004). In the 2D application, the propagation of the rupture front in time has been computed. In the 3D application, the study area (20x20x33 km3), the data set and the source-receiver configurations are those of the KTB-1994 seismic experiment (Jost et al., 1998). We used 60 short-period seismic stations (200-Hz sampling rate, 1-Hz sensors) arranged in 9 small arrays deployed in 2 concentric rings of about 1 km (A-arrays) and 5 km (B-arrays) radius. The coherence values of the scattering points have been computed in the crustal volume, for a finite time window along all array stations, given the hypothesized origin time and source location.
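The delay-and-sum stacking that underlies array beamforming (on which double-beam coherence measures build) can be sketched as follows; the station count, delays and pulse shape are synthetic stand-ins for illustration, not the KTB configuration:

```python
import numpy as np

def beam_power(traces, delays_s, fs):
    """Delay-and-sum beam: shift each trace by the travel-time delay
    predicted for a hypothesized scatterer, stack, and return the peak
    stack amplitude (a simple coherence proxy)."""
    stack = np.zeros(traces.shape[1])
    for tr, d in zip(traces, delays_s):
        shift = int(round(d * fs))
        stack += np.roll(tr, -shift)  # undo the moveout before stacking
    return np.abs(stack).max() / len(traces)

# Synthetic demo: 9 stations, a Gaussian pulse with a known moveout.
fs = 200.0                              # 200-Hz sampling, as in the text
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1 / fs)
true_delays = np.linspace(0.0, 0.4, 9)  # hypothetical moveout across arrays
traces = np.array([
    np.exp(-((t - 0.5 - d) ** 2) / (2 * 0.01 ** 2))
    + 0.05 * rng.standard_normal(t.size)
    for d in true_delays
])

aligned = beam_power(traces, true_delays, fs)     # correct hypothesis
misaligned = beam_power(traces, np.zeros(9), fs)  # wrong hypothesis
assert aligned > misaligned  # coherent stacking rewards the true location
```

Scanning such a coherence measure over a grid of candidate scatterer locations and origin times is the essence of source-scanning imaging.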
The resulting images can be seen as a (relative) joint log-likelihood that any point in the subsurface has contributed to the full set of observed seismograms.

2010-12-31T23:00:00Z

Atmospheric water vapour tomography for DInSAR application and effect of volcanic plume on the microwaves
http://hdl.handle.net/2122/9869
Title: Atmospheric water vapour tomography for DInSAR application and effect of volcanic plume on the microwaves
Authors: Aranzulla, Massimo; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Catania, Catania, Italia
Abstract: A particular synergy between GPS and SAR techniques, aimed at improving the precision of current ground-deformation monitoring techniques, is investigated. The study of atmospheric anomalies in the propagation of GPS electromagnetic waves is useful for extrapolating information about the wet refractivity field. Because of its height and its quite variable weather conditions, the estimation of atmospheric anomalies over Mount Etna using GPS measurements is of considerable importance for calibrating SAR interferograms and establishing the “effective” ground deformation of the volcanic edifice. In this study we present a method to obtain a 3D electromagnetic-wave velocity tomography, starting from the analysis of GPS output data. Thanks to the agreement between the University of Catania and the INGV-OE, the GPS data used in this work come from the ”Etn@net” framework. The GPS processing has been carried out with the GAMIT software, adopting appropriate processing parameters. A new software package was developed to derive the tropospheric tomography from the GPS data. The code was validated using synthetic tests that assume different structures of atmospheric anomalies, with random noise about twice as severe as the typical GPS errors. The results of the tests proved that the tomography software is able to reconstruct the simulated anomalies faithfully. The code was applied to study the structure of the atmosphere in an actual case: August 12, 2011 at 10:00 am. The results of the tomography clearly indicate important features of the refractivity field on the studied day. In conclusion, the synthetic tests and the application of the new software to actual data sets demonstrate that it is able to reveal tropospheric anomalies and is thus a useful tool for improving the results of SAR interferometry.

An indirect outcome of the use of GPS for atmospheric sounding over an active volcanic area concerns the detection of volcanic products in the atmosphere. Given the persistent activity of Mt. Etna during the last two years, the capability of GPS to detect the volcanic plume was investigated. The Etna volcano is particularly suited for an in-depth investigation of the aptitude of GPS observations to detect volcanic plumes, owing both to the high frequency of explosive episodes and to the well-developed GPS network. Two different approaches were tested in order to examine the capability of the GPS network to detect volcanic plumes at Etna. The first approach is applied to the signal strength of the GPS L2 carrier-phase data; the second approach is statistical, and analyzes the single-difference post-fit residuals of the processed signals to test the hypothesis that the plume affects the GPS data. The proposed method has been tested on the September 4-5, 2007 activity of Mt. Etna. Results from nineteen GPS permanent stations show that during this explosive activity the GPS residuals definitely include the contribution of the volcanic plume. In the future, data derived from the GPS stations located on Etna’s flanks could be used to improve the volcanic-ash alerting system already operating at the Istituto Nazionale di Geofisica e Vulcanologia, Osservatorio Etneo.

2013-01-11T23:00:00Z

Calibration and validation (CAL/VAL) of Remote Sensing data and spectral characterization of volcanic rocks
http://hdl.handle.net/2122/8827
Title: Calibration and validation (CAL/VAL) of Remote Sensing data and spectral characterization of volcanic rocks
Authors: Amici, Stefania
Abstract: A calibration method has been applied to satellite data in the visible-infrared spectral range, from which spectral reflectance and emissivity may be retrieved. This dissertation describes the steps needed for multispectral/hyperspectral data calibration and a number of algorithms for reflectance and emissivity retrieval. The methodology is applied to retrieve the reflectance and emissivity of the Teide volcano and is validated through a comparison with “ground truth”. The “ground truth” spectra were acquired during a field campaign carried out in September 2007. As applications of the calibrated and validated data, the classification of the Teide volcano and the temperature map are discussed.

2010-02-28T23:00:00Z

Integrating new and traditional approaches for the estimate of slip-rates of active faults: examples from the Mw 6.3, 2009 L’Aquila earthquake area, Central Italy
http://hdl.handle.net/2122/8478
Title: Integrating new and traditional approaches for the estimate of slip-rates of active faults: examples from the Mw 6.3, 2009 L’Aquila earthquake area, Central Italy
Authors: Civico, Riccardo; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Roma1, Roma, Italia
Abstract: This thesis developed a multidisciplinary and multi-scale investigation strategy based on the integration of traditional and innovative approaches, aimed at improving the identification and seismogenic characterization of normal faults, focusing mainly on slip-rate estimation as a measure of fault activity.
The causative fault of the L’Aquila Mw 6.3 April 6, 2009 earthquake was used as a test site for the application, testing, and refinement of traditional and/or innovative approaches, with the aim to 1) evaluate their strengths and limitations, 2) develop a reference approach useful for extending the investigation to other active faults in the area, and 3) translate the results of the methodological approaches into new inputs to local seismic hazard.
The April 6, 2009 L’Aquila earthquake occurred on a hitherto poorly known tectonic structure considered to have limited seismic potential, the Paganica - San Demetrio fault system (PSDFS), and thus highlighted the need for detailed knowledge of the location, geometry, and characterization of the active faults that are potential sources of future earthquakes.
To fill the gap in knowledge revealed by the occurrence of the 2009 L’Aquila earthquake, we developed a multidisciplinary, multiscale strategy consisting of paleoseismological investigations, detailed geomorphological and geological field studies, shallow geophysical imaging, and an innovative methodology that uses, as an alternative paleoseismological tool, core sampling and laboratory analyses together with in-situ measurements of physical properties.
The integration of geomorphology, geology and shallow geophysics was essential to produce a new detailed geomorphological and geological map of the PSDFS and to define its tectonic style, arrangement, kinematics, extent, geometry and internal complexities.
Our investigations highlighted that the PSDFS is a 19 km-long tectonic structure characterized by a complex structural setting at the surface and arranged in two main sectors: the Paganica sector to the NW and the San Demetrio sector to the SE. The Paganica sector is characterized by a narrow deformation zone, with a relatively small (but deep) Quaternary basin affected by a few fault splays. The San Demetrio sector is characterized by strain that is distributed at the surface across several tectonic structures, with the system opening into a set of parallel, km-spaced fault traces that exhume and dissect the Quaternary basin.
The integration of all the fault-displacement data and age constraints (radiocarbon dating, optically stimulated luminescence (OSL) and tephrochronology) resulting from paleoseismological, geomorphological, geophysical and geological investigations played a primary role in estimating the slip-rate of the PSDFS. Slip-rates were estimated for different time intervals in the Quaternary, from the Early Pleistocene (1.8 Ma) to the Late Holocene (last 5 ka), yielding values ranging between 0.09 and 0.58 mm/yr and providing an average Quaternary slip-rate representative of the PSDFS of 0.27 - 0.48 mm/yr.
We also contributed to the understanding of the PSDFS seismic behavior and of the local seismic hazard by estimating the maximum expected magnitude for this fault on the basis of its length (ca. 20 km) and slip per event (up to 0.8 m), and by identifying the two fault splays most active at present. Our multidisciplinary results converge toward the possibility of past surface-faulting earthquakes with moment magnitudes between 6.3 and 6.8, notably larger than the 2009 event but compatible with the magnitude range observed in historical earthquakes in the area. The slip-rate distribution over time and space and the tectonic style of the PSDFS suggest the occurrence of strain migration through time in the southern sector, from the easternmost basin-bounding fault splay toward the southwestern splays. This has significant implications for surface-faulting hazard in the area, because it can contribute to defining the fault splays with a higher potential to slip during future earthquakes along the PSDFS.
From a methodological point of view, the multidisciplinary and multiscale investigation strategy emphasizes the advantages of the joint application of different approaches and methodologies for the identification and characterization of active faults.
Our work suggests that each approach alone may provide useful information, but only the application of a multidisciplinary strategy is effective in providing robust results and in defining a proper framework of active faults.

2011-12-31T23:00:00Z

Studio e realizzazione di un protocollo di compressione dati per reti di sensori sismici (Study and implementation of a data compression protocol for seismic sensor networks)
http://hdl.handle.net/2122/8359
Title: Studio e realizzazione di un protocollo di compressione dati per reti di sensori sismici (Study and implementation of a data compression protocol for seismic sensor networks)
Authors: Larocca, G.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Catania, Catania, Italia
Abstract: Managing a permanent seismic network in a country marked by active volcanoes as well as by a network of tectonic faults involves evaluating a great number of operational and efficiency parameters, especially when the risks are of different natures and can cause economic damage to the region. One example is the Permanent Seismic Network managed by the Istituto Nazionale di Geofisica e Vulcanologia (INGV), Catania Section, in Italy, which comprises about sixty seismic stations, using analog and digital technology, located around Mt. Etna, the Peloritani and Hyblean areas, southern Calabria, and the Aeolian archipelago. Digital technology increases management performance, signal-to-noise ratio and robustness, allowing gradual upgrades from analog to digital stations. On the other hand, it introduces some new problems, such as the small capacity of the transmission channel (available bandwidth), which is hard to manage and causes data-acquisition delays.
After digitization, the seismic signals in particular are compressed to make better use of the available channel. A lossless compression algorithm has a variable efficiency that depends, in general, on the kind of signal to be compressed.
We know that signals that change frequently within a time window and have a high RMS amplitude are harder to compress losslessly, and hence need more bandwidth; conversely, signals with a low RMS amplitude that change little within a time window compress better and require less channel capacity.
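This dependence of lossless compression efficiency on signal amplitude can be illustrated with a generic byte-level compressor; zlib is used here only as a stand-in, since the actual Nanometrics® instruments use their own compression scheme:

```python
import zlib
import numpy as np

def compressed_ratio(signal_counts):
    """Ratio of compressed size to raw size for 32-bit integer samples."""
    raw = np.asarray(signal_counts, dtype=np.int32).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

rng = np.random.default_rng(1)
n = 20000
quiet = rng.normal(0, 3, n)        # low-RMS background noise (small counts)
eruptive = rng.normal(0, 3000, n)  # high-RMS tremor-like signal (large counts)

r_quiet = compressed_ratio(quiet)
r_eruptive = compressed_ratio(eruptive)
assert r_quiet < r_eruptive  # the quiet signal compresses better
```

The same qualitative behavior holds for delta-based seismic codecs: higher-amplitude, faster-changing signals carry more entropy per sample and therefore consume more of the channel.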
Various volcanic phenomena during eruptive phases can cause large variations in seismic signal parameters such as RMS amplitude, and these variations increase channel-bandwidth consumption. This calls for an efficiency evaluation of the Nanometrics® transmission protocol and of the compression algorithm used by the remote-station instruments, especially in critical stages such as when a volcano is preparing to erupt.
Alternative protocols are proposed to improve on the overall quality of the Nanometrics® protocol and compression algorithm used by the RSP instrumentation. Comparisons between solutions are made by studying the relationship between the RMS amplitude of the seismic signal and its bandwidth consumption. Studying the compression algorithm and researching potential optimizations helps build a plausible evaluation of the bandwidth needed in the critical stages that are typical of active volcanoes like Etna.

2006-06-30T22:00:00Z

Multiresolution Spherical Wavelet Analysis in Global Seismic Tomography
http://hdl.handle.net/2122/8357
Title: Multiresolution Spherical Wavelet Analysis in Global Seismic Tomography
Authors: Carannante, S.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione CNT, Roma, Italia
Abstract: Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at global and regional scales are a fundamental tool for determining the geodynamical state of the Earth, showing evident correlations with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. With this work we focus on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines, often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution for overcoming this shortcoming of Fourier analysis. The fundamental idea behind this analysis is to study a signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes containing multi-scale features, discontinuities and sharp spikes.
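The extra time (or space) localization that wavelets provide over a Fourier expansion can be sketched with one level of the simple Haar transform; this is an illustrative 1D sketch, not the spherical wavelet parameterization developed in the thesis:

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar wavelet transform: approximation
    (low-pass) and detail (high-pass) coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

# A localized discontinuity ("sharp spike") in an otherwise smooth signal.
n = 256
x = np.sin(2 * np.pi * np.arange(n) / 64)
x[100] += 5.0

_, detail = haar_dwt_level(x)
# The spike dominates a single detail coefficient (time-localized)...
assert int(np.abs(detail).argmax()) == 50  # = 100 // 2
# ...whereas its Fourier energy is smeared across all frequency bins.
spectrum = np.abs(np.fft.rfft(x))
assert (spectrum > 0.5).sum() > 20
```

This locality is what makes a wavelet basis attractive for tomography: a sharp velocity anomaly perturbs few coefficients, instead of every spherical-harmonic coefficient.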
Wavelets are essentially used in two ways when applied to geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of application of wavelets in the geophysical field are the object of this work. We first use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface-wave phase-velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the Continuous Wavelet Transform in spectral analysis, starting again with synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.

2008-05-31T22:00:00Z

Analisi dei fenomeni franosi di crollo e del danneggiamento agli edifici indotti dalla sequenza sismica dell'Umbria - Marche (settembre - ottobre 1997) (Analysis of the rockfall phenomena and building damage induced by the Umbria-Marche seismic sequence, September-October 1997)
http://hdl.handle.net/2122/8000
Title: Analisi dei fenomeni franosi di crollo e del danneggiamento agli edifici indotti dalla sequenza sismica dell'Umbria - Marche (settembre - ottobre 1997) (Analysis of the rockfall phenomena and building damage induced by the Umbria-Marche seismic sequence, September-October 1997)
Authors: Marzorati, Simone; Istituto Nazionale di Geofisica e Vulcanologia, Sezione CNT, Roma, Italia
Abstract: The study of the landslides that occurred following the shock of October 14, 1997 concluded with the production of a rockfall susceptibility map. The method adopted had never before been applied to this type of landslide in Italy, both because information technology was not yet suited to processing georeferenced data, so that only qualitative indications could be provided (see the Friuli case), and because sufficient databases on the effects induced by past earthquakes were not available. The map produced has the limitation of referring to a specific earthquake, i.e. to a well-defined seismic source and magnitude. The map therefore concerns a single event; by modifying these two inputs, however, it is possible to create different scenarios, even in areas other than the one studied.
The work carried out highlighted first of all the importance of data collection: with detailed information on the territory available, a map like the one in this study can be produced in a short time using GIS tools. The digital terrain model, from which the slopes (strongly correlated with the landslides) were derived, is fundamental, as are the seismic data; indeed, instead of the law proposed here one could apply, for example, a better attenuation relationship, or consider a seismic parameter that better represents the earthquake.
In any case, the proposed study showed that rockfalls are well correlated with certain parameters; from a land-management and planning perspective, this emphasizes the predictability of this type of landslide. It is in fact necessary to predict not only an earthquake in space and time, but also the effects that an earthquake can cause, so as to reduce the risks to people.
The analysis of building damage produced several results. First, it was possible, in a relatively short time (a few months), to produce damage maps for the entire area struck by the earthquake. The proposed method evaluates damage homogeneously over the territory, considering a few distinct damage classes that are easily identifiable by photo-interpretation. The required material consists of aerial photographs from a flight carried out just after the event and of ISTAT data on residential buildings. Even though precise quantitative values cannot be provided, since minor damage is underestimated, the results of the study can nevertheless be used to identify intervention priorities and to evaluate a first allocation of reconstruction funding.

2001-03-19T23:00:00Z

Crustal fracturing field and presence of fluid as revealed by seismic anisotropy: case-histories from seismogenic areas in the Apennines
http://hdl.handle.net/2122/7970
Title: Crustal fracturing field and presence of fluid as revealed by seismic anisotropy: case-histories from seismogenic areas in the Apennines
Authors: Pastori, Marina; Università degli studi di Perugia
Abstract: During the last decades, the study of seismic anisotropy has provided useful information for the interpretation and evaluation of the stress field and of active crustal deformation. Seismic anisotropy can yield valuable information on the upper-crustal structure, the fracture field, and the presence of fluid-saturated rocks crossed by shear waves. Several studies worldwide demonstrate that seismic anisotropy is related to stress-aligned, fluid-filled micro-cracks (EDA model; Crampin et al., 1984b; Crampin, 1993).
Seismic anisotropy is an almost ubiquitous property of the Earth, and shear-wave splitting is its most unambiguous indicator, but the automatic estimation of the splitting parameters is difficult because the effect of anisotropy on a seismogram is a second-order, not easily detectable effect. Different researchers have developed automated techniques for studying shear-wave splitting: in this study, the results of different codes are compared in order to identify the best method for automatic anisotropy evaluation.
In the last three years, an automatic analysis code, “Anisomat+”, was developed, tested and improved to calculate the anisotropic parameters: fast polarization direction (φ) and delay time (δt). “Anisomat+” consists of a set of MatLab scripts able to automatically retrieve crustal anisotropy parameters from three-component seismic recordings of local earthquakes. It needs waveforms and hypocentral parameters in the format routinely archived by the Istituto Nazionale di Geofisica e Vulcanologia (INGV).
The code uses the horizontal-component cross-correlation method: a mathematical algorithm that measures the similarity of the pulse shape between the two split shear waves.
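The core idea of such a cross-correlation method, estimating the delay time from the lag that maximizes the similarity of the two pulses, can be sketched as follows; the pulse shape, sampling rate and delay are synthetic, and this is a generic illustration, not the Anisomat+ implementation:

```python
import numpy as np

def cc_delay(fast, slow, fs):
    """Estimate the delay between two shear-wave pulses from the lag
    that maximizes their cross-correlation."""
    fast = fast - fast.mean()
    slow = slow - slow.mean()
    cc = np.correlate(slow, fast, mode="full")
    lag = cc.argmax() - (len(fast) - 1)  # lag in samples
    return lag / fs                      # delay time in seconds

fs = 100.0
t = np.arange(0, 2.0, 1 / fs)
pulse = lambda t0: np.exp(-((t - t0) ** 2) / (2 * 0.05 ** 2))
fast_comp = pulse(0.80)  # fast split shear wave
slow_comp = pulse(0.95)  # slow wave, delayed by 0.15 s

dt = cc_delay(fast_comp, slow_comp, fs)
assert abs(dt - 0.15) < 1 / fs
```

In a real splitting analysis the horizontal components are first rotated through trial fast directions, and the (φ, δt) pair giving the best pulse similarity is retained.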
Anisomat+ has been compared with two other automatic analysis codes (SPY and SHEBA) and tested on three zones of the Apennines (Val d’Agri, the Tiber Valley and the L’Aquila surroundings). It was observed that, if the number of measurements is large enough, the average values of the parameters (fast direction and delay time) at each station are comparable.
The main goal in developing an automatic code was to have a tool able to work on a large amount of data in a short time, reducing the errors due to subjectivity. These two achievements are very useful and are the basis for developing quasi-real-time monitoring of the anisotropic parameters.
The anisotropic parameters resulting from the automatic computation have been interpreted to determine the geometry of the fracture field; for each area, I defined the dominant fast direction and the intensity of the anisotropy, interpreting these results in the light of the geological and structural setting and of two interpretative models of anisotropy proposed in the literature. In the first, proposed by Zinke and Zoback (2000), the local stress field and the cracks are aligned by tectonic phases and are not necessarily related to the presently active stress field; therefore the variations of the anisotropic parameters are only space-dependent. In the second, the EDA model (Crampin, 1993) and its development in the APE model (Zatsepin and Crampin, 1995), fluid-filled micro-cracks are aligned or ‘opened’ by the active stress field, and variations of the stress field may be related to the evolution of the pore pressure in time; in this case, therefore, the variations of the anisotropic parameters are both space- and time-dependent.
I found that the average fast directions in the three selected areas are oriented NW-SE, in agreement with the orientation of the active stress field, as suggested by the EDA model proposed by Crampin (1993), but also with the model proposed by Zinke and Zoback; in fact, the NW-SE direction also corresponds to the strike of the main fault structures in the three study regions. The mean values of the normalized delay time are 0.005 s/km, 0.007 s/km and 0.009 s/km for the L'Aquila area (AQU), the High Tiber Valley (ATF) and the Val d'Agri (VA), respectively, suggesting a 3-4% crustal anisotropy (Piccinini et al., 2006).
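As a back-of-the-envelope check, a normalized delay time converts to a fractional S-wave anisotropy once an average shear-wave velocity is assumed; the Vs value below is an illustrative assumption, not a value taken from the thesis, and the exact percentage obtained depends on it:

```python
# Fractional S-wave anisotropy implied by a normalized delay time:
#   anisotropy ≈ (delta_t / path_length) * Vs = dt_norm * Vs
VS_KM_S = 3.5  # assumed average crustal shear velocity (illustrative)

for area, dt_norm in [("AQU", 0.005), ("ATF", 0.007), ("VA", 0.009)]:
    pct = dt_norm * VS_KM_S * 100  # percent anisotropy along the path
    print(f"{area}: {pct:.2f}% anisotropy")
```

With this assumed Vs the three areas fall in the range of a few percent, the same order as the 3-4% crustal anisotropy quoted above.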
The spatial and temporal distributions of the anisotropic parameters were also examined in each area, leading to some innovative observations, listed below.
o The higher values of normalized delay time were observed in the zones where most of the seismic events occur. This aspect was further investigated by evaluating the average seismic rate over a period, between 2005 and 2010, longer than the time span analyzed in the anisotropy studies. This comparison highlighted that the normalized delay time is larger where the seismicity rate is higher.
o In the Alto Tiberina Fault area, the higher values of normalized delay time are related not only to the high seismicity rate but also to the presence of a tectonically doubled carbonate succession. Therefore the lithology, too, plays an important role in hosting and preserving the micro-fracture network responsible for the anisotropic field.
o Temporal variations of the anisotropic parameters have been observed and related to fluctuations of the pore-fluid pressure at depth, possibly induced by different mechanisms in the different regions: for instance, changes in the water-table level in Val d’Agri (Valoroso et al., GJI submitted), or the occurrence of the April 6th Mw=6.1 earthquake in L’Aquila (Lucente et al., 2010).
Since these variations have been recognized, it is possible to affirm that the models that best fit my results, both in terms of fast directions and of delay times, are those proposed by Crampin (1993) and Zatsepin & Crampin (1995), respectively the EDA and APE models.

2011-02-16T23:00:00Z