Science gives us a better understanding of the world we live in. Earth, our home, is a dynamic system in continuous evolution: seas advance and recede, volcanoes erupt and become extinct, mountain chains rise and erode away; these are some examples of the processes that shape the landscape. Throughout human history, processes occurring both in the solid Earth and in its fluid components have had a great influence on life on Earth, causing natural disasters, changes in climate, etc., that have conditioned the development and evolution of the species that inhabit it. Our way of life depends on the planet's resources, and how to ensure their sustainable use is one of the current challenges facing humanity. These issues, among others, fed the idea that understanding the Earth System and its interaction with humanity is vital for our survival.

The value of the Geosciences in helping us meet these challenges is unquestioned. Nevertheless, these challenges are not just geoscience research problems, but also important research problems in Mathematics, Statistics and Computer Science. Mathematics provides the rigor, language and theoretical foundations of all scientific research. For instance, we can use the theory of Plate Tectonics to illustrate the step that Mathematics provides from intuition to certainty. Plate Tectonic theory had its beginnings in the first decade of the twentieth century, when Alfred Wegener proposed his theory of “continental drift” (Wegener 1912). At the time, Wegener's idea was highly controversial, in part because he could not provide a mechanism for the motion of the continents, only observational evidence. McKenzie applied his physical–mathematical knowledge to study the viscosity of the lower mantle (McKenzie 1966), providing the physical–mathematical basis for the existence of two layers in the mantle, each of them in motion, which contribute to continental drift. He refuted two other conceptual models of the Earth: that the Earth is a homogeneous sphere, and that it has an inviscid core and a homogeneous mantle. McKenzie's work showed that the Earth is far more dynamic than previously thought and added to the growing awareness that convection in the mantle was driving continental drift.

Geoscientists use a wide range of modern tools for observing the Earth and for understanding its dynamic evolution. For example, the high spatio-temporal resolution of remote sensing data can provide an overwhelming volume of data in exquisite detail. Nevertheless, some processes can only be observed indirectly and only infrequently. Bearing in mind that the Earth is a dynamic system with many interacting parts, as described above, it is clear that we need new conceptual approaches to characterize complex systems that vary strongly in space and time in ways not accounted for in current paradigms. In such a context, modeling is the core of the interaction between Geosciences and Mathematics. Models are approximations primarily based on physical arguments that require a rigorous mathematical approach: approximating solutions, checking the reliability of the model to describe physical phenomena, using models to predict quantities on which some conclusions can be drawn, etc. In addition, new mathematical tools are needed to process and invert the new data sets acquired directly in the Earth or using remote sensing from air or space.

For instance, important issues regarding life on an active Earth include earthquake and eruption prediction. Although Geoscience is moving toward predictive capabilities for volcanic eruptions, given the growing amount of data and a better understanding of the causes, we still do not know why volcanic deformation does or does not culminate in an eruption. Understanding what determines the rates of magma accumulation in the chamber and what mechanisms make magmas eruptible could help to improve prediction capabilities. These mechanisms involve the inclusion of magmatic processes in the kinematic models currently used to integrate geophysical, geological and geodetic observations (e.g., Anderson and Segall 2013). On the other hand, understanding earthquakes and their hazards is a major challenge in the Earth sciences. Nowadays, it has become clear that the traditional statistical paradigm used to describe earthquake behavior (large earthquakes are spatially focused and temporally quasi-periodic) often fails, particularly within continents. Little is still known about the physics of the faults where earthquakes occur, how faults form, or why and how earthquakes migrate between faults (e.g., Keilis-Borok 2009). Again, data integration and modeling for quantitative interpretation are required to understand such processes.

Other grand challenges in the Earth Sciences are related to the Earth's interior, since our rock-sampling reach is limited to the upper tens of kilometers of the Earth's crust. It is now recognized that large-scale processes such as Plate Tectonics are driven by the nature of the materials that make up the planet down to the smallest atomic scales, as is thought, for instance, for the triggering of earthquakes (Lay and Garnero 2011). The role of mantle plumes and their depth of origin are part of an intense debate. Subduction cycles, and why subduction zones develop, are open questions, also related to Earth structure and geodynamics. New mathematical approaches capable of including more physical complexity in convection models are needed for the quantitative interpretation of geophysical, geochemical and geological data from a holistic perspective in terms of geodynamically relevant parameters such as temperature, composition and rheology (e.g., Cammarano et al. 2011; Afonso et al. 2015; Foulger et al. 2015).

Some illustrative examples that are well understood by the general public are associated with climate and the habitability of the planet (Henderson-Sellers and McGuffie 1987, http://www.ipcc.ch). The mean global surface temperature of the Earth has risen since the beginning of the industrial age with the advent of CO2 and other greenhouse gas emissions. The potentially serious consequences of global warming mark the need to understand, for instance, the fate of the Atlantic Meridional Overturning Circulation, whose reduction is likely to have strong implications for subtropical Atlantic temperatures and the position of the intertropical convergence zone (e.g., Smeed et al. 2014). Geological proxies have revealed that the climate history of the planet is a combination of both variability and stability. Nevertheless, future climate projections depend on new mathematical advances to understand the thermodynamic and transport properties of the Earth's atmospheric–oceanic system. Many environmental issues, and the presence and placement of many of the Earth's resources, involve the role of fluid flow and transport. Landscape evolution and the transport of environmental fluxes over the ground surface are both scientific and social problems. It remains a challenge to include fine-scale features that are very important for modeling sediment transport (e.g., individual hill slopes, channel-river bank morphology) because of the strong nonlinearities of the sediment-transport laws (e.g., Garcia-Castellanos and Jiménez-Munt 2015). The ability to assess and extract minerals, petroleum, natural gas and groundwater, and to safely dispose of wastes, depends on understanding the flow of fluids. In hydrocarbon reservoirs and volcanic systems, the simulation of multiphase fluid flow is an important problem due to the large viscosity and density ratios involved (Longo et al. 2012).

Focusing on the mathematical problems arising in the context of addressing such geoscientific challenges provides the opportunity to bring the expertise of geoscientists together with that of mathematicians to develop important insights and solve these challenges, demonstrating the mutual enrichment and the link between theory and applications. To this end, the workshop “Mathematics and Geosciences: Global and Local Perspectives” was organized by the Institute of Mathematical Sciences (ICMAT) and the Institute of Geosciences (IGEO) under the patronage of the Spanish Council for Scientific Research (CSIC), Madrid Autonomous University (UAM), the Institute of Interdisciplinary Mathematics (IMI), Madrid Complutense University (UCM) and the Technical University of Madrid (UPM). The workshop was held at ICMAT in Madrid from 4th to 8th November 2013 to facilitate a fruitful interaction among a broad and geographically distributed group of mathematicians and geoscientists. It was one of the events of the Mathematics of Planet Earth year organized in Spain (MPE2013 2014). A first volume was published recently (Díaz et al. 2015).

This second volume of the Topical Issue includes some contributions presented at the meeting, together with other related papers. It comprises 18 papers on different topics relating Mathematics and Geosciences.

Holliday et al. apply a previously presented method for calculating the probabilities of large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides, which can be unexpected and devastating, to the calculation of large earthquake probabilities in California–Nevada, USA. The method counts the number of small events since the last large event and then converts this count into a probability using a Weibull probability law. In earlier work they considered a fixed geographic region and assumed that all earthquakes within that region, large as well as small, were perfectly correlated. Here the model is extended to systems in which the events have a finite correlation length. They modify the previous results by employing the correlation function for near-mean-field systems having long-range interactions, an example of which is earthquakes with elastic interactions. They then construct an application of the method and show examples of computed earthquake probabilities.
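As a rough illustration of this kind of counting approach (a generic form, not necessarily the exact parametrization used by Holliday et al.), the conditional probability of a large event after n small events have occurred since the last large one can be written with a Weibull law in the small-event count,

\[ P(n) = 1 - \exp\!\left[-\left(\frac{n}{\tau}\right)^{\beta}\right], \]

where τ is a characteristic number of small events between large events and β is a shape parameter controlling how sharply the probability rises with the count.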

The work carried out by Cho et al. explores the use of the Laplace–Fourier-domain full waveform inversion technique on deep-sea seismic data. This is a difficult task, since the deep water layer reduces the amplitude of the signals. To overcome this problem and reduce the water layer's effect, they perform a downward continuation and build a macro-velocity model through refraction tomography, which is then used as the initial model for the Laplace–Fourier inversion. This scheme is applied to both synthetic and field data from Sumatra. Limitations of the technique are discussed in the paper.
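For orientation, and using the standard definition rather than the specific implementation of Cho et al., the Laplace–Fourier-domain wavefield is obtained by transforming the time-domain wavefield u(t) with a complex frequency,

\[ \tilde{u}(s) = \int_0^{\infty} u(t)\, e^{-st}\, dt, \qquad s = \sigma + i\omega, \]

where the damping constant σ emphasizes the early arrivals and suppresses late, low-amplitude energy, which is what allows the inversion to recover smooth, long-wavelength velocity structure before the higher-frequency details.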

Khazaei et al. present a discrete element model, using particle flow code, that allows direct modeling of stick–slip behavior on pre-existing weak planes such as joints, beddings, and faults. The model is used to simulate a biaxial sliding experiment from the literature on a saw-cut specimen of Sierra granite with a single fault. They represent the fault by the smooth joint contact model. In addition, they develop an algorithm to record the stick–slip induced microseismic events along the fault. After verifying that the results compare well with laboratory data, they conduct a parametric study to investigate how the model's behavior evolves with factors such as model resolution, particle elasticity, fault coefficient of friction, fault stiffness, and normal stress. The results show a decrease in the shear strength of the fault in models with smaller particles, a smaller coefficient of friction of the fault, harder fault surroundings, softer faults, and smaller normal stress on the fault.

Edge detection is a useful tool in the interpretation of potential field data, and most existing edge detection filters are functions of first-order horizontal and vertical derivatives. Ma et al. propose step-edge detection filters to improve the resolution of edge detection results, which use functions of derivatives of different orders to accomplish the edge detection task. They demonstrate the proposed filters on synthetic potential field data, and the results show that the new methods can recognize the edges of the sources more precisely and clearly. They also discuss the performance of the different step-edge detection filters and apply the proposed filters to real potential field data.
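Typical first-order filters of this kind (standard forms given here for context, not the new step-edge filters proposed by Ma et al.) are the total horizontal derivative and the tilt angle of a potential field f,

\[ \mathrm{THD} = \sqrt{\left(\frac{\partial f}{\partial x}\right)^{2} + \left(\frac{\partial f}{\partial y}\right)^{2}}, \qquad \mathrm{Tilt} = \arctan\!\left(\frac{\partial f/\partial z}{\mathrm{THD}}\right), \]

whose maxima (or zero crossings, in the case of the tilt angle) are taken to outline the edges of the causative sources.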

The paper by Eshagh presents some integral formulae for recovering the sub-crustal stress from terrestrial gravimetric data. The proposed formulation follows Runcorn's theory, but from the practical point of view it allows the inclusion of high-degree gravity models. He develops three novel methods to recover the stress function (S) from terrestrial gravity anomalies: (1) direct integration with limited spectral kernel, (2) integral inversion with closed-form kernels and (3) integral inversion with limited spectral kernel. Finally, he applies them to modeling the sub-crustal stress in Iran and its surrounding countries, showing that these integral methods are useful when the terrestrial gravity data of the area are used for estimating the stresses.

The gravity recovery and climate experiment (GRACE) mission has shown that it is possible to make detailed gravity measurements from space for climate dynamics and other purposes. To build the groundwork for a more advanced satellite-based gravity survey, the level of accuracy needed for precise estimation of fault slip in earthquakes must be estimated. Shultz et al. turn to numerical simulations of earthquake fault systems and use these to estimate gravity changes. The current generation of Virtual California (VC) simulates faults of any orientation, dip, and rake. They discuss these computations and the implications they have for accuracies needed for a dedicated gravity monitoring mission. Preliminary results are in agreement with previous results calculated from an older and simpler version of VC. Computed gravity changes are in the range of tens of μGal over distances up to a few hundred kilometers, near the detection threshold for GRACE.

The paper by Pavón-Carrasco et al. is focused on the reliability and fidelity of archaeomagnetic and volcanic records to recover the past evolution of the Earth's magnetic field. The authors compiled the palaeomagnetic data available for the last 400 years and compared them with historical predictions. They used the historical model GUFM1, based on a massive compilation of historical data from observations picked up by seamen in naval shipping and geomagnetic observatories, to provide an accurate picture of the directional geomagnetic field elements. The results show statistical agreement between the archaeomagnetic data and the directions given by the geomagnetic field model, leading to the conclusion that heated archeological materials are good recorders of the past Earth's magnetic field. In contrast, volcanic materials provide directions affected by an inclination shallowing. This systematic error is also observed when comparing recent magnetic records from lava flows with the International Geomagnetic Reference Field (IGRF) model. On average, the inclination error is around 3°, with inclinations systematically lower than the predictions of the historical geomagnetic field model. Although the mean flattening deviation is low, this error should be taken into account when the accurate spatial and temporal evolution of the ancient geomagnetic field is analyzed.

Sánchez-Reales, Vigo and Trottini study, for the first time, variations in absolute surface geostrophic currents (SGC) using satellite data only. Their approach combines 18 years of altimetry data, which provide reliable measurements of absolute sea level (ASL), with a Gravity field and steady-state Ocean Circulation Explorer (GOCE) geoid model to obtain the dynamic topography, and achieves unprecedented precision and accuracy. They overcome the main limitations of existing approaches based solely on altimetry data and of approximations based on in situ data. Features of the annual variations of the SGC are also addressed. As a result of their study they provide a new absolute SGC climatology in the form of a 52-week data set of surface current fields, gridded at a quarter-degree longitude and latitude resolution and resolving spatial scales as short as 140 km.
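In essence (using the standard geostrophic relations, rather than the authors' full processing chain), the surface geostrophic velocities follow from the dynamic topography η, the difference between the altimetric sea surface and the GOCE geoid, through

\[ u_g = -\frac{g}{f}\frac{\partial \eta}{\partial y}, \qquad v_g = \frac{g}{f}\frac{\partial \eta}{\partial x}, \qquad f = 2\Omega \sin\varphi, \]

where g is the gravitational acceleration, f the Coriolis parameter, Ω the Earth's rotation rate and φ the latitude.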

Escapa et al. review and discuss the inconsistency of the IAU2000 non-rigid Earth nutation model (MHB). Given the complexity of the Earth's rotation, a twofold approach is usually used to model its motion: the long-term behavior is modeled by the theory of precession, whereas the theory of nutation is used to model the short-term behavior. The problem with such an approach arises when different values of the parameters describing the motion are used in the two theories. This lack of consistency may give rise to numerical differences incompatible with present-day accuracy requirements for Earth Orientation Parameter predictions. Here, the authors discuss the effects of considering slightly different values of the dynamical ellipticity in the precession and nutation theories.

With the increased geoid resolution provided by the GOCE mission, the ocean's mean dynamic topography (MDT) can now be estimated with an accuracy not previously attainable by geodetic methods. However, an altimetry-derived MDT still needs filtering to remove short-wavelength noise, unless integrated methods are used in which the three quantities are determined simultaneously using appropriate covariance functions. Sánchez-Reales, Andersen and Vigo study nonlinear anisotropic diffusive filtering applied to the ocean's MDT, and a new approach based on edge-enhancing diffusion (EED) filtering is presented. EED filters make it possible to control the direction and magnitude of the filtering, with a subsequent improvement in the computation of the associated surface geostrophic currents (SGCs). Applying this method to a smooth MDT and to a noisy MDT, both for the same northwestern study region, they find that EED filtering preserves all the advantages that the Perona–Malik filter has over standard linear isotropic Gaussian filters. Moreover, EED is shown to be more stable and less influenced by outliers. This suggests that the EED filtering strategy is to be preferred, given its ability to control and preserve the SGCs.
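For reference, and only as the generic formulation on which such filters are based, Perona–Malik diffusion evolves the field u according to

\[ \partial_t u = \nabla\cdot\big( g(|\nabla u_{\sigma}|^{2})\, \nabla u \big), \]

with a decreasing diffusivity g that becomes small across strong (presmoothed) gradients ∇u_σ, while edge-enhancing diffusion replaces the scalar diffusivity by a diffusion tensor D(∇u_σ) that reduces smoothing across edges but still allows it along them.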

Galán de Sastre and Bermejo present in their paper a Lagrange–Galerkin hp-finite element method to compute the numerical solution of a nonhydrostatic ocean model. This model is composed of the incompressible Navier–Stokes equations with Coriolis and buoyancy terms, two scalar advection–diffusion equations for temperature and salinity, and an equation of state for the density. To integrate the equations of the nonhydrostatic model, the authors propose a second-order projection method in combination with a Lagrange–Galerkin method for the time discretization along the trajectories of the fluid particles, and a higher-order hp-finite element method for the spatial discretization of the differential operators. The Lagrange–Galerkin method yields a Stokes-like problem, the solution of which is computed by a second-order rotational splitting scheme that separates the calculation of the velocity and the pressure, the latter being decomposed into hydrostatic and nonhydrostatic components. They focus on the behavior of their method for density-driven flows, because these are relevant for many ocean phenomena such as the water exchange through the Strait of Gibraltar.
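Schematically (the generic form of the approach, not the authors' exact scheme), a Lagrange–Galerkin method discretizes the material derivative along the characteristics of the flow,

\[ \frac{Du}{Dt}\bigg|_{t^{n+1}} \approx \frac{u^{n+1} - u^{n}\circ X^{n}}{\Delta t}, \]

where X^n(x) denotes the departure point at time t^n of the fluid trajectory that arrives at x at time t^{n+1}; the advection terms are thereby absorbed into the trajectory computation, leaving a Stokes-like problem at each time step.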

The paper by Cea and Rodríguez develops coupled hydrological–hydraulic models that make it possible to analyze water movement in a watershed as well as the rainfall-runoff process. They note that in many cases local rainfall is principally responsible for damage, for example in some Spanish Mediterranean regions such as the area around the town of Alginet (Spain). The authors solve the well-known Saint–Venant equations with a finite volume scheme on an unstructured mesh to simulate the water transfer introduced upstream of the modeled area, and they incorporate different hydrological situations by calculating the runoff hydrograph at cell level, taking into account four processes: precipitation, losses, transformation of excess rainfall into direct runoff, and base flow. The results of their simulations show that the models can predict the water evolution and can be considered a promising tool for the simulation of rainfall-runoff processes.
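For completeness, the two-dimensional Saint–Venant (shallow water) equations solved by this kind of finite volume model take, in a standard conservative form (given here as a generic reference, not necessarily the exact formulation implemented by Cea and Rodríguez),

\[ \partial_t h + \nabla\cdot(h\mathbf{u}) = R - I, \qquad \partial_t(h\mathbf{u}) + \nabla\cdot\!\left(h\mathbf{u}\otimes\mathbf{u} + \tfrac{1}{2} g h^{2}\, \mathbf{I}\right) = -\,g h \nabla z_b - \frac{\boldsymbol{\tau}_b}{\rho}, \]

where h is the water depth, u the depth-averaged velocity, R − I the rainfall minus infiltration source term, z_b the bed elevation, τ_b the bed friction stress and ρ the water density.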

Díaz and Gómez-Castro present a mathematical analysis of the shape of chemical reactors, in particular of reactors designed for the treatment of wastewater. They simplify the modeling by assuming a single chemical reaction with monotone kinetics, leading to a parabolic equation with a not necessarily differentiable nonlinearity. They assume that an ideal homogenization process has been applied (by passing to the limit of vanishing porosity of the solid bed), so that the chemical reaction can be taken as distributed over the reactor cylinder. Their main goal is to give a proper conceptual justification of why these reactors are wide and low, using natural techniques of homogenization in partial differential equations.
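A minimal caricature of this kind of homogenized model (given only for illustration; the paper treats a more general setting) is the parabolic problem

\[ \partial_t u - \nabla\cdot(D \nabla u) + k\,\beta(u) = 0 \quad \text{in the reactor domain}, \]

where u is the concentration of the substance being treated, D a diffusion coefficient, k an effective reaction rate and β a monotone but not necessarily differentiable kinetics, for instance β(u) = u^q with 0 < q < 1.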

Cannavò and Palano provide a technical report on a new version of a previous software tool, PlatE-Motion 2.0 (PEM 2.0). This tool, initially developed for easy-to-use file exchange with the GAMIT/GLOBK software package, allows the Euler pole parameters to be inferred by inverting the observed velocities at a set of sites located on a rigid block. The tool is open source and freely available to the scientific community.
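The forward relation underlying this kind of inversion is the standard rigid-plate formula: for a block rotating with Euler vector ω, the predicted velocity at a site with geocentric position vector r is

\[ \mathbf{v} = \boldsymbol{\omega} \times \mathbf{r}, \]

which is linear in ω, so the pole parameters can be estimated from the observed site velocities by least squares (this is the textbook relation; the details of the implementation in PEM 2.0 may differ).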

Cámara et al. study the use of fractal dimensions for the identification of bedrock lithology. Their starting point is that geographic information system (GIS) technologies and the increasing availability and resolution of digital elevation data have greatly facilitated the delineation, quantification, and study of drainage networks. It is well known that drainage networks can exhibit different drainage patterns depending on the hydrogeological properties of the underlying materials. This study investigates the possibility of inferring geological information about the underlying material from fractal and linear parameters describing drainage networks automatically extracted from 5-m resolution LiDAR digital terrain model (DTM) data. According to the lithological information (scale 1:25,000), the study area comprises 30 homogeneous bedrock lithologies, the lithological map units (LMUs). Their results imply that the information included in a 5-m resolution LiDAR DTM, and the appropriate techniques to manage it, are the only inputs required to identify the underlying geological materials.
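As a hint of how a fractal descriptor of a drainage network can be obtained, the sketch below estimates a box-counting dimension from a binary raster of network cells in Python; it is a generic illustration, with box sizes chosen arbitrarily, and is not the specific procedure or parameters used by Cámara et al.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a binary raster.

    mask  : 2-D boolean array, True on drainage-network cells.
    sizes : box edge lengths, in pixels.
    """
    counts = []
    for s in sizes:
        # Pad the raster so its dimensions are divisible by the box size.
        ny = int(np.ceil(mask.shape[0] / s)) * s
        nx = int(np.ceil(mask.shape[1] / s)) * s
        padded = np.zeros((ny, nx), dtype=bool)
        padded[:mask.shape[0], :mask.shape[1]] = mask
        # Count the s-by-s boxes that contain at least one network cell.
        blocks = padded.reshape(ny // s, s, nx // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    # The dimension estimate is the slope of log N(s) versus log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

For a space-filling network the estimate approaches 2, whereas a single straight channel gives a value close to 1, so the dimension summarizes how densely the network dissects the terrain.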

The Uruk archeological site, located in the Al-Muthanna Governorate of southern Iraq, is studied by Al-Khersan et al. using integrated geophysical methods, ground penetrating radar (GPR) and electrical resistivity tomography (ERT), to image the buried historical structures. The GPR images show large radar attributes characterized by continuous reflections of different widths. The GPR attributes at shallower depths mainly represent the upper part of the Babylonian houses that can be found throughout the study area. In addition, the radargrams reveal objects such as buried items, trenches and pits, mainly concentrated near the surface. The ERT results show the presence of several anomalies at different depths, generally with low resistivities. The map of the distribution of archeological anomalies and the 3D view of the foundations in the study area obtained with the GPR and ERT techniques clearly show the characteristics of the Babylonian remains. A contour map and 3D view of Uruk show that the archeological anomalies are concentrated mainly in the NE part of the district, with wall heights ranging between 6 and 8 m and reaching more than 10 m in places. In the other directions there are fewer walls, with lower heights of 4–6 m, in some places reaching down to the wall foot.

A new mathematical model for patchy landscapes in drylands is introduced in the paper by Kinast et al. The model concerns the dynamics of biogenic soil crusts and their mutual interactions with vegetation growth. Spatially uniform and spatially periodic solutions representing different vegetation-crust states are identified and mapped along the rainfall gradient. A significant difference between the current and earlier models of patchy landscapes is found in the bistability range of vegetated and unvegetated states; the incorporation of crust dynamics shifts the onset of vegetation patterns to a higher precipitation value and increases the biomass amplitude. This new model may shed new light on the effects of biogenic crusts on the response of dryland ecosystems to rainfall variability, and may improve understanding of desertification processes.

The paper by San José Martínez et al. deals with the study of the pore space structure of soil from X-ray computed tomography (CT) images of soil columns. Their study uses mathematical morphology as the source of a wide range of mathematical techniques. They provide a guide to designing the process from image analysis to the generation of synthetic models of soil structure, in order to investigate key features of flow and transport phenomena in soil. In this work, they explore the ability of morphological functions, built from Minkowski functionals evaluated on parallel sets of the pore space, to characterize and quantify the pore space geometry of columns of intact soil. These morphological functions seem to discriminate the effects on soil pore space geometry of contrasting management practices in a Mediterranean vineyard.
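For context (these are the standard definitions, given here independently of the specific construction in the paper), the Minkowski functionals of a three-dimensional body X are its volume V(X), its surface area S(X), the integral of its mean curvature M(X) and its Euler–Poincaré characteristic χ(X); the morphological functions are then obtained by evaluating these four quantities on the parallel sets

\[ X_r = \{ x : d(x, X) \le r \}, \]

i.e., on dilations of the pore space, and studying them as functions of the radius r.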