Our recent work emphasizes some aspects of using a distributed model that should always be kept in mind when planning simulations with GEOtop.
Space and Time Resolutions: How are the space and time resolutions selected?
In terms of spatial discretization, the model uses grid-based DEMs with varying depths for individual grid boxes, together with a temporal resolution for each calculation or forcing (e.g. Richards equation, energy balance, channel flow time steps, etc.). What are the considerations that led to the chosen space and time resolutions and to the resulting computational burden, also in terms of the numerical solution strategies adopted?
Addressing subgrid variability. Once the spatial and temporal resolutions are chosen, the modeler should ask whether spatial variability within the grid cell is addressed in a consistent way. Otherwise, inconsistent results can be produced.
As a general rule, distributed models are computationally expensive. The authors should address the computational limitations, if any, of the model, in particular for watershed applications which involve comparison to remotely sensed imagery. What tradeoffs are required between computational expense and the space-time resolution of the model?
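The cost tradeoff can be made concrete with a back-of-the-envelope estimate. The sketch below, with purely illustrative domain size, cell sizes, and time steps (none taken from an actual GEOtop setup), shows how refining the grid while shortening the time step multiplies the work:

```python
# Back-of-the-envelope cost of refining a distributed-model grid.
# All numbers are illustrative assumptions, not GEOtop measurements.

def grid_cost(domain_km2, cell_m, layers, dt_s, sim_days):
    """Return (n_cells, n_steps, work) for a uniform grid.

    'work' counts one unit per cell per time step.
    """
    cells = domain_km2 * 1e6 / cell_m**2 * layers
    steps = sim_days * 86400 / dt_s
    return cells, steps, cells * steps

coarse = grid_cost(10.0, 100.0, 10, 3600, 365)  # 100 m cells, hourly steps
fine = grid_cost(10.0, 25.0, 10, 900, 365)      # 25 m cells, 15 min steps
print(f"cost ratio fine/coarse: {fine[2] / coarse[2]:.0f}x")  # → 64x
```

Quadrupling the linear resolution multiplies the cell count by 16, and the shorter stable time step multiplies the step count by 4, so the total work grows by a factor of 64 before even considering the slower convergence of the nonlinear solvers.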
Non-calibrated parameters: A sound literature review should support the choice of the model parameters that are kept fixed. Many of them, especially those regarding soil properties, including soil depth, are, however, derived from a mix of field measurements, their local estimation, and their spatial extension (for this, see also the paragraph below). These procedures can vary from case study to case study, depending also on the data available. Some suggestions and procedures are, however, given in the paper by Simoni et al. (2006) and, more extensively, in the report of the work for Sauris (Armanini et al., 2006) and in Simoni (2007). It can be seen that many procedures are still open issues, which makes their discussion all the more important. Parameters concerning the atmospheric boundary layer (ABL) have an introductory discussion in Endrizzi (2007), but clearly the discussion remains open.
Calibration Strategy: A calibration strategy should be documented. It should explicitly indicate over what time periods the calibration is performed (e.g. event, seasonal, annual), what data are used for calibration (e.g. discharge, soil moisture), and which parameters are related to which data sets or portions of the data (e.g. hydrograph peak or recession). In many hydrological studies a split-sample test is used to validate the calibrated model against events not used in the calibration period. Keep in mind that calibration to a single flood event is, in any hydrological model, an extremely weak test of model performance.
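A split-sample test needs an explicit skill score to compare calibration and validation periods. One common choice (an illustration, not a GEOtop prescription) is the Nash-Sutcliffe efficiency; the discharge series below are made up:

```python
import numpy as np

# Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is no
# better than the observed mean. The discharge values are synthetic.

def nse(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    num = np.sum((observed - simulated) ** 2)
    den = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - num / den

obs = np.array([1.0, 3.0, 8.0, 5.0, 2.0, 1.5])  # observed discharge
sim = np.array([1.2, 2.7, 7.5, 5.4, 2.3, 1.4])  # simulated discharge
print(round(nse(obs, sim), 3))  # → 0.982
```

The same score should be reported separately for the calibration and the validation periods; a large drop between the two is the signature of an over-fitted parameter set.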
Model initialization. Modelers need to adequately describe initial conditions and be aware of their effects on the simulations; this is particularly critical for a distributed model with vertical profiles of soil moisture and temperature and a water table position. For example, the model should be run for a sufficiently long period before the analysis period to ensure both static and dynamical equilibration of the surface and subsurface states.
This can be performed by running a drainage experiment starting from totally saturated conditions in each basin (with no ET or rainfall) and allowing the basin to drain for a sufficiently long period. This leads to a 'static' equilibrium condition that is particular to each terrain, soil depth, and channel network combination. A 'dynamical' equilibration can be achieved by forcing the model repeatedly with the same meteorological conditions (periodic forcing) for a sufficiently long time and then analyzing the results from a period after various periodic cycles have passed. This allows each combination of prognostic quantities to dynamically adapt to the forcing.
Both techniques minimize initialization errors. To be specific: two quantities adapt very slowly to "mean conditions", namely the water content and the temperature in the ground, say below 1 m. This implies that incorrect initial conditions at the bottom layer can influence the time evolution of the system for years. The dynamical equilibrium to be reached can thus be very far from an arbitrary set of initial conditions, and the modeler always needs to make an "educated guess" for the simulations to be successful. A rule of thumb for the initial soil moisture distribution is to take an equilibrium condition, which implies a hydrostatic distribution of pressure in both the vadose and saturated zones (e.g. Cordano and Rigon, 2006). The latter condition, in turn, requires guessing the initial position of the water table, which could be below the bottom boundary of the control volume. Obviously some measurements of this quantity would help; however, local (point) measurements require a method for their spatial extension (see below). Whether the water table is above or below the bottom layer, a flux or gradient condition must be given at the bottom. One reasonable way to build this condition is to use the state of the system itself, especially to provide the hydraulic conductivity. Clearly this boundary condition is dynamic, varying with the soil moisture content at the bottom. Please note that assuming a constant water content throughout the ground layers would imply a constant water input into the considered volume.
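The hydrostatic rule of thumb can be sketched as follows: given a guessed water table depth, the pressure head varies linearly with depth, and a retention curve (here van Genuchten) gives the corresponding water content. All parameter values below are illustrative, not recommended soil properties:

```python
import numpy as np

# Hydrostatic initial pressure-head profile and the corresponding
# van Genuchten water content. Parameter values are illustrative only.

def hydrostatic_profile(depths_m, water_table_m,
                        theta_r=0.05, theta_s=0.45, alpha=2.0, n=1.5):
    """depths_m: node depths below the surface (m, positive downward)."""
    psi = depths_m - water_table_m        # pressure head (m): < 0 above WT
    m = 1.0 - 1.0 / n
    theta = np.where(
        psi < 0.0,
        theta_r + (theta_s - theta_r) * (1.0 + (alpha * np.abs(psi))**n)**(-m),
        theta_s,                          # saturated below the water table
    )
    return psi, theta

z = np.array([0.1, 0.5, 1.0, 1.5, 2.0])            # node depths (m)
psi, theta = hydrostatic_profile(z, water_table_m=1.2)
print(np.round(theta, 3))  # water content increases with depth
```

Nodes below the guessed water table are saturated; above it the water content decreases smoothly toward the surface, exactly the equilibrium profile the rule of thumb asks for.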
A rule of thumb for the bottom temperature is to assign at some depth (1.2 m, for instance) the mean annual air temperature (which is spatially varying). These last hints are of general validity; however, for particular sets of simulations, other choices can be made. What is important is to consciously discuss the choices made.
Spatial series of soil properties
These include soil depth, soil permeability, and the van Genuchten parameters (if the van Genuchten parametrization has been chosen). Soil depth can be assessed locally by measurements and, assuming an equilibrium soil profile, by using the Heimsath theory as in Bertoldi et al. (2006). Many aspects of the issue still need to be addressed. The estimation of the other quantities is also a very relevant topic. A. Bellin has made major contributions on the aspects related to saturated soil conductivity. Other clues can be found in the literature by David Russo (mainly in Water Resources Research).
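For the equilibrium soil profile, a Heimsath-type exponential soil-production function balanced against a steady lowering rate yields a closed-form equilibrium depth. The sketch below uses made-up production and erosion rates:

```python
import math

# Equilibrium soil depth from an exponential soil-production function,
# production = P0 * exp(-h / h0), balanced against a steady lowering
# rate E. P0, h0, and E below are illustrative, not calibrated values.

def equilibrium_soil_depth(P0, h0, E):
    """Depth h where production equals erosion: P0 * exp(-h/h0) = E."""
    if E >= P0:
        return 0.0       # erosion outpaces production: bare bedrock
    return h0 * math.log(P0 / E)

h = equilibrium_soil_depth(P0=0.08e-3, h0=0.5, E=0.02e-3)  # rates in m/yr
print(f"{h:.2f} m")  # → 0.69 m
```

Applied cell by cell with a spatially varying erosion rate (e.g. curvature-driven), this gives a first-guess soil depth map consistent with the equilibrium assumption.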
Spatial time series of meteorological data
A prerequisite for any model is to provide it with the best input data reasonably possible. If these data are of poor quality, then the entire hydrologic simulation is flawed from the outset.
Particularly in mountainous areas, where complex interactions among topography, vegetation canopies, and climate exist, it is essential that spatial interpolation methods accounting for these effects be applied to estimate the spatial fields of meteorological input. A good and up-to-date reference for interpolation methods is Garen and Marks (2005) (and references therein). In that paper, the authors show how a combination of limited measurements, models, and carefully applied spatial interpolation methods can be used to develop the spatial field time series of the forcing data required to simulate the development and melting of the seasonal snow cover. The arguments of the paper apply with almost no change to the general case of simulating the hydrological cycle. The meteorological input includes spatial field time series of precipitation, air temperature, dew point temperature, wind speed, and solar and thermal radiation. These variables have particular characteristics and levels of data availability that make it necessary to use a variety of procedures to develop spatial fields of each. It is also essential to consider the effects of the forest canopy on the solar and thermal radiation.
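As a minimal illustration of terrain-aware interpolation in the spirit of Garen and Marks (2005), air temperature can be detrended with an assumed lapse rate, the residuals interpolated by inverse distance, and the trend restored at each grid cell. The stations, elevations, and lapse rate below are invented:

```python
import numpy as np

# Elevation-detrended inverse-distance interpolation of air temperature,
# a minimal sketch of terrain-aware interpolation. Station data and the
# lapse rate are made up for illustration.

def detrended_idw(st_xy, st_z, st_t, grid_xy, grid_z, lapse=-0.0065, p=2.0):
    """Interpolate station temperatures st_t (deg C) onto grid points."""
    resid = st_t - lapse * st_z                  # remove elevation trend
    d = np.linalg.norm(grid_xy[:, None, :] - st_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6)**p             # inverse-distance weights
    r = (w * resid).sum(axis=1) / w.sum(axis=1)  # interpolate residuals
    return r + lapse * grid_z                    # restore the trend

st_xy = np.array([[0.0, 0.0], [5000.0, 0.0], [0.0, 5000.0]])  # m
st_z = np.array([500.0, 1500.0, 2500.0])                      # m a.s.l.
st_t = np.array([12.0, 6.1, -0.5])                            # deg C
t = detrended_idw(st_xy, st_z, st_t,
                  np.array([[2500.0, 2500.0]]), np.array([1800.0]))
print(np.round(t, 1))
```

Interpolating the detrended residuals rather than the raw temperatures keeps a high-elevation grid cell from inheriting the warm values of distant valley stations; the same detrend-interpolate-restore pattern extends to dew point and other elevation-dependent variables.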
Mass is conserved. Input water must not be lost: it must equal the sum of storage and outflow, in all of their forms.
Energy must be conserved (if the energy budget is evaluated): this is much more difficult to assess; input energy must equal the energy storage plus the energy outflow and dissipation (usually the thermal effects of friction are neglected).
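The water-balance closure can be checked with a one-line residual; the numbers below are illustrative stand-ins for actual model outputs:

```python
# Closing the water balance over the simulation period: precipitation in
# must equal evapotranspiration + discharge + storage change. All terms
# are illustrative depth equivalents (mm) over the same period.

def water_balance_residual(precip, et, discharge, d_storage):
    """Residual (mm); should be near zero for a conservative model run."""
    return precip - et - discharge - d_storage

res = water_balance_residual(precip=950.0, et=430.0, discharge=470.0,
                             d_storage=48.0)
print(f"residual: {res:.1f} mm ({abs(res) / 950.0:.1%} of input)")
```

A residual that grows with simulation length, rather than staying at the round-off level, points at a leaking boundary condition or a solver tolerance set too loosely; the energy budget can be closed with the same kind of bookkeeping.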
Validation strategies I - Make some null hypotheses
Regarding the prognostic mean and variance of the quantities: the modeler should state that they equal the measured quantities and validate this statement statistically (are the mean values of observed and simulated data the same? Are the variances the same?). A possible validation of the distribution of the quantities can follow (are the "bulk" distributions the same?). For instance, in our work on Little Washita we found that soil moisture distributions derived from remote sensing are significantly different from the modeled ones, yet the model reproduces the ground truth with high fidelity. In fact, remotely sensed data are not measurements but models themselves.
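These null hypotheses map directly onto standard two-sample tests. A sketch using synthetic samples (equal means, different spreads) with SciPy:

```python
import numpy as np
from scipy import stats

# Null-hypothesis checks of modeled vs. observed fields: equal means
# (Welch t-test), equal variances (Levene), and equal bulk distributions
# (two-sample Kolmogorov-Smirnov). Both samples below are synthetic.

rng = np.random.default_rng(0)
observed = rng.normal(0.25, 0.05, 500)  # e.g. observed soil moisture
modeled = rng.normal(0.25, 0.08, 500)   # same mean, larger spread

t_p = stats.ttest_ind(observed, modeled, equal_var=False).pvalue
v_p = stats.levene(observed, modeled).pvalue
k_p = stats.ks_2samp(observed, modeled).pvalue

print(f"equal means not rejected:         {t_p > 0.05}")
print(f"equal variances not rejected:     {v_p > 0.05}")
print(f"equal distributions not rejected: {k_p > 0.05}")
```

With this construction the mean test typically passes while the variance and distribution tests fail, which is exactly the Little Washita situation: agreement in the mean alone does not imply agreement of the fields.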
Validation strategies: II Assessing the physical realism.
This must be made explicit in the planning phase of the simulations. The modeler should provide an indication of the dynamics at the pixel scale, illustrating rainfall, infiltration fronts, soil moisture variations, water table dynamics, lateral transfers, etc. Although validation data may not be available, this type of analysis allows readers to assess the 'physical realism' of the model. Without it, the model capabilities are not well identified and cannot be related to observations (e.g. discharge, soil moisture patterns).
Validation strategies: III Assessing the spatial patterns.
Methods for comparing spatial patterns should be defined. As a minimum requirement, the global distributions of parameters and their main moments should be compared (computed vs. measured). The spatial analysis of many authors tends to be qualitative: the reader is essentially asked to visually compare maps. This is not enough.
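A minimal quantitative complement to visual map comparison is to report bias, RMSE, and the cell-by-cell correlation between the modeled and the observed field. The maps below are synthetic stand-ins:

```python
import numpy as np

# Minimal quantitative map comparison: bias, RMSE, and the spatial
# (cell-by-cell) Pearson correlation between two fields. The two
# 'maps' below are synthetic stand-ins for real model/observed grids.

def compare_maps(modeled, observed):
    m, o = modeled.ravel(), observed.ravel()
    return {
        "bias": float(np.mean(m - o)),
        "rmse": float(np.sqrt(np.mean((m - o) ** 2))),
        "corr": float(np.corrcoef(m, o)[0, 1]),
    }

rng = np.random.default_rng(1)
obs = rng.uniform(0.1, 0.4, (20, 20))        # e.g. soil moisture map
mod = obs + rng.normal(0.0, 0.02, (20, 20))  # model = obs + noise
scores = compare_maps(mod, obs)
print({k: round(v, 3) for k, v in scores.items()})
```

A near-zero bias with a low correlation would reveal that the model gets the right amount of water in the wrong places, a failure that visual inspection of two colorful maps easily hides; pattern-oriented scores (e.g. on the spatial variogram or on connectivity) can be added on top of these basic moments.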