Wednesday, June 15, 2022

Evaporation and Transpiration as you have never seen before

A considerable part of root-zone and surface water evaporates and returns to the atmosphere, eventually forming clouds and precipitation again. The process follows quite complicated routes and differs depending on whether it happens from liquid surfaces, soil, or vegetation (and, by the way, animals). In this group of lectures we try to figure out the physical mechanisms at work in the process and give some hints on methods to estimate evaporation and transpiration with physically based models.

Courtesy of Luca Chisté (http://www.lucachiste.it/)

Saturday, June 4, 2022

Realism, Reliability, Replicability, Reproducibility, Robustness, Reusability of (Hydrological) Models

It is often said that a model has to be realistic, reliable, reproducible, replicable, robust, and reusable. However, these characteristics are not uniquely defined. Stretching some aspects, and perhaps restricting the usual semantics, we try here to define their meaning.

Realism (of models) - It is mainly concerned with reproducing nature's behavior (though some philosophical arguments can be made; see, for instance, Dietrich et al., 2003). A realistic model reproduces the variety and the behavioral (dynamical) complexity of the (hydrological) system it is intended to describe. In evaluating this characteristic, one cannot limit the analysis to the model core itself, but must also look at other aspects, such as the calibration tools or whether the model is integrated with data assimilation techniques. (See also reliability.)

Reliability (of models) - A model is always only relatively reliable. Therefore, in the present context we call a model reliable if it is possible to give, at least theoretically, an error estimate for the predicted quantities in any of the circumstances in which it can be used. Depending on the purpose for which a simulation is run, the reliability is sufficient if the error (estimate) is acceptable for that purpose. From a more philosophical point of view, reliability is related to the concept of Popper's falsification process (e.g. Nearing et al., 2021). However, the current shortcut to assess the reliability of models is the use of goodness-of-fit (GOF) indexes, like the Nash-Sutcliffe efficiency (e.g. McCuen et al., 2006) or the Kling-Gupta Efficiency (Gupta and Kling, 2011). These indicators are clearly a good thing, but a barely sufficient way to get a clear picture of the reliability of models, and the joint use of further indicators has been invoked to this end (Addor et al., 2017). Even these are probably insufficient, and confidence intervals and more thorough analyses of uncertainty should be produced.
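To make the goodness-of-fit shortcut concrete, the sketch below computes the two indexes mentioned above from paired observed and simulated discharge series. It is a minimal Python/NumPy illustration (function names and the toy data are ours, not taken from any specific package), using the usual variance-ratio form of the Kling-Gupta Efficiency.

    import numpy as np

    def nash_sutcliffe(obs, sim):
        """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def kling_gupta(obs, sim):
        """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), with r the linear
        correlation, alpha the ratio of standard deviations (simulated over
        observed) and beta the ratio of means."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        r = np.corrcoef(obs, sim)[0, 1]
        alpha = sim.std() / obs.std()
        beta = sim.mean() / obs.mean()
        return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

    # Toy example: a week of observed and simulated daily discharge (m3/s)
    q_obs = np.array([1.2, 1.5, 2.8, 4.0, 3.1, 2.2, 1.8])
    q_sim = np.array([1.0, 1.6, 2.5, 4.4, 3.0, 2.0, 1.7])
    print(f"NSE = {nash_sutcliffe(q_obs, q_sim):.3f}, KGE = {kling_gupta(q_obs, q_sim):.3f}")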




Replicability (of models' behavior) - Replicability refers to the fact that multiple runs of the same model with the same inputs and the same setup must always produce the same results. A model's behavior is replicable if its workflow is recorded or appropriately documented and the workflow is deployed verbatim (without forgetting the observations made in Ceola et al., 2015). If the model is stochastic, however, the replicability concept is transferred to the statistics of the model's outputs. A special case is that of models that depend on parameter calibration. Because parameter fitness is usually established with stochastic searches, in this case the replicability of the whole chain of runs is impossible. However, with the parameters frozen, simulations must be replicable.
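To fix ideas, the hypothetical sketch below runs a toy stochastic storage model twice: when the random seed is recorded as part of the setup, the two runs are replicable bit by bit; when it is not, only the statistics of the outputs can be expected to agree. Model, function and parameter names are invented for the illustration.

    import numpy as np

    def noisy_linear_reservoir(precip, k, seed=None):
        """Toy model: S(t+1) = S(t) + P(t) - k*S(t) + noise; returns discharge k*S."""
        rng = np.random.default_rng(seed)   # recording the seed is part of the workflow
        storage, discharge = 0.0, []
        for p in precip:
            storage += p - k * storage + rng.normal(0.0, 0.05)
            discharge.append(k * storage)
        return np.array(discharge)

    precip = np.array([0.0, 5.0, 2.0, 0.0, 1.0, 0.0])

    run1 = noisy_linear_reservoir(precip, k=0.3, seed=42)
    run2 = noisy_linear_reservoir(precip, k=0.3, seed=42)
    assert np.array_equal(run1, run2)       # same inputs, same setup, same seed: identical

    run3 = noisy_linear_reservoir(precip, k=0.3)   # seed not recorded: only the output
    run4 = noisy_linear_reservoir(precip, k=0.3)   # statistics are expected to agree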


Reproducibility (of models) - The results presented in a scientific context, whether from a real or a virtual, computer-based experiment, must be reproducible autonomously by third parties following the same type of procedures used in the original experiments. This is one of the pillars of science. To be specific, a model's behavior should be reproducible by other codes whose implementation follows the information contained in the documentation of the original software. As a matter of fact, the precise reproducibility of models is often difficult because of hidden implementation details or the behavior of some internally used algorithms.
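One way to picture the difference with replicability is a third party re-implementing a model only from its documented equations. The hypothetical sketch below rebuilds a linear reservoir whose documentation states dS/dt = P - S/Tc and Q = S/Tc, discretised (here, by assumption) with an explicit Euler step of one day; if the published discharges cannot be matched within a stated tolerance, hidden details such as the time stepping, unit conventions or rounding are the usual suspects.

    def reimplemented_reservoir(precip_mm_day, tc_days, s0=0.0):
        """Linear reservoir rebuilt from its documentation: dS/dt = P - S/Tc, Q = S/Tc."""
        storage, discharge = s0, []
        for p in precip_mm_day:
            storage += p - storage / tc_days   # explicit Euler step, dt = 1 day (assumed)
            discharge.append(storage / tc_days)
        return discharge

    # Reproducibility check: compare against the discharges published with the
    # original model, within a tolerance agreed upon beforehand.
    precip = [0.0, 5.0, 2.0, 0.0, 1.0, 0.0]
    print(reimplemented_reservoir(precip, tc_days=4.0))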


Robustness (of models) - It is a property of the informatics and numerics of the models. A robust model works for the largest foreseeable set of use cases without throwing exceptions, or is able to manage them when they are unavoidable. Model algorithms and design should be carefully planned for this purpose (to obtain what is called a programming systems product, Brooks, 1995). Besides, specifically in the DARTHs context, we expect the model to run on different operating systems without having to take care of platform-specific details.
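In code, robustness mostly means validating inputs and handling the unavoidable failures explicitly instead of letting them propagate, while keeping platform details out of the model itself. The toy Python sketch below (file layout and names are invented) reads a one-column forcing series defensively; pathlib is used so that the same code runs unchanged on different operating systems.

    from pathlib import Path
    import csv

    def read_forcing(file_name: str) -> list[float]:
        """Read a one-column forcing series, defensively."""
        path = Path(file_name)               # pathlib keeps the code platform-independent
        if not path.exists():
            raise FileNotFoundError(f"forcing file not found: {path}")
        values = []
        with path.open(newline="") as f:
            for i, row in enumerate(csv.reader(f), start=1):
                if not row:                   # tolerate blank lines
                    continue
                try:
                    values.append(float(row[0]))
                except ValueError:            # manage the unavoidable bad record explicitly
                    raise ValueError(f"non-numeric value at line {i} of {path}: {row[0]!r}")
        if not values:
            raise ValueError(f"empty forcing file: {path}")
        return values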


Reusability (of models) - A model is maximally reusable if any of its parts can be reused effortlessly inside other models that share the appropriate characteristics. The Modelling By Components (MBC) paradigm also aims to enhance this property. Considered together with robustness, reusability allows the simulation of a large set of physical (hydrological) situations, even those for which the model was not initially conceived.
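A minimal way to see what Modelling By Components buys in terms of reusability is to hide each process behind a small shared contract, so that a component written for one model can be plugged, untouched, into another. The sketch below is hypothetical (it does not reproduce any specific framework's component syntax) and only illustrates the principle with two interchangeable runoff components.

    from typing import Protocol

    class RunoffComponent(Protocol):
        """Shared contract: any runoff component maps rainfall to discharge."""
        def step(self, rainfall_mm: float) -> float: ...

    class LinearReservoir:
        def __init__(self, tc_days: float, storage: float = 0.0):
            self.tc, self.storage = tc_days, storage
        def step(self, rainfall_mm: float) -> float:
            self.storage += rainfall_mm - self.storage / self.tc
            return self.storage / self.tc

    class Bucket:
        def __init__(self, capacity_mm: float, storage: float = 0.0):
            self.capacity, self.storage = capacity_mm, storage
        def step(self, rainfall_mm: float) -> float:
            self.storage += rainfall_mm
            overflow = max(self.storage - self.capacity, 0.0)
            self.storage -= overflow
            return overflow

    def run(component: RunoffComponent, rainfall: list[float]) -> list[float]:
        # Any component honouring the contract can be reused here unchanged
        return [component.step(p) for p in rainfall]

    rain = [0.0, 10.0, 4.0, 0.0, 2.0]
    print(run(LinearReservoir(tc_days=3.0), rain))
    print(run(Bucket(capacity_mm=5.0), rain))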


References
  • Addor, Nans, Andrew J. Newman, Naoki Mizukami, and Martyn P. Clark. 2017. “The CAMELS Data Set: Catchment Attributes and Meteorology for Large-Sample Studies.” Hydrology and Earth System Sciences 21 (10): 5293–5313. https://doi.org/10.5194/hess-21-5293-2017.
  • Beven, Keith. 2016. “Facets of Uncertainty: Epistemic Uncertainty, Non-Stationarity, Likelihood, Hypothesis Testing, and Communication.” Hydrological Sciences Journal 61 (9): 1652–65.
  • Brooks, Frederick P., Jr. 1995. The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition. Pearson Education. https://play.google.com/store/books/details?id=Yq35BY5Fk3gC.
  • Ceola, S., B. Arheimer, E. Baratti, G. Blöschl, R. Capell, A. Castellarin, J. Freer, et al. 2015. “Virtual Laboratories: New Opportunities for Collaborative Water Science.” Hydrology and Earth System Sciences 19 (4): 2101–17. https://doi.org/10.5194/hess-19-2101-2015.
  • Clark, Martyn P., Dmitri Kavetski, and Fabrizio Fenicia. 2011. “Pursuing the Method of Multiple Working Hypotheses for Hydrological Modeling.” Water Resources Research 47 (9). https://doi.org/10.1029/2010wr009827.
  • Cox, R. T. 1946. “Probability, Frequency and Reasonable Expectation.” American Journal of Physics 14: 1–13. https://doi.org/10.1119/1.1990764.
  • David, O., J. C. Ascough II, W. Lloyd, T. R. Green, K. W. Rojas, G. H. Leavesley, and L. R. Ahuja. 2013. “A Software Engineering Perspective on Environmental Modeling Framework Design: The Object Modeling System.” Environmental Modelling & Software 39 (c): 201–13. https://doi.org/10.1016/j.envsoft.2012.03.006.
  • David, Olaf, Wes Lloyd, Ken Rojas, Mazdak Arabi, Frank Geter, James Ascough, Tim Green, G. Leavesley, and Jack Carlson. 2014. “Modeling-as-a-Service (MaaS) Using the Cloud Services Innovation Platform (CSIP).” In International Congress on Environmental Modelling and Software. scholarsarchive.byu.edu. https://scholarsarchive.byu.edu/iemssconference/2014/Stream-A/30/.
  • Dietrich, W. E., D. G. Bellugi, L. S. Sklar, J. D. Stock, A. M. Heimsath, and J. J. Roering. 2003. “Geomorphic Transport Laws for Predicting Landscape Form and Dynamics.” In Prediction in Geomorphology, Geophysical Monograph 135, 103–32.
  • Formetta, G., A. Antonello, S. Franceschi, O. David, and R. Rigon. 2014. “Hydrological Modelling with Components: A GIS-Based Open-Source Framework.” Environmental Modelling & Software 55 (May): 190–200. https://doi.org/10.1016/j.envsoft.2014.01.019.
  • Gupta, Hoshin Vijai, and Harald Kling. 2011. “On Typical Range, Sensitivity, and Normalization of Mean Squared Error and Nash-Sutcliffe Efficiency Type Metrics.” Water Resources Research 47 (10). https://doi.org/10.1029/2011wr010962.
  • Knoben, Wouter Johannes Maria, Martyn P. Clark, Jerad Bales, Andrew Bennett, S. Gharari, Christopher B. Marsh, Bart Nijssen, et al. 2021. “Community Workflows to Advance Reproducibility in Hydrologic Modeling: Separating Model-Agnostic and Model-Specific Configuration Steps in Applications of Large-Domain Hydrologic Models.” Earth and Space Science Open Archive. https://doi.org/10.1002/essoar.10509195.1.
  • Martin, Robert C. 2009. Clean Code: A Handbook of Agile Software Craftsmanship. Prentice Hall. https://play.google.com/store/books/details?id=hjEFCAAAQBAJ.
  • McCuen, Richard H., Zachary Knight, and A. Gillian Cutter. 2006. “Evaluation of the Nash–Sutcliffe Efficiency Index.” Journal of Hydrologic Engineering, September, 1–6.
  • Nativi, Stefano, Paolo Mazzetti, and Max Craglia. 2021. “Digital Ecosystems for Developing Digital Twins of the Earth: The Destination Earth Case.” Remote Sensing 13 (11): 2119.
  • Nearing, Grey S., Yudong Tian, Hoshin V. Gupta, Martyn P. Clark, Kenneth W. Harrison, and Steven V. Weijs. 2016. “A Philosophical Basis for Hydrological Uncertainty.” Hydrological Sciences Journal 61 (9): 1666–78.
  • Nearing, Grey S., Frederik Kratzert, Alden Keefe Sampson, Craig S. Pelissier, Daniel Klotz, Jonathan M. Frame, Cristina Prieto, and Hoshin V. Gupta. 2021. “What Role Does Hydrological Science Play in the Age of Machine Learning?” Water Resources Research 57 (3). https://doi.org/10.1029/2020wr028091.
  • Rigon, Riccardo, Giuseppe Formetta, Marialaura Bancheri, Niccolò Tubini, Concetta D’Amato, Olaf David, and Christian Massari. 2022. “HESS Opinions: Participatory Digital Earth Twin Hydrology Systems (DARTHs) for Everyone: A Blueprint for Hydrologists.” Hydrology and Earth System Sciences Discussions, 1–38.
  • Rizzoli, A. E., M. G. E. Svensson, E. Rowe, M. Donatelli, R. M. Muetzelfeldt, T. van der Wal, F. K. van Evert, and F. Villa. 2006. “Modelling Framework (SeamFrame) Requirements.” SEAMLESS.