This is a first attempt at producing some standard templates for documenting our modelling work. It starts from the idea that we use a component-based environment (OMS) and that there are at least three types of actors who use (or help to produce) our software:
Developers: those who develop the components. They need to know the overall scope of the component, its design, its classes, the algorithms it uses, and some references against which to check it all.
Linkers: those who use the components to create modelling solutions (MS). Their work is done before run-time by means of a scripting language (in our case based on Groovy): they produce a .sim file that allows the execution of the model inside the OMS3 console. They also need information about the I/O data each component requires, but they do not necessarily need to know the internals of each component, just as a cook does not need to know how the cooker's internals are made (this is called encapsulation, or information hiding, which has a positive connotation in object-oriented modelling).
Users: they just run the MS, but they should also provide information about their runs. So one of the templates can be used for documenting single runs of an MS, specifying the inputs and the setup of the numerical experiment.
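To make the Linkers' role concrete, here is a minimal sketch of what an OMS3 .sim file can look like. All component class names, field names, and file paths below are hypothetical placeholders, not actual components; the point is only that the Linker composes and parameterises components through their declared I/O, without touching their internals:

```groovy
// Hypothetical OMS3 simulation (.sim) file, written in the Groovy-based
// OMS3 DSL. Names are illustrative only.
sim {

    model {

        // declare the components to assemble (hypothetical classes)
        components {
            "reader"  "example.io.MeteoReader"
            "runoff"  "example.model.RunoffComponent"
        }

        // wire an output field of one component to an input field of another;
        // this is all a Linker needs to know about each component: its I/O contract
        connect {
            "reader.precipitation"  "runoff.precipitation"
        }

        // set the exposed input parameters (hypothetical names and values)
        parameter {
            "reader.file"      "$oms_prj/data/meteo.csv"
            "runoff.timeStep"  3600
        }
    }
}
```

Running such a file from the OMS3 console executes the composed model; the "Linkers' documentation" discussed here is precisely what one needs in order to fill in the `connect` and `parameter` blocks correctly.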
Thus, we actually delineated four types of documentation (for the internals of each component, for the externals of each component, for their compositions, and for specific runs), which we can address with four templates. They are produced in LaTeX and are still experimental:
One could observe that these four types of documentation should actually be produced at different stages of a component's production.
The Developers' documentation should be produced while designing the component and, ideally, before the component is produced: its structure can greatly help the design and prevent the production of software without design. The Linkers' documentation should be produced just after the component is produced, while it is being tested. The Users' documentation, which presents the assembly (composition) of components, should be produced during the setup of the experiments or, for the fourth type, before (or at the same time as) a run is launched. This requires a little discipline, but it would save a lot of personal and collective time. Besides, it would then be easy to comply with the requirement of making research replicable.
All of this is work in progress and will be improved during the actual use of the templates. Suggestions are welcome.
That said, all of the above concerns component documentation; we are still missing an appropriate type of documentation for applications, like, for instance, the OMS console. For this type of documentation we also have a template, copied from the uDig walkthroughs. An example, also in LaTeX, can be found here.
Please observe that these executables assume the use of the MeteoIO libraries for the interpolation and treatment of input data, and not the use of the internal routines, called meteodistr. Therefore some of the examples, based on the latter routines, might not work.
There is an ongoing discussion among the GSoC mentors, of whom I am one this year, regarding a student who felt depressed and the way we judge his/her performance. The issue has a broader perspective, brought up by Robert C. Helling (GS), which is interesting to note here.
"To put the issue in a broader perspective: In my paid life, I am the scientific coordinator of a graduate program at a major German university. After a few years, I realised there was quite a number of students that have to take breaks from university due to mental health problems (roughly 2 / 100 students / year) and I was worried that somehow this was the fault of our program (for example by putting too much pressure on the students). So I looked into this a bit and sought advice from experts in this area.
What I learned was that this is totally normal; it’s a phenomenon that is just not visible, as there is a shyness to talk about it because mental health problems are still too much regarded as a personal weakness: roughly one in three people at some point during their life suffer from a mental health problem so severe that it affects their ability to work and/or participate in social life for an extended period of time. And these do not show up at random times but usually at turning points in their life, for example when a loved one dies, a relationship breaks up, or you move to a new social environment (e.g. city / country / university), etc.
In the cases I met, they were relieved to hear that it is not just their personal problem (often perceived as a failure) but that it is quite common, as I just explained. So in conclusion, I don’t think it helps to cover up the existence of these problems and pretend they don’t exist; we should talk about them openly (of course anonymously)."
From my personal experience, I too know that failing is not so uncommon and can happen, for various reasons, to anybody. Not only geniuses get sick (remember John Nash), even if the debate on whether creativity and mental disorders are connected offers some evidence that the "mad" are also creative. Anyway, I rather support the idea that, looked at very closely, all of us have little insane behaviours which, paraphrasing Marcel Proust, "are necessary to make reality more bearable", but the equilibrium is unstable.
However, as modellers, or scientists, we have to keep investigating the reliability of our models. All of my scientific activity involves discussing issues related to land-atmosphere interactions, where I know there is still a long road ahead of us, and land-atmosphere interactions in climate models are no exception.
A few readings made me write this blog post. The first brought to my attention this TEDx talk by Steve Easterbrook, in which he analyses how climate models are built in order to justify their correctness. Also interesting is his paper in GMD treating the same case. He and his coauthor, J. Pipitone, applied methods of software analysis to infer model quality. I found their method interesting but not decisive (not applicable to my model, for instance). However, the introduction and the overall arguments of their paper offer a point of view to keep firmly in mind. The discussion of what a good model is for scientists, as compared to what a good model is for others, for instance computer scientists, is also nice.
In my opinion, climate science is not settled; relatively, yes: the detractors' arguments are really very weak. But it is indeed a fact that there is much work to do and big room for improvement.
Here comes the other paper, by I.C. Prentice (GS), recently published in Atmospheric Chemistry and Physics, where he also presents a nice and informative review of climate models. When it goes into the details of what a new model should be, the paper is pretty much VIC+ oriented (which could be not new enough for some), but it is crystal clear in singling out some of the most important aspects of the matter. It is a good read.