Performance Assessment Significance

The solar radiation literature is rich in validation reports of new, isolated models, and in performance assessment studies that intercompare similar models. But readers or users might ask: what is the significance of all these results? Interestingly, from a philosophical perspective, it has been boldly postulated that “Verification and validation of numerical models of natural systems is impossible” and that “Models can only be evaluated in relative terms, and their predictive value is always open to question” (Oreskes et al. 1994). These arguments are certainly debatable and may appear of little concern in the context of daily engineering tasks, for instance. Nonetheless, a thorough literature review reveals that radiative models are indeed not always “validated” or “verified” convincingly, due to the non-observance of some important rules, which are discussed in what follows.

• Rule #1—Dataset independence

The dataset used to validate a model should be as independent as possible from the dataset used to derive it. This can generally be achieved by randomly selecting, for instance, two subsets: one for development and one for validation. Random selection is a better procedure than using, e.g., two years of data for model derivation and one year for model validation, since autocorrelation is likely to exist from one year to the next. Another aspect of this rule is that circular calculations must be absolutely avoided. These would occur, for instance, if the model under scrutiny were used in inverted mode to derive the turbidity data that it needs in normal mode. This rule seems obvious, but it is not always followed in practice.
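A minimal sketch of such a random split is given below, assuming the records are simply indexed by an integer array; the dataset size, the 70/30 ratio, and the variable names are illustrative assumptions only.

```python
# Random, disjoint split into development and validation subsets (Rule #1).
import numpy as np

rng = np.random.default_rng(seed=42)        # fixed seed for reproducibility
n_records = 10_000                          # hypothetical number of records
records = np.arange(n_records)              # stand-in for the real records

# Shuffle once, then split into two disjoint subsets.
shuffled = rng.permutation(records)
n_dev = int(0.7 * n_records)
development_set = shuffled[:n_dev]          # used to derive/fit the model
validation_set = shuffled[n_dev:]           # never touched during fitting

# Sanity check: the two subsets share no record.
assert len(np.intersect1d(development_set, validation_set)) == 0
```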

• Rule #2—Uncertainty analysis

As indicated in previous sections, an uncertainty analysis of the reference dataset (presumably measured) is essential here, and the resulting experimental uncertainty should be clearly established and stated. A sensitivity analysis carried out on the model’s inputs can then determine how they should be filtered, so that only those conditions leading to prediction errors lower than the experimental uncertainty are considered.
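The following sketch illustrates a simple one-at-a-time sensitivity analysis under these principles; the toy model, the input uncertainties, and the 20 W/m² experimental uncertainty are hypothetical values chosen only for demonstration.

```python
# One-at-a-time sensitivity analysis against experimental uncertainty (Rule #2).
import numpy as np

def clear_sky_irradiance(turbidity, zenith_deg):
    """Toy clear-sky model used only to illustrate the procedure."""
    mu0 = np.cos(np.radians(zenith_deg))
    return 1000.0 * mu0 * np.exp(-0.1 * turbidity / max(mu0, 1e-6))

# Nominal input values and their assumed (1-sigma) uncertainties.
inputs = {"turbidity": (3.0, 0.5), "zenith_deg": (40.0, 0.2)}
experimental_uncertainty = 20.0   # W/m^2, assumed measurement uncertainty

base = clear_sky_irradiance(*(v[0] for v in inputs.values()))
for name, (value, sigma) in inputs.items():
    perturbed = {k: v[0] for k, v in inputs.items()}
    perturbed[name] = value + sigma
    delta = abs(clear_sky_irradiance(**perturbed) - base)
    flag = "acceptable" if delta < experimental_uncertainty else "filter these conditions"
    print(f"{name}: output change {delta:.1f} W/m^2 -> {flag}")
```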

• Rule #3—Data filtering

Not all available measured data points are suitable for validation purposes. They first need to be checked for inconsistencies, egregious errors, etc. Irradiance data from research-class sites are generally well quality-controlled, but spurious data can still exist. (See the discussion on data quality assessment in Chap. 1, Sect. 8.3.) A posteriori tests are recommended with data from any source, using different possible strategies (Claywell et al. 2005; Hay 1993a; Muneer and Fairooz 2002; Muneer et al. 2007). Also, as a result of Rule #2 above, all input data conducive to low accuracy should be discarded. For instance, it is generally observed that under low-sun conditions (high zenith angles) both measured and modeled uncertainties become too high to draw valid conclusions.
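A minimal filtering sketch along these lines is shown below, assuming the data are held in a table with hypothetical column names “ghi” (measured global horizontal irradiance, W/m²) and “zenith” (solar zenith angle, degrees); the 85° and 1400 W/m² thresholds are illustrative, not prescriptive.

```python
# Basic quality filtering before validation (Rule #3).
import pandas as pd

def filter_for_validation(df, max_zenith=85.0, max_ghi=1400.0):
    """Keep only records deemed suitable for model validation."""
    mask = (
        (df["zenith"] < max_zenith)      # drop low-sun (high zenith) records
        & (df["ghi"] >= 0.0)             # drop physically impossible negatives
        & (df["ghi"] <= max_ghi)         # drop spuriously high readings
        & df["ghi"].notna()              # drop missing measurements
    )
    return df[mask]

# Tiny synthetic example: only the first record survives the filter.
data = pd.DataFrame({"zenith": [30.0, 88.0, 55.0], "ghi": [650.0, 45.0, -2.0]})
print(filter_for_validation(data))
```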

• Rule #4—Unity of time and space

Ideally, model inputs should have the same time resolution as the validation data, and should be obtained at the same site. This can rarely be achieved, due to the frequent constraint that some inputs are not measured on site at the proper frequency, and must therefore be extrapolated, interpolated, or averaged in various ways. When these imperfect data are used for a highly sensitive input, the model’s performance can be significantly degraded. This has been demonstrated in the case of clear-sky meteorological models, for instance, in relation to the use of instantaneous vs. time-averaged turbidity data (Battles et al. 2000; Ineichen 2006; Olmo et al. 2001). This rule cannot be respected either when the model uses gridded input data, such as cloud information, and is tested against site-specific reference data, or “ground truth”. This problem is known to introduce significant random errors (Perez et al. 2002). (See also the accuracy discussion at http://eosweb.larc.nasa.gov/cgi-bin/sse/sse.cgi?na+s05#s05 and references therein.)
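When the time resolutions do differ, the mismatch should at least be handled explicitly and documented. The sketch below shows one such explicit step, assuming hypothetical hourly turbidity estimates must be averaged down to the daily resolution of the validation data; the series name, dates, and frequencies are placeholders.

```python
# Averaging an input series to the time resolution of the validation data (Rule #4).
import numpy as np
import pandas as pd

# Hypothetical hourly turbidity input (e.g., from a nearby station).
hourly_index = pd.date_range("2015-08-01", periods=48, freq="h")
turbidity_hourly = pd.Series(
    3.0 + 0.2 * np.random.default_rng(0).standard_normal(48),
    index=hourly_index, name="turbidity",
)

# Average down to the daily resolution of the validation dataset.
turbidity_daily = turbidity_hourly.resample("D").mean()
print(turbidity_daily)
```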

• Rule #5—Proper ancillary data

Only the best possible ancillary data should be used, particularly for the most model-sensitive inputs. This is often critical to the performance of a model. Using low-quality ancillary data results in biased or inconclusive performance assessments (Ineichen 2006).

• Rule #6—Radiative closure

Ideally, a model validation should be conducted as a radiative closure experiment. This means that all inputs to the model are measured independently with co-located instruments, at the required frequency, and with sufficiently small uncertainty. If the error bars (estimated uncertainty) of the modeled results overlap the measurement error bars (uncertainty in the measurements) to a significant extent, then the model can be considered validated. If not, either the model’s intrinsic performance is deficient, or the input data are of too low quality and do not satisfy the requirements of Rule #5.
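A minimal sketch of such an overlap test is given below, assuming modeled and measured irradiances and their estimated (1-sigma) uncertainties are already available; all numerical values are illustrative only.

```python
# Simple radiative closure check: do modeled and measured error bars overlap? (Rule #6)
import numpy as np

modeled = np.array([820.0, 640.0, 450.0])     # W/m^2, model output
u_model = np.array([25.0, 20.0, 15.0])        # estimated model uncertainty
measured = np.array([805.0, 655.0, 430.0])    # W/m^2, reference measurements
u_meas = np.array([15.0, 15.0, 12.0])         # measurement uncertainty

# Closure test record by record: the two intervals overlap if the
# absolute difference does not exceed the sum of the uncertainties.
overlap = np.abs(modeled - measured) <= (u_model + u_meas)
print(f"Error bars overlap for {overlap.mean():.0%} of the records")
```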

• Rule #7—Validity limits

In most cases, a model is only validated for specific atmospheric or climatic conditions (see Rule #4). It is important to specify the limits of validity of the model to avoid inadvertent extrapolations by users. It is also a common observation that empirically determined equations using high-order polynomials are subject to divergence if used outside of their intended limits. More robust mathematical modeling is therefore recommended, using, e.g., polynomial ratios. Finally, the required time scale of the input data must be clearly identified to avoid misinterpretations. For instance, Gueymard et al. (1995) showed that using a sunshine-based global radiation model with empirical coefficients originally developed for yearly-mean sunshine, but incorrectly applied to monthly means for convenience, could be detrimental.
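The sketch below illustrates the divergence issue with synthetic data: a high-order polynomial fitted over a limited range can behave poorly when extrapolated, whereas a simple polynomial ratio tends to remain better behaved. The functional forms, the degree 8, and the evaluation point are arbitrary choices made only for illustration.

```python
# High-order polynomial vs. polynomial-ratio fit outside the validity range (Rule #7).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)                        # fitted (valid) range
y = 1.0 / (1.0 + 2.0 * x) + 0.01 * rng.standard_normal(50)

# High-order polynomial fit.
poly_coeffs = np.polyfit(x, y, deg=8)

# Simple polynomial ratio: (a + b*x) / (1 + c*x).
def ratio(x, a, b, c):
    return (a + b * x) / (1.0 + c * x)

popt, _ = curve_fit(ratio, x, y, p0=(1.0, 0.0, 1.0))

x_out = 2.0                                          # outside the validity limit
print("polynomial at x=2:", np.polyval(poly_coeffs, x_out))
print("ratio fit at x=2: ", ratio(x_out, *popt))
```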
