When evaluating physical or empirical models, whether singly or in groups, an important problem is that the performance evaluation units and temporal resolution used in a new evaluation often differ from those of the original validation. This problem is much larger for satellite-based model evaluation, discussed briefly in the next subsection.
For example, a model developer may quote RMSE and MBE errors as percentage errors in monthly-mean totals of radiation, while an independent evaluation may compute the errors in daily irradiation units (MJ m⁻² or kWh m⁻²) on the basis of hourly diurnal profiles. Note that, from Sect. 5.3, both the model limits of validity (Rule 7) and the unity of time and space (Rule 4) should be considered when an independent validation is developed. Attempts to extrapolate or interpolate model performance beyond the validation limits should be described in detail, with appropriate caveats. Disparate criteria make it difficult to decide how variations in the data sets (i.e., site dependencies) may be affecting the models.
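To make the incompatibility of these reporting conventions concrete, the following sketch computes MBE and RMSE from the same hypothetical set of hourly irradiance values under three common conventions: absolute irradiance units, percentages of the mean measured value, and daily irradiation totals. All numerical values are invented for illustration; the metric definitions (mean bias and root-mean-square of model-minus-measurement differences) are the standard ones.

```python
import math

# Hypothetical hourly measured and modeled global irradiance (W m^-2)
# for a few daylight hours; the values are illustrative only.
measured = [120.0, 340.0, 560.0, 690.0, 650.0, 480.0, 250.0]
modeled  = [135.0, 320.0, 590.0, 670.0, 665.0, 500.0, 230.0]

def mbe(model, obs):
    """Mean bias error, in the same units as the inputs."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    """Root-mean-square error, in the same units as the inputs."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

mean_obs = sum(measured) / len(measured)

# Convention A: errors in irradiance units (W m^-2)
print(f"MBE  = {mbe(modeled, measured):+.1f} W/m^2")
print(f"RMSE = {rmse(modeled, measured):.1f} W/m^2")

# Convention B: errors as percentages of the mean measured irradiance
print(f"MBE  = {100 * mbe(modeled, measured) / mean_obs:+.1f} %")
print(f"RMSE = {100 * rmse(modeled, measured) / mean_obs:.1f} %")

# Convention C: difference in daily irradiation totals (MJ m^-2),
# treating each value as an hourly mean (1 h = 3600 s, 1 MJ = 1e6 J)
daily_obs = sum(measured) * 3600 / 1e6
daily_mod = sum(modeled) * 3600 / 1e6
print(f"Daily total difference = {daily_mod - daily_obs:+.2f} MJ/m^2")
```

Because the three conventions apply different normalizations and aggregation periods to the same residuals, numbers reported under one convention cannot be compared directly with numbers reported under another without access to the underlying data.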
On the other hand, quantitative results in any form still convey an idea of the expected uncertainty in models, as long as the analysis is adequately described. For example, in the original detailed report on his Direct Insolation Simulation Code (DISC) separation model for converting hourly global horizontal to hourly direct beam irradiance, Maxwell (1987) shows monthly average MBE and RMSE errors as percent deviations, along with diurnal profiles of hourly differences between modeled and measured data for specific dates. The performance results are based on data from three sites not used in the model development. In an independent evaluation of the Maxwell model, Perez et al. (1990b) show MBE and RMSE errors for 13 individual sites (as well as two additional models), sorted by zenith angle and clearness index (KT), in irradiance units (W m⁻²). Their analysis gives a more detailed picture of the model's performance, but it is difficult to compare with the original analysis.
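The binned evaluation described above can be sketched in a few lines: hourly residuals are grouped into (zenith angle, KT) bins, and MBE and RMSE are reported per bin in irradiance units. The bin widths (20° in zenith angle, 0.2 in KT) and all data values below are assumptions for illustration only, not the actual binning used by Perez et al. (1990b).

```python
import math
from collections import defaultdict

# Illustrative hourly records: (zenith angle in degrees, clearness index KT,
# measured and modeled direct beam irradiance in W m^-2). Values invented.
records = [
    (30.0, 0.70, 720.0, 700.0),
    (35.0, 0.65, 650.0, 665.0),
    (55.0, 0.45, 310.0, 340.0),
    (60.0, 0.30, 120.0, 150.0),
    (75.0, 0.25,  60.0,  85.0),
    (80.0, 0.55, 240.0, 225.0),
]

def bin_key(zenith, kt, z_step=20.0, kt_step=0.2):
    """Assign a record to a (zenith, KT) bin identified by its lower edges."""
    return (z_step * math.floor(zenith / z_step),
            kt_step * math.floor(kt / kt_step))

# Collect model-minus-measurement residuals per bin
bins = defaultdict(list)
for zen, kt, obs, mod in records:
    bins[bin_key(zen, kt)].append(mod - obs)

# Report per-bin MBE and RMSE in irradiance units
for key in sorted(bins):
    errs = bins[key]
    bin_mbe = sum(errs) / len(errs)
    bin_rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    z0, k0 = key
    print(f"zenith [{z0:.0f},{z0 + 20:.0f}) deg, KT [{k0:.1f},{k0 + 0.2:.1f}): "
          f"n={len(errs)}, MBE={bin_mbe:+.1f} W/m^2, RMSE={bin_rmse:.1f} W/m^2")
```

Sorting errors by sky condition in this way can reveal systematic biases (e.g., at large zenith angles or low clearness indices) that a single site-wide percentage figure would hide, which is precisely why such an analysis is hard to reconcile with monthly-mean percent deviations.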
With the increasing interest in and need for solar resource information, and the reduced availability of ground-based measurements in many countries, a growing number of authors have carried out independent evaluation studies of multiple solar radiation models. Excellent examples of these review articles include studies considering five models and four data sets (Perez and Stewart 1986); three models and fourteen data sets (Perez et al. 1990b); five models at two sites (Badescu 1997); 38 models of atmospheric (infrared) emission and 15 data sets (Skartveit et al. 1996); a combination of twelve transposition models for tilted surfaces and four albedo submodels at four sites (Psiloglou et al. 1996); two sunshine models, three empirical models, and four datasets (Iziomon and Mayer 2002); seven sunshine models and 77 sites (Soler 1990); six irradiance models at four sites (Battles et al. 2000); 21 irradiance models and six datasets (Gueymard 2003a); and eight models at sixteen sites (Ineichen 2006). Moreover, Task IX of the International Energy Agency's (IEA) Solar Heating and Cooling Programme supported two important validation studies using many models and international datasets. One was devoted to the prediction of hourly or daily horizontal radiation from meteorological data (Davies and McKay 1982, 1989; Davies et al. 1984, 1988), and the other to the prediction of hourly or daily tilted radiation from horizontal data (Hay and McKay 1985, 1986; Hay 1993b). The latter study included most, if not all, transposition models available in the literature at the time. The former IEA study included only a small number of meteorological models. A different approach is used in Sect. 6.3, where a sample of all known clear-sky models able to predict both direct and diffuse radiation instantaneously on a horizontal surface is reviewed.
Since that study is intended for demonstration purposes only, it is geographically limited to a single site, using the small benchmark dataset previously mentioned in Sects. 3 and 5.1.