Modeling Solar Radiation at the Earth’s Surface

Model Performance Benchmarking and Ranking

This section provides a concrete example of how a comparative performance assessment study can be conducted when a large number of models is involved. In the present case, a rather exhaustive literature survey provided a list of 35 clear-sky broadband irradiance models that can predict instantaneous or short-term (e.g., hourly) direct and diffuse irradiances from limited information on the optical properties of the atmosphere. A computer program that can use the benchmark dataset...
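Purely as an illustration of what such a program might look like (the model interfaces, input fields, and ranking metric below are hypothetical placeholders, not the actual code used in the study), a minimal benchmarking harness could be organized as follows:

```python
# Hypothetical sketch of a benchmarking harness (not the actual program
# used in the study). Each candidate model is a callable returning direct
# normal and diffuse irradiance from a record of atmospheric inputs; all
# models are run over the same benchmark dataset and ranked by the root
# mean square error (RMSE) of their direct component.
import math

def rmse(predicted, observed):
    """Root mean square error between two equal-length sequences."""
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

def benchmark(models, records):
    """models: dict of name -> callable(record) -> (dni, diffuse);
    records: list of dicts with model inputs and a reference 'dni_ref'."""
    scores = {}
    for name, model in models.items():
        predicted = [model(rec)[0] for rec in records]
        reference = [rec["dni_ref"] for rec in records]
        scores[name] = rmse(predicted, reference)
    # Rank the models from lowest to highest RMSE
    return sorted(scores.items(), key=lambda item: item[1])
```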

Fig. 20.6 Comparison between meteorological (METSTAT) site-based and (Perez) satellite-based direct normal irradiance (DNI) solar radiation estimates for the 1991-2005 US National Solar Radiation Data Base update (Wilcox et al. 2007)...


Satellite Model Evaluation

Many of the issues discussed above become much more noticeable and complicated for validation of models based on satellite input data. The questions of temporal and spatial consistency are particularly vexing, as satellite data, while uniform, are usually sparse in time compared to surface observations. Spatial concerns are an even greater problem, since surface observations are ‘point’ observations, and satellite observations are spatially extended, even if at very high spatial resolution. Perez et al. (1997, 2001) provide a detailed review of these issues...
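One common way to make the two kinds of data comparable, sketched below under simplified assumptions (a regular pixel grid in geographic coordinates; the array and function names are illustrative, not taken from any particular satellite product), is to average the satellite pixels surrounding a ground station before comparing them with the time-matched surface measurement:

```python
# Minimal sketch: compare a 'point' surface observation with the mean of
# the satellite pixels that fall within a small radius of the station.
# Grid layout, variable names, and the radius are illustrative assumptions.
import numpy as np

def satellite_vs_ground(pixel_lat, pixel_lon, pixel_irr,
                        station_lat, station_lon, station_irr,
                        radius_deg=0.25):
    """pixel_* are 2-D arrays for one satellite scene; station_irr is the
    time-matched surface measurement. Returns (satellite mean, bias)."""
    # Crude angular distance, adequate for a small neighbourhood
    dist = np.hypot(pixel_lat - station_lat, pixel_lon - station_lon)
    nearby = pixel_irr[dist <= radius_deg]
    sat_mean = float(nearby.mean())
    return sat_mean, sat_mean - station_irr
```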


Physical and Empirical Model Evaluation

With physical and empirical model evaluation of single or multiple models, an important problem is that, often, the performance evaluation units and temporal resolution used in the new evaluation are different from those in the original validation. This is a much larger problem for satellite-based model evaluation, discussed briefly in the next subsection.

For example, a model developer may quote RMSE and MBE errors in terms of percentage errors in monthly-mean totals of radiation, while an independent evaluation may compute the errors in daily irradiation units (MJ m⁻² or kWh m⁻²), on the basis of hourly diurnal profiles. Note that, from Sect. 5.3, both the model limits of validity (Rule 7 in Sect. 5...
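To make this difficulty concrete, the sketch below (with hypothetical array names, and assuming the series covers whole days) shows how the same hourly modeled and measured series yield different-looking MBE and RMSE figures depending on whether the statistics are computed on hourly values, on daily totals, or as percentages of the mean daily total:

```python
# Illustrative sketch (hypothetical data layout): the same modeled and
# measured hourly irradiation series give different-looking MBE and RMSE
# figures depending on whether the statistics are computed on hourly
# values, on daily totals, or expressed as a percentage of the mean.
import numpy as np

def mbe_rmse(pred, obs):
    err = np.asarray(pred, dtype=float) - np.asarray(obs, dtype=float)
    return err.mean(), np.sqrt((err ** 2).mean())

def hourly_and_daily_stats(pred_hourly, obs_hourly, hours_per_day=24):
    """pred_hourly, obs_hourly: hourly irradiation (e.g., MJ m-2) covering
    whole days, so their length must be a multiple of hours_per_day."""
    mbe_h, rmse_h = mbe_rmse(pred_hourly, obs_hourly)
    # Aggregate to daily totals before recomputing the same statistics
    pred_d = np.asarray(pred_hourly, dtype=float).reshape(-1, hours_per_day).sum(axis=1)
    obs_d = np.asarray(obs_hourly, dtype=float).reshape(-1, hours_per_day).sum(axis=1)
    mbe_d, rmse_d = mbe_rmse(pred_d, obs_d)
    mean_d = obs_d.mean()
    return {
        "hourly": (mbe_h, rmse_h),          # in MJ m-2 per hour
        "daily": (mbe_d, rmse_d),           # in MJ m-2 per day
        "daily_percent": (100 * mbe_d / mean_d, 100 * rmse_d / mean_d),
    }
```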


Independent Model Performance Evaluation

Independent evaluation of models based on input parameters and measured data for different sites is prevalent in the literature. One or more models are evaluated with one or more new, independent validation data sets.


6.1 Performance of Model Elements

One informative aspect of model performance assessment is the comparison of model elements or functions to previously developed, similar model components. Examples include so-called “simple” broadband or spectral transmittance models. These types of models were briefly mentioned in Sect. 4 and described in Eq. (20.4). Comparisons of individual transmittance functions from some similar models are detailed elsewhere (Gueymard 1993, 2003b). These model elements may be changed or improved during the model’s development, or as different, possibly more detailed information on model parameters becomes available.
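The structure referred to here is a product of individual broadband transmittances, each tied to one attenuating constituent. A minimal sketch of that structure is shown below; the numerical values are placeholders and no real parameterization is implied, since the transmittance functions of published models such as Eq. (20.4) are far more detailed:

```python
# Sketch of the product-of-transmittances structure discussed above: the
# direct normal irradiance is the extraterrestrial normal irradiance
# attenuated by a product of broadband transmittances, each tied to one
# atmospheric constituent. The values below are placeholders only, not
# the parameterizations of any published model.

E0N = 1361.0  # extraterrestrial normal irradiance, W m-2 (nominal value)

def direct_normal_irradiance(transmittances):
    """transmittances: iterable of broadband transmittances, e.g.
    (T_Rayleigh, T_ozone, T_gas, T_water, T_aerosol), each in [0, 1]."""
    dni = E0N
    for t in transmittances:
        dni *= t
    return dni

# Swapping in a different aerosol (or water vapor, etc.) transmittance is
# how an individual model element can be replaced and re-evaluated.
print(direct_normal_irradiance((0.91, 0.98, 0.99, 0.94, 0.85)))
```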

For instance, Bird and Hulstrom (1981a, b) compared the transmittance functions developed for their model with those of several other authors (Atwater and Ball 1978; Davies and Hay 197...


Some Performance Assessment Results

Solar radiation model performance assessment is usually carried out from two different perspectives. First, the assessment of models made by their developers is usually referred to as validation, as discussed in previous sections. Second, models can be evaluated by testing against independent data sets, usually by authors independent from the model’s development.


Performance Assessment Significance

The solar radiation literature is rich in validation reports of new, isolated models, or in performance assessment studies in which similar models are intercompared. But, readers or users might ask, what is the significance of all these results? Interestingly, from a philosophical perspective, it has been boldly postulated that “Verification and validation of numerical models of natural systems is impossible” and that “Models can only be evaluated in relative terms, and their predictive value is always open to question” (Oreskes et al. 1994). These arguments are certainly debatable and may appear of little concern in the context of daily engineering tasks, for instance...


Quantitative Assessment

Most generally, the performance of various models is evaluated against a single dataset, with the aim of selecting the best-performing model for that particular dataset and, by extrapolation, for the climatic conditions represented by the dataset. Using qualitative information from plots such as Figs. 20.3-20.5 would be too difficult or subjective for this task. A statistical analysis of the actual modeling errors must therefore be performed. (Sect. 5.3 goes into more detail on how to isolate “modeling errors”.) An individual error, e_i, is, by definition, the difference between a predicted value (of radiation, presumably) and the corresponding “true value”. It must be emphasized that the true value is never known...
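For reference, using notation assumed here (with \hat{E}_i the modeled and E_i the measured value for data point i out of n, the latter serving in practice as the best available proxy for the unknown true value), the individual error and the two summary statistics most commonly derived from it can be written as

e_i = \hat{E}_i - E_i, \qquad \mathrm{MBE} = \frac{1}{n}\sum_{i=1}^{n} e_i, \qquad \mathrm{RMSE} = \left( \frac{1}{n}\sum_{i=1}^{n} e_i^{2} \right)^{1/2}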
