The high-dispersion inverse sensitivity curve is defined to be the product of the low-dispersion inverse sensitivity curve and a wavelength-dependent high-to-low absolute calibration function (Cassatella 1994, 1996, 1997a, 1997b):
C = n / N
where C is the calibration function, n is the low-dispersion net flux normalized to the exposure time, and N is the high-dispersion ripple-corrected net flux, likewise normalized to the exposure time. The calibration function represents the efficiency of high-dispersion spectra relative to low-dispersion spectra and was determined empirically from pairs of high- and low-dispersion spectra obtained close together in time so as to minimize the effects of the time-dependent sensitivity degradation. C is represented functionally as a polynomial in wavelength of the form
C(λ) = a_0 + a_1 λ + a_2 λ^2 + ... + a_n λ^n
where λ is the wavelength in Ångstroms. The coefficients used in the calibration function are given in Table 11.11.
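The following Python sketch illustrates the procedure described above: forming C as the ratio of exposure-time-normalized net fluxes from a co-temporal high/low-dispersion pair, representing it as a polynomial in wavelength, and scaling the low-dispersion inverse sensitivity to obtain the high-dispersion one. All array names, the wavelength grid, the placeholder flux and exposure-time values, and the polynomial degree are illustrative assumptions, not values from this document (the adopted coefficients are those in Table 11.11).

import numpy as np

# Hypothetical inputs: a low-dispersion spectrum and a ripple-corrected
# high-dispersion spectrum of the same star, taken close in time and
# resampled onto a common wavelength grid (Ångstroms).
wave = np.linspace(1150.0, 1975.0, 500)            # common wavelength grid (assumed)
low_net = np.random.uniform(80.0, 120.0, wave.size)   # low-dispersion net flux (placeholder)
high_net = np.random.uniform(40.0, 60.0, wave.size)   # high-dispersion ripple-corrected net flux (placeholder)
t_low, t_high = 30.0, 1200.0                        # exposure times in seconds (placeholder)

# Empirical calibration function: C = n / N, with both net fluxes
# normalized to their exposure times.
n = low_net / t_low
N = high_net / t_high
C_empirical = n / N

# Represent C as a polynomial in wavelength (degree 3 is an assumption;
# the adopted coefficients are tabulated in Table 11.11).
coeffs = np.polynomial.polynomial.polyfit(wave, C_empirical, deg=3)
C_poly = np.polynomial.polynomial.polyval(wave, coeffs)

# High-dispersion inverse sensitivity is the product of the low-dispersion
# inverse sensitivity and the calibration function.
S_inv_low = np.full(wave.size, 2.0e-14)             # placeholder low-dispersion inverse sensitivity
S_inv_high = S_inv_low * C_poly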