

Chapter 3

Pipeline Processing

3.1 Introduction

The science data processing pipeline for FUSE is called CalFUSE; this chapter gives a brief overview of it. While an "Intermediate" user may only need to become acquainted with the FUSE data analysis software provided on the CalFUSE homepage, an "Advanced" user should be fully familiar with the pipeline's functions and the tools provided with it. Because CalFUSE was run in tandem with the Operations Pipeline Unified System (OPUS), a brief description of OPUS is included as well.

3.2 OPUS

FUSE science data were dumped from the spacecraft solid-state recorder 6-8 times a day, when the satellite passed over the ground station at the University of Puerto Rico at Mayaguez. After the data were transferred to the Satellite Control Center at JHU and checked for completeness, the corresponding instrument and spacecraft data were extracted from the engineering telemetry archive. The science and engineering data files were then sent to a FUSE-specific version of the automated processing system OPUS (Rose et al. 1998). OPUS ingested the data downlinked by the spacecraft and produced the files that served as input to the CalFUSE pipeline. For each exposure, OPUS generated six data files: four raw data files (one for each detector segment; see Section 4.2.1) and two time-resolved engineering files (the housekeeping and jitter files; see Section 5.2). It then managed the execution of CalFUSE and the files CalFUSE produced, and called the additional routines that combine the spectra from each channel and exposure into a set of observation-level spectral files. OPUS also read the FUSE Mission Planning Database (which contained target information from the individual observing proposals, as well as instrument configuration and scheduling information from the mission timeline) to populate raw-file header keywords and to verify that all of the data expected from an observation were obtained.

3.3 Overview of CalFUSE

The CalFUSE pipeline was designed with three principles in mind. The first was that CalFUSE would follow the path of a photon backwards through the instrument, correcting where possible for the instrumental effects introduced at each step. The interested reader is referred to Dixon et al. (2007) for details of each step that the data pass through when running CalFUSE.

The second principle was to make the pipeline as transportable and modular as possible. CalFUSE is written in C and runs on the Solaris, Linux, and Mac OS X (versions 10.2 and higher) operating systems. The pipeline consists of a series of modules called by a shell script, and individual modules may also be executed from the command line. Each module performs a set of related corrections (screening data, removing motions, etc.) by calling a series of subroutines.

The third principle was to maintain the data as a photon list in an Intermediate Data File (IDF) until the final module of the pipeline. Input arrays are read from the IDF at the beginning of each module, and output arrays are written at the end. Bad photons are flagged but not discarded, so the user can examine, filter, and combine processed data files without re-running the pipeline. This makes the IDF files important for those who wish to perform customized operations on FUSE data. The contents of the IDFs are discussed in Section 4.2.1 and at MAST (see below).

Investigators who wish to re-process their data (mostly "Advanced" users) may retrieve the CalFUSE C source code and all associated calibration files from the CalFUSE homepage; detailed instructions for running the pipeline and descriptions of the calibration files are provided there as well.
