Besides the standard search and file upload interfaces, K2 data may be retrieved in a
number of ways, as described below. Most of these options are possible
because almost all of the K2 data (catalogs, light curves, Full Frame Images,
Target Pixel Files, Raw Cadence data, etc.)
are stored online in publicly accessible directories.
FTP and HTTPS
Individual K2 data files and catalogs may be downloaded via FTP or through your browser (HTTPS).
For FTP, connect to archive.stsci.edu anonymously and cd to pub/k2;
you will see the available directories using ls. For HTTPS, just go to
https://archive.stsci.edu/pub/k2/.
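For example, an anonymous FTP session might look like the following (a minimal sketch;
the cd target and get argument are illustrative, so use ls to see what is actually there):

ftp archive.stsci.edu
Name: anonymous
Password: your@email.address
ftp> cd pub/k2
ftp> ls
ftp> binary
ftp> cd lightcurves
ftp> get <file name>
ftp> quit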
Examples of the browser paths to light curves and target pixel files
are shown below, where XXXXYYZZZ is the EPIC ID and N is the campaign number:

https://archive.stsci.edu/pub/k2/lightcurves/cN/XXXX00000/YY000/
https://archive.stsci.edu/pub/k2/target_pixel_files/cN/XXXX00000/YY000/
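For instance, to fetch one long-cadence light curve directly (a sketch; the file name
shown assumes the usual ktwoXXXXYYZZZ-cNN_llc.fits naming convention, which you should
confirm against the directory listing):

wget https://archive.stsci.edu/pub/k2/lightcurves/c2/205200000/48000/ktwo205248134-c02_llc.fits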
Gzipped tar files containing all the light curves for a given campaign
can also be downloaded. Each file will be no larger than 5 GB.
They can be downloaded through your browser at
https://archive.stsci.edu/pub/k2/lightcurves/tarfiles/
or via anonymous FTP (connect to archive.stsci.edu, cd to /pub/k2/lightcurves/tarfiles).
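For example, one of these tar files could be fetched with a command like the following
(the file name here is hypothetical; list the tarfiles directory first to see the actual names):

wget https://archive.stsci.edu/pub/k2/lightcurves/tarfiles/c13_lightcurves.tgz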
Tar files of gzipped Target Pixel Files can be found at
https://archive.stsci.edu/pub/k2/target_pixel_files/bundles/
The files are grouped according to campaign number and the first 4 digits of the
EPIC IDs, and have a name of the form cN_XXXX00000.tar. For example,
C13 files with EPIC IDs 248245056 and 248245986 would both be stored in the file
c13_248200000.tar. Note these files can be large (i.e., over 50 GB in
some cases).
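For example, the C13 bundle named above could be fetched with the command below
(a sketch that assumes the tar files sit directly under bundles/; browse the URL
above to confirm the actual layout):

wget https://archive.stsci.edu/pub/k2/target_pixel_files/bundles/c13_248200000.tar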
Catalogs
Currently the only downloadable K2 catalog is a CSV version of EPIC.
See the EPIC entry on the search & retrieve
page for more information.
WGET and CURL Scripts
If your system supports wget or curl, there are several other options for retrieving
data. Each of these creates a shell script on your desktop computer,
which you may then run from the command line to copy the requested
files directly to your computer. One advantage of using shell scripts
is that large requests are submitted one file at a time, avoiding memory issues
on the MAST servers.
These scripts are primarily intended for Linux,
Unix, and Mac users, but alternatives may exist for Windows users.
Note that Macs have not shipped with wget installed since OS X 10.9. If you have installed
wget from an external source such as Fink, that version may not work with our systems;
the most current version does seem to work.
We currently offer two methods for generating shell scripts of CURL or WGET
commands. Either method will create a script file on your desktop computer
that can be run to download the found files (e.g., using the sh command).
We also give examples of how to create your own wget commands.
Customized scripts can be created from the
K2 data search page
by choosing one of the output format options:
FILE: WGET LC,
FILE: WGET TPF,
FILE: CURL LC,
or FILE: CURL TPF.
Output formats can also be used in a web service request from your browser.
For example,
https://archive.stsci.edu/k2/data_search/search.php?kic_teff=8040..8050&outputformat=CURL_file&action=Search
will download a script with 289 curl commands
for retrieving light curves for targets with effective temperatures between 8040 and 8050 K.
Note the output format options for the web service are specified slightly differently
(e.g., CURL_TPF_file or WGET_TPF_file).
See https://archive.stsci.edu/vo/mast_services.html
for more examples of MAST services and allowed parameters.
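The same request can also be issued from the command line; for example
(a sketch, assuming curl is installed; the output file name is your choice):

curl -o k2_curl_script.sh "https://archive.stsci.edu/k2/data_search/search.php?kic_teff=8040..8050&outputformat=CURL_file&action=Search"
sh k2_curl_script.sh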
If you know what data you want, a quick way to create shell scripts is
to use one of our
available IDL or Python programs. These programs
accept several parameters for specifying ID numbers, cadence, dates, quarters,
data type, and command type. For example (assuming IDL is installed on your desktop
computer), to return all available long-cadence target pixel files for K2 ID 205248134:

get_k2, '205248134', data_type="target_pixel_file"

A Python example to retrieve TPF files for ID 205248134:

python get_k2.py '205248134' -t target_pixel_file

Type python get_k2.py -h to see all the available Python arguments.
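Either program writes its wget or curl commands to a shell script, which you then run;
a typical session might look like this (the script file name shown is hypothetical,
so check the name the program actually reports):

python get_k2.py '205248134' -t target_pixel_file
sh k2_script.sh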
Here is an example of creating your own wget command where, instead of
retrieving one file per command (as above), you
retrieve an entire directory:
Download a whole directory of data using WGET
wget -q -nH --cut-dirs=6 -r -l0 -c -N -np -R 'index*' -erobots=off https://archive.stsci.edu/pub/k2/target_pixel_files/c2/205200000/48000/
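In this command, -q suppresses progress output; -nH and --cut-dirs=6 drop the host name
and the six leading directory components from the saved paths; -r -l0 recurses with no
depth limit; -c and -N resume partial downloads and skip files that are already up to date;
-np keeps wget from ascending to parent directories; -R 'index*' rejects the auto-generated
index pages; and -erobots=off ignores robots.txt.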
Batch Requests
If you know exactly which datasets you want, another method for retrieving data
(and bypassing the search step)
is to use the dataset retrieval page at
https://archive.stsci.edu/cgi-bin/kepler/dataset_lookup
The list of datasets can be entered with a space or a comma delimiter,
or as an uploaded file,
but each dataset must be specified with both the EPIC ID and the campaign number of the
observation, in a form like KTWO202483641-C02
(or with a cadence type, like KTWO202483641-C02;LC).
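For example, a comma-delimited request for the long-cadence data of two targets might
look like the following (the second ID is borrowed from the wget example above):

KTWO202483641-C02;LC, KTWO205248134-C02;LC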