monochrome. This means that we now have a visual representation of data in the non-visible parts of the spectrum. Ultraviolet is particularly important as it comes from the hot regions where new stars are being created, and so represents new light. Infrared, at the other end of the scale, represents much older light and is associated with dying stars.
Being able to study this data in addition to the visible
light is important to gain a better understanding of what
is happening in the universe.
This then raises the issue of how to represent light that would normally be invisible to the human eye. It was solved by the creation of the “Hubble Palette”, where the invisible light is mapped to a visible primary color: the ultraviolet becomes the blue component, the infrared the red component, and so on. The scheme has since been taken up by amateur astronomers, who map narrowband images to color components, thereby creating false color Hubble Palette images.
With the theory out of the way, we now need some good, clean images with minimal noise and good light data. The light being collected is often very faint and almost indistinguishable from the dark background, so a method of capturing images that increases the light data whilst minimising the noise is employed.
Capturing the light is very similar to the way astrophotographers capture images on the ground: the telescope takes a number of images, which are then stacked together. One of the reasons this has to be done on the ground is to remove artefacts such as plane trails and satellite trails. Up in orbit, the Hubble telescope flies higher than any plane, so why would this stacking need to take place? There may be no planes to contend with, but there are still satellites flying in higher orbits, as well as cosmic rays, which will be captured by the camera.
Photoshop layers palette representing separate image layers for each filter dataset as the first set of images, as well as adjustment layers to change the brightness profile for each layer and apply hue to each filter layer. Additional curves adjustments apply to the composited image.
Credit: STScI, OPO, Zolt Levay
Stacking images has two effects. Firstly, it removes cosmic rays, satellite trails and other transient, unwanted data. This is done in a software application that compares the images pixel by pixel: if a pixel is bright in one frame but not in the others, it is likely to be an artefact that needs to be removed. Secondly, the more images that are combined, the more the data signal is reinforced whilst the random background noise averages out. The result of this process is an image that is clean, with good light data and lower noise. The process has to be repeated for each different wavelength (filter) that will be incorporated into the final image.
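The per-pixel comparison and averaging described above can be sketched in a few lines of NumPy. This is a simplified illustration of the idea, not the actual Hubble pipeline: outlier pixels (cosmic ray hits, trails) are rejected against the per-pixel median, and the survivors are averaged to boost the signal-to-noise ratio.

```python
import numpy as np

def sigma_clip_stack(frames, sigma=3.0):
    """Combine aligned exposures, rejecting transient outliers
    (cosmic rays, satellite trails) pixel by pixel."""
    cube = np.stack(frames).astype(float)      # shape (n_frames, h, w)
    med = np.median(cube, axis=0)
    std = cube.std(axis=0)
    # Mask pixels that deviate strongly from the per-pixel median
    mask = np.abs(cube - med) > sigma * std
    clipped = np.ma.masked_array(cube, mask)
    # Averaging the remaining frames reinforces the signal
    # while the random background noise averages out
    return clipped.mean(axis=0).filled(med)
```

With ten identical frames and a single simulated cosmic-ray hit in one of them, the hit is rejected and the stacked pixel keeps its true value.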
The stacked images are then ready for processing into the final image. All the components must be of the same size and orientation, with all the stars lining up as the images are layered on top of each other. This must be done prior to processing, as the images must be in perfect alignment for the color components to be merged into the final image.
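In practice this registration is done with dedicated alignment tools; as a rough sketch of the underlying idea (the function names here are my own, and only whole-pixel translation is handled), the shift between two frames can be estimated with an FFT cross-correlation, with the stars acting as the matching features:

```python
import numpy as np

def register_shift(ref, img):
    """Estimate the integer (dy, dx) translation that maps
    `img` onto `ref` via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    h, w = ref.shape
    # Wrap shifts into the signed range [-N/2, N/2)
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def align(ref, img):
    """Shift `img` so its stars line up with `ref`."""
    dy, dx = register_shift(ref, img)
    return np.roll(img, (dy, dx), axis=(0, 1))
```

Real registration tools also handle rotation, scale and sub-pixel shifts, but the principle of matching the frames before combining them is the same.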
Initial color composite from HST WFC3 images of Stephan’s Quintet (left) rendered
in hues assigned to datasets from several separate filters. The same image is adjusted (right) to improve the contrast, tonal range, and color.
Credit: STScI, OPO, Zolt Levay
Once aligned, the images are imported into a graphics processing package such as Adobe Photoshop and assigned layers within a single image. You can think of this process as printing three different images on transparent tracing paper, stacking them, and shining a light behind them to project the combined image. At this point each layer is associated with a color, enabling the combined image to be rendered as a full color image. The image is then modified with various adjustment tools to lighten and increase contrast, both on the individual layers and on the image overall. This ultimately produces the final image.
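The tracing-paper analogy corresponds to Photoshop's "screen" blend mode, where light passes through the stacked transparencies. A minimal sketch in NumPy (the helper names and tint values are illustrative assumptions, not part of any real pipeline):

```python
import numpy as np

def tint(mono, rgb):
    """Color a grayscale layer (values in [0, 1]) with an RGB hue."""
    return mono[..., None] * np.asarray(rgb, dtype=float)

def screen(*layers):
    """'Screen' blend: like shining light through stacked
    transparencies; the result is never darker than any layer."""
    out = np.ones_like(layers[0], dtype=float)
    for layer in layers:
        out *= 1.0 - layer
    return 1.0 - out

# e.g. composite = screen(tint(red_frame, (1, 0, 0)),
#                         tint(green_frame, (0, 1, 0)),
#                         tint(blue_frame, (0, 0, 1)))
```

With pure primary tints, each grayscale layer simply fills its own channel of the composite, which is exactly the layered-filter picture described above.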
If the layers that were combined were taken with red, green, and blue filters, then the final image will be a true, lifelike color image. If, on the other hand, the layers represent narrowband image data, then the color mapping will produce a false color image. This is where the famous Hubble Palette comes from. The Hubble Palette normally has the Hydrogen Alpha data mapped to green, the Sulphur II data mapped to red, and the Oxygen III data assigned to blue. This color mapping produces the dramatic false color images that we are all used to seeing from the Hubble Telescope.
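That narrowband mapping can be written out as a short NumPy sketch. This is a deliberately simplified version of the idea: real workflows apply careful non-linear stretches and curves to each layer, whereas here each channel just gets a linear stretch to [0, 1].

```python
import numpy as np

def hubble_palette(sii, ha, oiii):
    """Assemble an RGB image from narrowband frames using the
    Hubble (SHO) palette: SII -> red, H-alpha -> green, OIII -> blue."""
    def stretch(ch):
        # Simple linear stretch to [0, 1]; real workflows use curves
        ch = np.asarray(ch, dtype=float)
        lo, hi = ch.min(), ch.max()
        return (ch - lo) / (hi - lo) if hi > lo else np.zeros_like(ch)
    return np.dstack([stretch(sii), stretch(ha), stretch(oiii)])
```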
This is a rather simplistic explanation of the process, and there are many other steps that are applied to the data to produce the final images. The most interesting thought, though, is that processing data from the Hubble Telescope is very similar to what amateur astronomers do with Earth-based telescopes. NASA makes the data from the Hubble Telescope available to the public, and it is possible to create your own Hubble images by combining and processing it. This will be the topic of a future article.