In part one of this series on noise and computational photography, I wrote about some of the basics of what we can do with software-enhanced imaging. Now that I have a host of new source images of the night sky from the edge of Ireland, it’s time to start processing them and see what we can get in practice.
Foreword
This is somewhat of an interactive post. Working with images like this takes a considerable amount of time because of the processing requirements, so what follows is a mixture of the general concepts I use and the specific processes applied to specific images. In the end, it’s trial and error on a grand scale: make an assumption about the final image, apply the processing, wait a few hours or days, check the results. If this doesn’t produce a workable image, we probably made some poor assumptions and need to go back to the beginning and change them. In the case of the images I took on this expedition, it turned out that subtracting the dark frame helped very little (and hurt some) because there wasn’t a long enough span between the beginning of the image sequences and the end. In other words, the stars didn’t move enough relative to the horizon, and the median stack sucked away the brightness of the stars.
TL;DR Please don’t be surprised by the twists and turns in this article – it’s not a precise science.
RAW adjustments

All of the source images were shot in RAW. JPEG compression introduces a variety of artifacts that get in the way of the editing process:
- Light values are limited to 8 bpp on a nonlinear (gamma-encoded) scale. Ideally we would be performing our calculations in a linear space with some extra headroom to prevent losing information through the math we will apply.
- The image is divided into 8×8 blocks and the block edges can stick out.
- The color information is downsampled, leaving a single color sample for each block of luminosity pixels, while the luminosity itself is stored at full resolution.
These are great qualities for lossy compression, but I’m not trying to save space – I’m trying to preserve the master image data until we get our final product (which will be saved as a JPEG image, by the way).
My preferred editing tool is Lightroom. It makes short work of photo-editing operations. Because I’ll lose some information along the way, I’m going to intentionally mess with these photos at the beginning to bring out the important features.

- Turn up the brightness. I want the dimmest object of interest to be visible. This step will amplify the noise, but our median stacking will take care of this. Boost that baby way up – go ahead, crank it to eleven.
- There’s a dark tower in the foreground. I’m not sure if I’ll recover any detail from it because of how poorly illuminated it is, but I’m going to try anyway by maxing out the Shadows parameter and making that tower face visible. Right now it looks like a flat playground for noise with no distinguishable features. Update: after some testing, there was nothing recoverable in the tower, so I have abandoned this step.
- Apply camera and lens correction. The profiles in Lightroom will correct for several types of distortion.
- Export the images in a format that imagemagick can read. If possible, use a 16 bpp TIFF format. (I haven’t been able to get this working with my Lightroom exports, so it’s 8 bpp for me for now.)
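To verify what bit depth actually made it into the exported files, ImageMagick’s identify will report it (the file name here is just a placeholder):

identify -format "%z-bit\n" export-0001.tif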
Generate the dark frame
Traditional dark frames are taken with the shutter closed and measure camera noise. I’m adopting that term for an image that embodies everything but the stars: a noise-reduced copy of the illumination from light pollution and foreground objects. We’ll subtract this frame from the star images so we can process the two separately.
convert *.tif -evaluate-sequence median darkframe.tif

Subtract the dark frame
This is easy with grayscale images but difficult with color. The reason is that color data cannot be simply subtracted or added; it needs to be computed along a color space that can interpret what the subtraction means. (What does red minus green mean?)
for FILE in Stars*.tif; do composite "$FILE" -compose minus_dst darkframe.tif nodarkframe/`basename "$FILE"`; done
Find the center of rotation
Easy if Polaris is in view, but not impossible if it isn’t. We could use a feature-extraction technique to generate a transformation matrix, or we could use simple geometry to get a quick approximation. I like to start with the latter because it’s usually quick enough and good enough, and it doesn’t require fiddling with SIFT. Illustrator or Inkscape work well for this stage.

- Create a composite image with the maximum pixel value selected.
convert *.tif -evaluate-sequence max startrails.tif
- Open that image in a vector illustrator and find some star trails.
- Make a circle at one end of the trail and duplicate at the other end. The line passing through the intersections of these two circles will also pass through the center of rotation.
- Repeat for a few different star trails at several different points in the image.
- This isn’t an exact science and you will likely not hit the true center because of the approximations. However, you can get close, then use trial and error with a test circle to see how well it fits.
- When satisfied, find which pixel sits at the center of rotation.
Our approximation got fairly close – I’m calling it at x, y = (1017, 1378)
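If you’d rather let the shell do that construction, here’s a small sketch (a hypothetical helper, not part of my workflow above): give it three points read off a single trail in startrails.tif and it computes their circumcenter, which is the same center of rotation the circle trick finds.

#!/bin/bash
# findcenter.sh – hypothetical helper: circumcenter of three points on one trail
# Usage: ./findcenter.sh AX AY BX BY CX CY   (pick points spread along the arc)
AX=$1; AY=$2; BX=$3; BY=$4; CX=$5; CY=$6
D=`echo "2*($AX*($BY-$CY) + $BX*($CY-$AY) + $CX*($AY-$BY))" | bc -l`
UX=`echo "(($AX^2+$AY^2)*($BY-$CY) + ($BX^2+$BY^2)*($CY-$AY) + ($CX^2+$CY^2)*($AY-$BY)) / $D" | bc -l`
UY=`echo "(($AX^2+$AY^2)*($CX-$BX) + ($BX^2+$BY^2)*($AX-$CX) + ($CX^2+$CY^2)*($BX-$AX)) / $D" | bc -l`
echo "estimated center of rotation: $UX, $UY"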
Rotate the stars
The earth covers 360º in 24 hours. That’s 0.0041667º per second. Your camera hopefully stored the time it took each picture in the EXIF data. We will extract that value to determine the elapsed time from one photo to the next and use it to calculate the angular rotation. A quick trip back to ImageMagick will let us rotate and overlay the stars.
#!/bin/bash
# The epoch time of the center image
# I'm going to rotate around that one because it makes them all fit the best
START=90125.6
CX="1017"
CY="1378"
for FILE in images*.jpg; do
    TIME=`exiftool $FILE | grep "Create Date" | grep "\." | awk -F ":" '{print $4,$5,$6,$7}' | awk '{print ($1-14)*24*3600+$2*3600+$3*60+$4}'`
    DELTA=`echo "($TIME-$START)*0.0041667" | bc -l | awk '{printf "%f", $0}'`
    convert $FILE -distort SRT "$CX,$CY $DELTA" rotated/$FILE
done
That looks more complicated than it really is. The TIME line simply extracts the epoch time in seconds for when the image was taken; you’ll have to explore the EXIF data from your pictures to figure out the best way to get this. The DELTA line calculates the angular displacement from the current image to the chosen anchor image. The convert line does the actual transformation through ImageMagick.
If the geometry doesn’t work that well
We can fall back on feature-extraction algorithms to align our images automatically. Note, however, that if you attempt to push a few hundred images into hugin or ICE or even Photoshop, you are likely to kill them. They are dumb and will attempt to align all of the images against each other. You are smart, however, and know more about the images than the computer does.
We’ll actually save plenty of time by taking the long route around the alignment process: choose the anchor image and align everything else against it one at a time.
#!/bin/bash
ALIGN="/path/to/hugin/align_image_stack"
COUNT="0"
for FILE in images*.jpg; do
    # image-20 is the one I have chosen as my anchor
    $ALIGN -a aligned_ image-20.jpg $FILE
    rm aligned_0000.tif
    mv aligned_0001.tif aligned/$FILE.tif
done
This handles hugin's output and gives us really well-aligned images.

It’s actually hard to imagine more ideal source images for hugin to align, because the stars are unmistakable features. Even the large dark regions don’t throw it off, thanks to the contrast between the stars and the darkness.
Build the star image
Now the exciting part! We have to decide on a tradeoff between the brightness of dim objects and noise reduction. It took me several attempts before I found something I liked. Luckily, with the help of a bash script, I had little to lose in taking another guess.
A little theory
Let’s consider the object behind one of the pixels – a distant galaxy, for example. If that galaxy sends out enough light, it will fall within the range that our camera picks up. Suppose that every one hundred collected photons translates into a single light level in the sensor; we will consider three different scenarios that could result in our image.
- A galaxy sends us a million photons during the exposure – we get a bright pixel at a value of 10,000, or 61% brightness on a 14-bit sensor (16,384 levels). This pixel is full of information that jumps over the noise and we can easily work with it.
- A galaxy sends us five thousand photons during the exposure – we get a dim pixel at a value of 50, or 0.3%. The light is still there in the data, but it’s probably below the brightness that the noise introduces. We call this being under the noise floor. It will be much harder to distinguish this light from noise, so we will need some statistics to make it clearer.
- A galaxy sends us one thousand photons during the exposure – we have reached the edge and can’t count on getting any light value from the sensor. However, we still have one hope: statistics can recover that pixel. The galaxy will rarely send exactly one thousand photons; it will hover around that value in a roughly normal distribution. Noise will completely drown out the galaxy’s light, but the noise is randomly distributed around the true value. This means that the median of an image stack will tend to stabilize at that galaxy’s light value and eliminate the noise. This is also true for the brighter but still dim pixel we looked at in the previous bullet point, but this is the cooler situation.
As we focus on the third bullet point above, we can start to realize the magic of what image stacks can bring us. While a simple median stack can clarify the brighter stars and objects, if we add together the light values from different exposures, we can pull out information that no single image contains.
If that really dim galaxy only sends more than a thousand photons 50% of the time, then on average, we would expect it to be there at least once for every two exposures, like a coin toss. If we add a hundred exposures, we would expect a final value to contain fifty thousand photons, back at a usable brightness. The noise can get amplified too, but it will also tend to cancel itself out as we add.
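To make the statistics concrete, here is a toy simulation of my own (awk just for convenience – it has nothing to do with the actual pipeline, and it ignores quantization and clipping): a source worth about ten sensor levels is buried under zero-mean noise several times its size. Any single reading is useless, but the sum of a hundred readings lands near one hundred times the signal, because the noise mostly cancels out.

awk 'BEGIN {
    srand()
    signal = 10; n = 100; sum = 0
    for (i = 0; i < n; i++) {
        noise = (rand() - 0.5) * 60   # zero-mean noise, roughly +/- 30 levels
        reading = signal + noise
        if (i == 0) printf "a single exposure reads: %.1f\n", reading
        sum += reading
    }
    # the noise sum grows like sqrt(n) while the signal sum grows like n
    printf "sum of %d exposures: %.1f (signal alone would be %d)\n", n, sum, signal * n
}'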
Back to the real world
Where’s the tradeoff? If we add all of our images together, the bright areas will blow out and the result will still be noisy. So we have to guess how bright we want our final image, add together as many exposures as it takes to get there, then take the median of those groups. I played with groups of sixteen, eight, and four exposures added together and settled on four.
#!/bin/bash
COUNT="0"
for FILE in images*.jpg; do
    GROUP=`echo "$COUNT % 4" | bc`
    if [ ! -e groups/group-$GROUP.jpg ]
    then
        cp blank.jpg groups/group-$GROUP.jpg
    fi
    convert groups/group-$GROUP.jpg $FILE -evaluate-sequence add groups/group-$GROUP.jpg
    COUNT=`echo "$COUNT + 1" | bc`
done
The final step is to simply take the median of the resultant groups just as we did earlier with the individual images and then edit the final product in Lightroom to taste. Purdy, ain’t it?
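For reference, that last median is the same one-liner used for the dark frame, just pointed at the group files the script above produces:

convert groups/group-*.jpg -evaluate-sequence median final.tif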

Freebies
Along the way we generate lots of intermediate images, and it can be fun to sequence them into a film – just an ffmpeg away.
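Something along these lines does the trick – the frame rate, input pattern, and output name here are only examples:

ffmpeg -framerate 24 -pattern_type glob -i 'rotated/*.jpg' -c:v libx264 -pix_fmt yuv420p startrails.mp4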
That’s complicated! How many images did you finally work with, and how long were you standing in the freezing cold shooting them?
In the end I had 276 source images to work with across five different shoots. As a result of my trials, I now wish I had shot fewer scenes and taken more time on a single one. It was difficult, though, given how late the stars came out (and the humidity made the horizon hard to see).
Unfortunately, the Milky Way didn’t start to rise above the horizon until around 1:00 am or so and would have been ideal between 3:00 am and 4:00 am. I was unwilling to wait another few hours for it and the clouds set in before that time anyway. All in all, I headed out around 7:00 pm or 8:00 pm I think and came back at 1:30 am. The cold set in as it grew dark, but the wind was there at the lighthouse the whole time.
My next shot will probably be outside of Tucson with a dark and dry sky and I’ll probably “shoot” for around three to six hours with a much lower ISO setting – can’t wait! Now if only I could do remote-control shooting with mirror lockup…