Tag Archives: processing

Why Not Shoot an M-21 While I’m Here

I was at the Museum of Flight for the IPMS exhibit and, while I was visiting, I figured it would be churlish not to take a picture of the M-21 that dominates the main hall.  It is actually a bit difficult to photograph: there is a lot of contrast with the background, and the hall is always busy, so the scene is a bit cluttered.  I knew it wasn’t going to be a great shot but decided to crop tighter on the airframe, shoot bracketed exposures and maybe go with an HDR process.  It isn’t great but it came out better than I had expected.

Negative Lab Pro

In previous posts I have described my efforts at scanning old negatives using a digital camera, macro lens and a light table.  I have had mixed success with the process for converting the negatives into positives, with some films responding better than others.  I was okay with the output but thought things could be better.  A YouTube video about scanning negatives with a digital camera showed up on my page, and I decided to watch to see if they did anything differently from me.  The technique for shooting the negatives was similar enough, but the video introduced me to a Lightroom plugin called Negative Lab Pro.

I downloaded a trial of the software and gave it a go.  I was sufficiently impressed with the output that I stumped up the cash for the full version.  It isn’t cheap but, given that I can now use it on several thousand images, I figured it was worth the investment.  The plugin requires a small amount of effort.  I revert the images back to a normal scan without any of my previous edits and conversions.  The first thing to do after that is to take a white balance reading from some of the visible edge of the film to neutralize any color shift.  Then you crop in on the image.  Apparently, it is important to avoid getting any unexposed edges in the shot as this messes with the algorithm.

Then you open up the plugin’s dialog box.  It analyses the image and does a conversion.  You then get some basic sliders to tweak settings such as exposure and color balance.  There are some auto-setting check boxes but I haven’t found them too helpful so far.  Then you click okay and the image is ready for further editing in Lightroom.  You can also do batch conversions of images if you want, although I think it is probably better to focus on individual processing.  I have been playing with this on a range of images so far and I like the results.  My old negatives are not that great and this is not going to suddenly make them amazing, but I am impressed by how much more I can get out of some of the scans using this software.
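The core of the conversion can be sketched in a few lines.  This is only an illustrative NumPy sketch, assuming a linear RGB scan, and not Negative Lab Pro’s actual algorithm; the negative_to_positive helper, the base-color white balance and the normalization step are my own simplifications:

```python
import numpy as np

def negative_to_positive(neg, base_color):
    """Rough conversion of a linear RGB scan of a color negative to a positive.

    neg:        float array (H, W, 3), linear scan values in [0, 1]
    base_color: RGB sample of the clear film edge (the orange mask),
                like the white balance reading taken before cropping
    """
    base = np.asarray(base_color, dtype=float)
    # Neutralize the orange mask by white-balancing against the film base
    balanced = neg / base
    # Invert: dense (dark-scanning) negative areas become bright positive areas
    positive = 1.0 / np.clip(balanced, 1e-6, None)
    # Normalize each channel to [0, 1] for display
    lo = positive.min(axis=(0, 1), keepdims=True)
    hi = positive.max(axis=(0, 1), keepdims=True)
    return (positive - lo) / (hi - lo)

# A darker negative pixel should come out brighter than a lighter one
neg = np.array([[[0.2, 0.2, 0.2], [0.8, 0.8, 0.8]]])
pos = negative_to_positive(neg, [1.0, 1.0, 1.0])
```

The per-channel normalization is the crude stand-in for everything the plugin’s sliders do afterwards.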

Learning a Better Way to Blend in Photoshop

I occasionally use the Statistics function in Photoshop to blend multiple images in order to get rid of distractions I don’t want, like people or vehicles.  Up until now, this has been a real pain to do.  I would identify the images in Lightroom but would have to open Photoshop, go into the Statistics function, use the browse function in there to select the images, and then it would run everything in one go.  This was not a convenient way to work, and the output image then needed to be manually added back to Lightroom.

It turns out that there is a better way.  This may have been in Photoshop all along and I never knew, or it could be a recent addition.  Either way, it is there and I shall now use it for future projects.  I have even created a Photoshop action to cover the process and assigned a function key so it will now do the heavy lifting without my intervention.  It all starts out in Lightroom.  Select all the images that will be used for the blend.  Then use Photo>Edit In>Open as Layers in Photoshop and a new document will open in Photoshop with all the shots as layers.

If everything has been shot on a tripod, things will be properly aligned by default but I often do these things on the spur of the moment so they are hand held.  Consequently, while my efforts to keep pointing in the same direction are not bad, the first task is to select all layers and use Auto-Align Layers to tidy things up.  Next, go into the Layer menu and, under Smart Objects, select Convert to Smart Object.  This may take a little while.

Next step is to go back into Layer>Smart Objects>Stack Mode.  This brings up the same options as you get through the Statistics function.  Select Mean and send it on its way and, depending on the number of shots taken and how much clear space there is in enough of them, you end up with a clean shot.  Usually I find that I haven’t got enough shots of the right type to get everything to disappear, so some ghostly elements may remain, but they are certainly less distracting than the figures in the original shots.  I have no idea what the other modes will achieve and the descriptions Adobe provides in their help files are so obscure as to be virtually useless.  Instead I shall have to experiment with them to see what happens.  Thankfully, now I have this new method, I can undo the last step easily to try each option, which would not have been possible using the Statistics dialog.  Another win!
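What the Mean mode does is easy to sketch, and the sketch also shows why ghosts remain.  This is an illustrative NumPy version, not Photoshop’s implementation:

```python
import numpy as np

def mean_stack(frames):
    """Per-pixel average of an aligned stack, like the 'Mean' stack mode."""
    return np.stack([np.asarray(f, dtype=float) for f in frames]).mean(axis=0)

# A static background pixel (0.5) with a passer-by (1.0) in one of eight
# frames keeps 1/8 of the passer-by's value: a faint ghost, not a clean plate.
frames = [np.full((2, 2), 0.5) for _ in range(7)] + [np.full((2, 2), 1.0)]
ghosted = mean_stack(frames)  # every pixel is 0.5625 rather than 0.5
```

The more frames in which a spot is clear, the closer the average gets to the true background, which is why ghosts fade but rarely vanish with Mean.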

Blending to Remove Traffic

During a previous visit to Vancouver, I experimented with blending images of the same scene to remove objects I didn’t want included.  When photographing the bridge at Deception Pass, I decided to have another go at this.  The bridge was very interesting but I found the traffic on the bridge to be a distraction.  Looking at some of the shots afterwards, it wasn’t as bad as I thought at the time but, even so, I decided to try processing the shots.

This was the same approach as before.  Load all of the images into Photoshop using the Statistics function and use Median to average things out and, hopefully, remove the items that I didn’t want to appear.  It seemed to work pretty well.  The top shot shows the output, while below it one of the input shots is cropped in alongside the final result to show what was removed.
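Median is worth sketching separately because it behaves differently from a straight average: anything present in fewer than half of the frames is rejected outright rather than merely faded.  Again, this is an illustration in NumPy, not Photoshop’s code:

```python
import numpy as np

def median_stack(frames):
    """Per-pixel median of an aligned stack, like the 'Median' stack mode."""
    return np.median(
        np.stack([np.asarray(f, dtype=float) for f in frames]), axis=0
    )

# A car crossing the bridge in one of eight frames leaves no trace:
# the median of (0.5 x7, 1.0 x1) is simply 0.5.
frames = [np.full((2, 2), 0.5) for _ in range(7)] + [np.full((2, 2), 1.0)]
cleaned = median_stack(frames)
```

This outlier rejection is why Median works so well for moving traffic, as long as each patch of background is visible in most of the frames.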

My First Attempt at Focus Stacking

I first read about focus stacking a long time ago and I have been meaning to try it for ages.  The premise is to take a series of shots with the focus set in different positions throughout the scene and then to use software to blend the images together to create one image with focus all the way through the shot.  This seemed like a simple thing to have a try with but I never got around to having a go.  Then I came across a situation that looked like it might be a good example to try.

I was visiting a model show at the Museum of Flight.  I was taking a few photos of some of the more expertly crafted models on display.  I was shooting with a longer lens and using a relatively small aperture to try and minimize the shallow depth of field that you get when shooting small objects close up.  I decided to shoot a model of a Fairey Gannet and the shallow depth of field triggered something in the deep recesses of my brain about focus stacking.  Of course, I had not planned for this so no tripod and just an effort to get focus on different parts of the model without moving the camera too much.

I took the shots and got on with my visit.  When I got home, I almost forgot about the stacking experiment but, fortunately, I did remember.  I exported the images to Photoshop as layers of a single document.  Then, since they were hand held, I ran Auto-Align to get them in place.  After that, I selected Auto-Blend.  It seemed to realize that they were a blend stack rather than a panorama – quite clever – and the software quickly did its thing.  Despite not taking many shots and doing it all hand held, the result came out pretty well.  The top shot is the finished product while the lower two show the extremes of the focus range of the original shots.  If I had managed a shot focused right on the back of the fin, the result might have been a bit better still.
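The blending step can be sketched in a simple form: pick, for each pixel, the frame where local contrast is highest.  This is only a toy grayscale version in NumPy, assuming the frames are already aligned; Photoshop’s Auto-Blend builds far more refined masks, and the Laplacian sharpness measure here is a common stand-in, not Adobe’s method:

```python
import numpy as np

def laplacian_sharpness(img):
    """Local sharpness as the absolute Laplacian response per pixel.
    np.roll wraps at the edges, which is fine for a sketch."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return np.abs(lap)

def focus_stack(frames):
    """Merge aligned grayscale frames, taking each pixel from the frame
    in which it is sharpest."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    sharpness = np.stack([laplacian_sharpness(f) for f in frames])
    best = sharpness.argmax(axis=0)      # (H, W) index of sharpest frame
    stack = np.stack(frames)             # (N, H, W)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# A frame with detail beats a featureless frame at the detailed pixel
a = np.zeros((5, 5)); a[2, 2] = 1.0      # "in focus" detail here
b = np.full((5, 5), 0.5)                 # defocused, featureless frame
out = focus_stack([a, b])
```

Real tools also smooth the per-pixel decision so seams don’t show, which this sketch skips entirely.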

My Approach to Shooting and Processing on Crappy Weather Days

This is the finished image. This is pretty much what it looked like to the naked eye (through the viewfinder) when I took the shot given how dark the sky was.

A rare arrival was due on a day that was not good from a weather perspective.  It was dull and rainy and so not what you would hope for.  Conditions like this mean I try to exploit some of the features of the camera and the processing options available.  First, how to set up the camera?  With the light being bad and variable, I went to a pretty high ISO level.  I shot in aperture priority mode and added a lot of exposure compensation.

In my experience, the metering is pretty good when shooting against the sky in clear weather but, when there is a lot of cloud, the camera treats the clouds as too bright and underexposes the subject too much.  I use a lot of exposure compensation in this case, with a setting of +2.0 being used on this day.  The reason I do this is that, the metering question aside, there is a lot more information available at the lighter end of the exposure curve.  Shooting in RAW gives you options.
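To see why the lighter end of the curve holds more information, consider a simplified model of a linear raw file.  Assuming a 14-bit linear sensor (real cameras vary in bit depth and aren’t perfectly linear, so treat the numbers as illustrative), each stop down the scale has only half as many distinct levels to describe it:

```python
# A 14-bit linear raw file has 2**14 = 16384 levels.  Each stop down
# halves the signal, so each stop down gets half as many levels.
total_levels = 2 ** 14
levels_per_stop = [total_levels // 2 ** (s + 1) for s in range(6)]
# brightest stop: 8192 levels; six stops down: only 256
```

The brightest stop alone carries as many levels as all the stops below it combined, which is why pushing the exposure up and pulling it back down in processing preserves more tonal detail than brightening shadows.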

This is how the camera recorded the image. This is the in camera JPEG that I extracted from the RAW file using Instant JPEG from RAW.

If you had looked at the aircraft at the time, you would have seen a dark and menacing sky but plenty of detail on the plane.  The camera does not see it that way for the original shot: the aircraft would be very dark.  When processing, this dark area would give you something to work with, but the variation in the data would be more limited.  Shoot overexposed and you get more to work with.

This approach will only work well if you are shooting RAW.  If you are using JPEG, too much of the usable data will be discarded during the processing in the camera.  To show you what I mean, here are two images, both from the same shot.  One is the RAW file as it showed up when imported into Lightroom and the other is the embedded JPEG that you can extract from the RAW file, which can be seen when the file is first imported, before the rendering is undertaken.  As you can see, the JPEG is overexposed but the RAW rendering seems even more so.

There is way more data in the RAW file though.  Immediately, as I bring the exposure slider back down, the clouds go from being white to quite dark – just as they appeared on the day.  Meanwhile, the fuselage of the aircraft has a lot of the data intact and maintains a lot of the brightness that you could see at the time.  Very little needs to be done with the blacks and they are almost in the right spot by the time the exposure is good for the clouds.  The fuselage might be a bit too dark though.  A small tweak of the blacks and a little boost in the shadows to compensate for too much darkening with the exposure slider and suddenly the shot is looking a lot more like it did when I saw it develop.

My RAW processing baseline always results in a slightly more overexposed shot than the embedded JPEG shows. When you first open the image, the embedded JPEG seen in the previous shot initially shows up and then Lightroom renders the RAW file. This was the initial RAW rendering prior to any adjustments.

One advantage of shooting on such a crummy day is that the sky is a giant softbox – in this case a very soft one!  The result is that the light is a lot more even than on a sunny day.  The darker look can actually make the colors look a bit more intense than if they were losing out to the whites when the sun is right on them.  While there was only one plane I was specifically there for, playing around with these other shots and working on the technique was a nice extra benefit.

Sensor De-mosaicing and Southwest Colors

I have been pondering the way in which the method by which digital images are captured is affected by what is being photographed.  As part of my workflow, I render 1:1 versions of the images and then quickly weed out the ones that are not sharp.  This requires being able to see some detail in the shot that shows whether the sharpness is there.  I have found that, if a Southwest Airlines 737 is in the new color scheme, something odd happens.

Digital image sensors actually capture only one of three colors at each location.  Each pixel is sensitive to a certain color – either red, green or blue – courtesy of a filter.  The colors are arranged on the sensor in a pattern called a Bayer pattern.  The camera then carries out calculations based on what the pixels around each location see to work out what the actual color should be at each location.  This process is known as de-mosaicing.  It can be a simple averaging but more complex calculations have been developed to avoid strange artifacts.
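A naive version of the de-mosaicing step can be sketched as follows.  This is plain bilinear interpolation over an RGGB Bayer pattern, purely for illustration; real raw converters use far more sophisticated, edge-aware algorithms, which is exactly where color-scheme-specific artifacts could creep in:

```python
import numpy as np

def demosaic_bilinear(mosaic):
    """Naive bilinear de-mosaic of an RGGB Bayer mosaic (H, W) -> (H, W, 3).

    Each output channel is the average of the pixels in the 3x3
    neighborhood that actually carry that channel's filter color.
    Edges wrap via np.roll, which is fine for a sketch.
    """
    h, w = mosaic.shape
    ys, xs = np.indices((h, w))
    masks = {
        0: (ys % 2 == 0) & (xs % 2 == 0),   # red sites
        1: (ys % 2) != (xs % 2),            # green sites
        2: (ys % 2 == 1) & (xs % 2 == 1),   # blue sites
    }
    out = np.zeros((h, w, 3))
    for c, mask in masks.items():
        vals = np.where(mask, mosaic, 0.0)
        num = np.zeros((h, w))
        den = np.zeros((h, w))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                num += np.roll(np.roll(vals, dy, 0), dx, 1)
                den += np.roll(np.roll(mask.astype(float), dy, 0), dx, 1)
        out[:, :, c] = num / np.maximum(den, 1)
    return out

# A uniform gray scene should come back uniform gray in all three channels
flat = demosaic_bilinear(np.full((4, 4), 0.5))
```

Fine, high-contrast detail like registration digits sits right at the resolution limit of this interpolation, which is where blotchy artifacts tend to appear.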

When I photograph the new Southwest scheme, something strange occurs around the N number on the rear fuselage.  It looks very blotchy, even when every other part of the airframe looks sharp and clear.  I am wondering whether the color of the airframe and the color of the registration digits are in some way confusing the de-mosaicing algorithm, resulting in odd elements in the processed image that weren’t there in real life.  If any of you have photographed this color scheme, have a look at whether you got something similar and, whether you did or didn’t, let me know what camera you were shooting with so we can see if it is manufacturer specific or not.

Sacramento Roundhouse

One end of the railroad museum in Sacramento is a roundhouse. It is still accessible from the line outside, and I was there for a modern locomotive that was being unveiled. Access comes via a turntable which sits right next to the path along the river. I figured I would put together a panorama of the scene. However, I only had my phone (albeit one able to shoot raw). I had never tried shooting a pano sequence with it before, having only used its internal pano function.

I wasn’t controlling the exposure (although there is a manual function in the app I use) but I had noticed that the Lightroom pano function seemed quite adept at dealing with small exposure variation. I took the sequence and there was not a big difference across them. When I got home, I added them to Lightroom and had a go at the stitching function. It worked better than I had expected. Some small distortions were there but it actually was rather good. I had not been happy about the reduced size of the pano function of the phone so this has provided a better option to use in the future.

Blue Angels at Oceana (And High ISO)

I have only been to the Oceana show once.  I headed down there with my friends Ben and Simon.  We weren’t terribly lucky with the weather.  There was flying during the show but things were overcast and deteriorated as the show went on.  The finale of the show was, naturally for a big Navy base, the Blue Angels.  I was shooting with a 1D Mk IIN in those days and that was a camera that was not happy at high ISO settings.

The problem was, the light was not good and the ISO needed to be cranked up a bit.  Amusingly, if you were shooting today, the ISO levels would not cause any concern.  Current cameras can shoot, without troubling noise, at ISO levels that would have been unthinkable back then.  However, I did learn something very important from this shoot.  The shot above is one that I got as one of the solo jets got airborne.  I used it as a test for processing.

I processed two versions of the image, one with a lot of noise reduction dialed in and one with everything zeroed out.  I then combined them in one Photoshop image and used a layer mask to show one version in one half of the image and the other for the second half.  When I viewed the final image on the screen, the noise in one half was awfully apparent.  It was a clear problem.  However, I then printed the image.  When I did so, things were very different.  If you looked closely, you could see a little difference.  However, when you looked from normal viewing distances, there was no obvious difference between the two.
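The layer-mask comparison can be mimicked in code.  This sketch simply takes the left half of one rendering and the right half of the other; the function name and the hard vertical split are my own illustration, not anything from Photoshop:

```python
import numpy as np

def split_comparison(left_version, right_version):
    """A/B test image: left half from one rendering, right half from the
    other, like a vertical layer mask between two layers in Photoshop."""
    a = np.asarray(left_version, dtype=float)
    b = np.asarray(right_version, dtype=float)
    out = b.copy()
    half = out.shape[1] // 2
    out[:, :half] = a[:, :half]
    return out

# Tiny stand-ins: a noise-reduced version (zeros) vs an untouched one (ones)
nr = np.zeros((2, 4))
raw = np.ones((2, 4))
combined = split_comparison(nr, raw)
```

Viewing the two treatments side by side in one frame is what makes differences pop on screen; the lesson of the print is that they often don’t survive normal viewing distance.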

My takeaway from this is that viewing images on screens has really affected our approach to images.  We get very fixated on the finest detail while the image as a whole is something we forget.  We print less and less these days and the screen is a harsh tool for viewing.

Creating Lens Profiles for Adobe Software

UPDATE:  It turns out, the upload process for the profile sends to an address that doesn’t work.  While I try to fix this, if you want the profiles to use, you can download them by clicking here.

Within Adobe processing software, there is lens correction functionality built into the Lightroom Develop module (or Adobe Camera Raw in Photoshop) that compensates for distortion and vignetting in the lens the image was taken with.  Adobe has created a large number of lens profiles but they never created one for the Canon 500mm in its initial version.  Adobe also has an online tool for sharing profiles but this does not include one for this lens either.  The 600mm had a profile and it was supposedly close so I had been using that for a while.  Recently, though, I was shooting with the 1.4x teleconverter fitted and this introduced some new effects which required some manual tweaking to offset.

I still wasn’t happy with the result so I decided it was time to bite the bullet and create some profiles from scratch.  Adobe has a tool for creating a lens profile.  It involves printing out some grid targets which you then shoot a number of times to cover the whole of the frame.  It then calculates the profile.  I was shooting at both 500mm and 700mm so I needed a few targets.  To make a complete profile it is a good idea to shoot at a variety of focusing distances and with a range of apertures.  The tool comes with many targets.  Some I could print at home but some of the larger ones I got printed at FedEx and mounted on foam core to make them more rigid.  Then it was time to shoot a bunch of very boring shots.

The software is not the most intuitive I have ever worked with, but it eventually became clear what I had to do.  (Why do some manual writers seem like they have never used the process they are writing about?)  I found out how to run the analysis for different charts and distances separately and append the data to the profile as I go.  I did need to quit the program periodically because it would run out of memory, which seems like an odd bug these days.  After much processing, and some dropped frames as a result of poor shooting on my part (even on the tripod I got some blur occasionally at very slow shutter speeds), it got a profile out.  The proof of the pudding is in the eating of course (that is what the actual phrase is, for those of you that never get past the pudding part) so I tried the profile out on some recent shots.  It works!  I was rather delighted.  I may shoot a few more samples in good conditions to finish things off but this was a rather happy outcome.  Once I have tweaked the profiles sufficiently, I shall upload them to Adobe and anyone can use them.
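For a sense of what the distortion part of a lens profile encodes, here is a sketch of correcting simple radial distortion with a Brown-style polynomial model.  The coefficients, the fixed-point inversion and the function name are all illustrative assumptions; Adobe’s profile format models distortion, vignetting and chromatic aberration in considerably more detail than this:

```python
import numpy as np

def undistort_points(pts, k1, k2, center=(0.0, 0.0)):
    """Correct radial distortion using a simple Brown polynomial model.

    pts: (N, 2) normalized coordinates of *distorted* points.
    The forward model is r' = r * (1 + k1*r**2 + k2*r**4); correction
    inverts it by fixed-point iteration, which converges for mild
    distortion like that of a long telephoto.
    """
    pts = np.asarray(pts, dtype=float)
    c = np.asarray(center, dtype=float)
    d = pts - c
    und = d.copy()
    for _ in range(10):  # iterate until distort(und) matches d
        r2 = (und ** 2).sum(axis=1, keepdims=True)
        und = d / (1 + k1 * r2 + k2 * r2 ** 2)
    return und + c

# Round trip: distort a known point, then recover it
true_pt = np.array([[0.3, 0.4]])
k1, k2 = 0.1, 0.01
r2 = float((true_pt ** 2).sum())
distorted = true_pt * (1 + k1 * r2 + k2 * r2 ** 2)
recovered = undistort_points(distorted, k1, k2)
```

Shooting the grid targets at many positions, distances and apertures is essentially how the profile creator estimates coefficients like k1 and k2 (plus vignetting terms) for a given lens and focal length.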