Tag Archives: processing

My First Attempt at Focus Stacking

I first read about focus stacking a long time ago and have been meaning to try it for ages. The premise is to take a series of shots with the focus set at different positions through the scene and then use software to blend the images together, creating one image with focus all the way through the shot. This seemed like a simple thing to experiment with but I never got around to it. Then I came across a situation that looked like a good opportunity to try.

I was visiting a model show at the Museum of Flight and taking a few photos of some of the more expertly crafted models on display. I was shooting with a longer lens and using a relatively small aperture to counteract the shallow depth of field that you get when shooting small objects close up. I decided to shoot a model of a Fairey Gannet and the shallow depth of field triggered something in the deep recesses of my brain about focus stacking. Of course, I had not planned for this, so there was no tripod – just an effort to get focus on different parts of the model without moving the camera too much.

I took the shots and got on with my visit. When I got home, I almost forgot about the stacking experiment but, fortunately, I did remember. I exported the images to Photoshop as layers in a single file. Then, since they were handheld, I ran an Auto-Align to get them in place. After that, I selected Auto-Blend. It seemed to realize that they were a blend stack rather than a panorama – quite clever – and the software quickly did its thing. Despite not taking many shots and doing it all handheld, the result came out pretty well. The top shot is the finished product while the lower two show the extremes of the focus range from the original shots. If I had managed a shot focused right on the back of the fin, the result might have been a bit better still.
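For the curious, the blending step can be approximated outside Photoshop. This is just a minimal sketch of the idea, not what Auto-Blend actually does: score each aligned frame for local sharpness with a Laplacian and keep the sharpest pixel at each position. The filenames are hypothetical and the frames are assumed to be pre-aligned.

```python
import cv2
import numpy as np

def focus_stack(paths):
    """Blend pre-aligned frames, keeping the sharpest pixel at each position."""
    images = [cv2.imread(p) for p in paths]
    sharpness = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))        # local detail strength
        sharpness.append(cv2.GaussianBlur(lap, (9, 9), 0))   # smooth to avoid speckle
    best = np.argmax(np.stack(sharpness), axis=0)            # sharpest frame per pixel
    result = np.zeros_like(images[0])
    for i, img in enumerate(images):
        result[best == i] = img[best == i]
    return result

cv2.imwrite("gannet_stacked.jpg", focus_stack(["shot1.jpg", "shot2.jpg", "shot3.jpg"]))
```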

My Approach to Shooting and Processing on Crappy Weather Days

This is the finished image. This is pretty much what it looked like to the naked eye (through the viewfinder) when I took the shot given how dark the sky was.

A rare arrival was due on a day that was not good from a weather perspective.  It was dull and rainy and so not what you would hope for.  Conditions like this mean I try to exploit some of the features of the camera and the processing options available.  First, how to set up the camera?  With the light being bad and variable, I went to a pretty high ISO level.  I shot in aperture priority mode and added a lot of exposure compensation.

In my experience, the metering is pretty good when shooting against the sky in clear weather but, when there is a lot of cloud, the camera treats the clouds as too bright and underexposes the subject. I use a lot of exposure compensation in this case, with +2.0 being the setting on this day. The reason I do this is that, aside from the metering issue, there is a lot more information available at the lighter end of the exposure curve. Shooting in RAW gives you options.
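The arithmetic behind that claim is worth a moment. RAW data is linear, so each stop down from clipping holds half the tonal levels of the stop above it. A quick sketch, assuming a 14-bit sensor:

```python
# Why the bright end of a linear RAW file carries more information:
# a 14-bit sensor records 2**14 = 16384 levels, and each stop down
# from clipping gets half the levels of the stop above it.
levels = 2 ** 14
for stop in range(1, 7):
    in_stop = levels // (2 ** stop)   # levels available within this stop
    print(f"Stop {stop} below clipping: {in_stop} levels")
# Stop 1: 8192 levels ... Stop 6: 256 levels
```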

This is how the camera recorded the image. This is the in-camera JPEG that I extracted from the RAW file using Instant JPEG from RAW.

If you had looked at the aircraft at the time, you would have seen a dark and menacing sky but plenty of detail on the plane. The camera does not see it that way: in an uncompensated shot, the aircraft would be very dark. When processing, that dark area would give you something to work with, but the variation in the data would be far more limited. Shoot overexposed and you get more to work with.

This approach will only work well if you are shooting RAW. If you are using JPEG, too much of the usable data is discarded during the processing in the camera. To show you what I mean, here are two images, both from the same shot. One is the RAW file as it showed up when imported into Lightroom and the other is the embedded JPEG that you can extract from the RAW file – the preview you see when the file is first imported, before the rendering is undertaken. As you can see, the JPEG is overexposed but the RAW rendering seems even more so.
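If you want to pull that embedded JPEG out yourself, rawpy (a Python wrapper around LibRaw) can do it. A minimal sketch, with a hypothetical filename:

```python
import rawpy

# Extract the camera-generated JPEG preview embedded in a RAW file.
with rawpy.imread("AE7I1234.CR2") as raw:
    thumb = raw.extract_thumb()
if thumb.format == rawpy.ThumbFormat.JPEG:
    with open("embedded_preview.jpg", "wb") as f:
        f.write(thumb.data)   # already JPEG bytes, write them straight out
```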

There is way more data in the RAW file though. Immediately, as I bring the exposure slider back down, the clouds go from being white to quite dark – just as they appeared on the day. Meanwhile, the fuselage of the aircraft has a lot of its data intact and maintains a lot of the brightness that you could see at the time. Very little needs to be done with the blacks; they are almost in the right spot by the time the exposure is good for the clouds. The fuselage might be a bit too dark though. A small tweak of the blacks and a little boost in the shadows, to compensate for too much darkening with the exposure slider, and suddenly the shot looks a lot more like the scene I watched develop.

My RAW processing baseline always results in a slightly more overexposed shot than the embedded JPEG shows. When you first open the image, the embedded preview shown in the previous shot appears first and then Lightroom renders the RAW file. This was the initial RAW rendering prior to any adjustments.

One advantage of shooting on such a crummy day is that the sky is a giant softbox – in this case a very soft one!  The result is that the light is a lot more even than on a sunny day.  The darker look can actually make the colors look a bit more intense than if they were losing out to the whites when the sun is right on them.  While there was only one plane I was specifically there for, playing around with these other shots and working on the technique was a nice extra benefit.

Sensor De-mosaicing and Southwest Colors

I have been pondering how the way digital images are captured is affected by what is being photographed. As part of my workflow, I render 1:1 previews of the images and then quickly weed out the ones that are not sharp. This relies on being able to see some detail in the shot that shows whether the sharpness is there. I have found that, if a Southwest Airlines 737 is in the new color scheme, something odd happens.

Digital image sensors actually capture only one of three colors at each pixel. Each pixel is sensitive to a certain color – either red, green or blue – courtesy of a filter. The colors are arranged on the sensor in a layout called a Bayer pattern. The camera then carries out calculations based on what the surrounding pixels see to work out what the actual color should be at each location. This process is known as de-mosaicing. It can be a simple averaging, but more complex algorithms have been developed to avoid strange artifacts.
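To make the interpolation concrete, here is a toy bilinear de-mosaic for an RGGB layout – the simple averaging mentioned above, not anything a camera maker would actually ship:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic):
    """Naive bilinear de-mosaic of an RGGB Bayer mosaic (2-D float array)."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    # Masks marking which color each photosite actually recorded
    r = np.zeros((h, w)); r[0::2, 0::2] = 1
    b = np.zeros((h, w)); b[1::2, 1::2] = 1
    g = 1 - r - b
    k_rb = np.array([[.25, .5, .25], [.5, 1, .5], [.25, .5, .25]])
    k_g = np.array([[0, .25, 0], [.25, 1, .25], [0, .25, 0]])
    for ch, mask, k in ((0, r, k_rb), (1, g, k_g), (2, b, k_rb)):
        # Weighted average of the recorded neighbors of each color
        rgb[..., ch] = convolve(mosaic * mask, k) / np.maximum(convolve(mask, k), 1e-6)
    return rgb
```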

When I photograph the new Southwest scheme, something strange occurs around the N number on the rear fuselage. It looks very blotchy, even when every other part of the airframe looks sharp and clear. I am wondering whether the color of the airframe and the color of the registration digits are somehow confusing the de-mosaicing algorithm, resulting in odd elements in the processed image that weren’t there in real life. If any of you have photographed this color scheme, let me know whether you saw something similar, and what camera you were shooting with, so we can work out whether it is manufacturer specific.

Sacramento Roundhouse

One end of the railroad museum in Sacramento is a roundhouse. It is still accessible from the line outside, and I was there for a modern locomotive that was being unveiled. Access comes via a turntable which sits right next to the path along the river. I figured I would put together a panorama of the scene. However, I only had my phone (albeit one able to shoot RAW). I had never tried shooting a pano sequence with it before, having only used its internal pano function.

I wasn’t controlling the exposure (although there is a manual function in the app I use) but I had noticed that the Lightroom pano function seemed quite adept at dealing with small exposure variations. I took the sequence and there was not a big difference across the frames. When I got home, I added them to Lightroom and had a go at the stitching function. It worked better than I had expected. There were some small distortions but it was actually rather good. I had not been happy with the reduced size that the phone’s pano function produces, so this gives me a better option for the future.
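Lightroom’s stitcher is its own thing, but the general approach is easy to demonstrate with OpenCV’s Stitcher, which also copes with modest exposure differences between frames. Filenames here are hypothetical:

```python
import cv2

frames = [cv2.imread(f"pano_{i}.jpg") for i in range(1, 6)]
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(frames)   # finds features, warps and blends
if status == cv2.Stitcher_OK:
    cv2.imwrite("roundhouse_pano.jpg", pano)
else:
    print(f"Stitching failed with status {status}")
```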

Blue Angels at Oceana (And High ISO)

I have only been to the Oceana show once.  I headed down there with my friends Ben and Simon.  We weren’t terribly lucky with the weather.  There was flying during the show but things were overcast and deteriorated as the show went on.  The finale of the show was, naturally for a big Navy base, the Blue Angels.  I was shooting with a 1D Mk IIN in those days and that was a camera that was not happy at high ISO settings.

The problem was, the light was not good and the ISO needed to be cranked up a bit. Amusingly, if you were shooting today, those ISO levels would not cause any concern. Current cameras can shoot, without troubling noise, at ISO levels that would have been unthinkable back then. However, I did learn something very important from this shoot. The shot above is one that I got as one of the solo jets got airborne. I used it as a test for processing.

I processed two versions of the image, one with a lot of noise reduction dialed in and one with everything zeroed out. I then combined them in one Photoshop image and used a layer mask to show one version in one half of the image and the other version in the second half. When I viewed the final image on the screen, the noise in one half was painfully apparent. It was a clear problem. However, I then printed the image. When I did so, things were very different. If you looked closely, you could see a little difference, but from normal viewing distances there was no obvious difference between the two.
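That half-and-half test is easy to reproduce without a layer mask if you ever want to try it: load the two processed versions and splice them down the middle. A sketch with hypothetical filenames, assuming both exports have identical pixel dimensions:

```python
import cv2

with_nr = cv2.imread("blue_angel_nr.jpg")      # version with noise reduction
without_nr = cv2.imread("blue_angel_raw.jpg")  # version with everything zeroed
split = without_nr.copy()
half = split.shape[1] // 2
split[:, :half] = with_nr[:, :half]            # left half gets the NR version
cv2.imwrite("nr_comparison.jpg", split)
```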

My takeaway from this is that viewing images on screens has really affected our approach to images. We get very fixated on the finest detail and forget the image as a whole. We print less and less these days and the screen is a harsh tool for viewing.

Creating Lens Profiles for Adobe Software

UPDATE:  It turns out, the upload process for the profile sends to an address that doesn’t work.  While I try to fix this, if you want the profiles to use, you can download them by clicking here.

Within Adobe processing software, there is lens correction functionality built into the Lightroom Develop module (or Adobe Camera Raw in Photoshop) that compensates for the distortion and vignetting of the lens the image was taken with. Adobe has created a large number of lens profiles but they never created one for the Canon 500mm in its initial version. Adobe also has an online tool for sharing profiles but this does not include one for this lens either. The 600mm had a profile and it was supposedly close so I had been using that for a while. Recently, though, I was shooting with the 1.4x teleconverter fitted and this introduced some new effects which required manual tweaking to offset.
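For anyone curious what a profile actually encodes, the two big pieces are a radial distortion remap and a vignetting gain ramp. This is not Adobe’s LCP math, just a sketch of the idea with made-up coefficients:

```python
import cv2
import numpy as np

def correct(img, k1=-0.05, k2=0.01, v=0.3):
    """Toy radial distortion + vignetting correction (coefficients invented)."""
    h, w = img.shape[:2]
    y, x = np.indices((h, w), dtype=np.float32)
    cx, cy = w / 2, h / 2
    r2 = ((x - cx) ** 2 + (y - cy) ** 2) / (cx ** 2 + cy ** 2)  # normalized r^2
    scale = 1 + k1 * r2 + k2 * r2 ** 2          # simple radial distortion model
    map_x = cx + (x - cx) * scale
    map_y = cy + (y - cy) * scale
    undistorted = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
    gain = 1 + v * r2                            # brighten corners to offset vignetting
    return np.clip(undistorted * gain[..., None], 0, 255).astype(np.uint8)
```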

I still wasn’t happy with the result so I decided it was time to bite the bullet and create some profiles from scratch.  Adobe has a tool for creating a lens profile.  It involves printing out some grid targets which you then shoot a number of times to cover the whole of the frame.  It then calculates the profile.  I was shooting at both 500mm and 700mm so I needed a few targets.  To make a complete profile it is a good idea to shoot at a variety of focusing distances and with a range of apertures.  The tool comes with many targets.  Some I could print at home but some of the larger ones I got printed at FedEx and mounted on foam core to make them more rigid.  Then it was time to shoot a bunch of very boring shots.

The software is not the most intuitive I have ever worked with but it eventually became clear what I had to do. (Why do some manual writers seem like they have never used the process they are writing about?) I found out how to run the analysis for different charts and distances separately and append the data to the profile as I go. I did need to quit the program periodically because it would run out of memory, which seems like an odd bug these days. After much processing, and some dropped frames as a result of poor shooting on my part (even on the tripod I got some blur occasionally with very slow shutter speeds), it produced a profile. The proof of the pudding is in the eating of course (that is what the actual phrase is for those of you that never get past the pudding part) so I tried the profile out on some recent shots. It works! I was rather delighted. I may shoot a few more samples in good conditions to finish things off but this was a rather happy outcome. Once I have tweaked the profiles sufficiently, I shall upload them to Adobe so anyone can use them.

Shooting RAW on the Phone

The update to iOS 10 brought with it the ability to shoot RAW on the iPhone. For some reason Apple didn’t incorporate this feature in the base camera app but they did make it available to third-party camera app developers. Camera+ is one that I use a bit so I figured I would start shooting RAW via that. Obviously RAW means larger files but, since I download my files to the desktop frequently and tend to clear out the phone, this wasn’t a concern.

The first thing I found out was that other apps could see the shots. I had taken a few shots and wanted to upload to Facebook and it turned out there wasn’t a problem doing so. However, the main anticipated benefit was in post processing back on the desktop. With the SLR shots (is there any point to saying DSLR these days?), it is possible to recover a lot from the highlights and shadows. Would the same be possible with the phone? Sort of. You can get a bit more in these areas than would be the case with the JPEG, where things are quickly lost. However, the sensor data is still not anywhere close to being as adaptable as it is for an SLR. You get more flexibility to pull the sky back but it is still pretty limited.
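A quick way to probe how much headroom a phone DNG really has is to render it in a raw converter and pull the exposure back. A sketch with rawpy and a hypothetical filename; an exp_shift of 0.5 is a one-stop darken in linear terms:

```python
import rawpy
import imageio

with rawpy.imread("IMG_0123.dng") as raw:
    rgb = raw.postprocess(
        use_camera_wb=True,
        exp_shift=0.5,                             # one stop darker, linear scale
        highlight_mode=rawpy.HighlightMode.Blend,  # try to rebuild clipped areas
        no_auto_bright=True,
    )
imageio.imwrite("headroom_test.jpg", rgb)
```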

Is it worth using? Definitely. While it might not be the post-processing experience you are used to with SLR files, it is certainly better than the JPEGs provide. The increase in file size is hardly an issue these days so I will be using it from now on. Camera+ doesn’t have the pano and time-lapse features so easily to hand, so the phone’s base app will still get used, but aside from that it will be my choice. My main gripe now is the random file naming protocol, which is a little difficult to get used to. Small problems, eh?

Enfuse for HDR

I am a little late to discovering the Enfuse plugin for working with HDR images. I started out many years ago using Photomatix. At the time, it was the go-to software for creating HDR images. Then Adobe got a lot better with their HDR tools within Photoshop and I started to use that. More recently still, Adobe built HDR processing into Lightroom and I didn’t need to go to Photoshop at all. That HDR function worked reasonably well so I stuck with it. I sometimes felt that it didn’t do as good a job of using the full range of the exposures but it was okay.

I wasn’t entirely satisfied though so have kept an eye on other options.  Someone mentioned Enfuse to me so I decided to give it a go.  It is a plugin for Lightroom and, in the free download, you can try it out but with a limitation on the output image size of 500 pixels.  Obviously this isn’t useful for anything other than testing but that is the point.

The first thing I tried it on was a shot I made at Half Moon Bay looking up at a P-51 Mustang prop and directly into the sun. This is about as wide a range of exposures as you are likely to get – the perfect thing for an HDR trial. The results in the small-scale file seemed pretty impressive so I decided to buy the package. There is no fixed price: you make a donation via PayPal and get a registration code. I am impressed by the quality of some of the work people put out so I am happy to donate for what they do. With the software activated, I reran the P-51 shots. Below is the version I got from Lightroom’s own HDR, followed by the version from Enfuse.

I did have some issues initially. Lightroom was not reimporting the image after it was created. This turned out to be an issue with the way I named the file in the dialog and a tweak to that seemed to fix things. Strangely, it had been fine in the trial so I have no idea why it became an issue, but it is sorted. I also played with a slightly less extreme case with an F-22 and, as above, the Lightroom version is first and the Enfuse version is second. I was really pleased with the result on this one, with a very natural look to things. So far, I see Enfuse being a useful tool for my HDR going forward.
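Under the hood, Enfuse implements the Mertens/Kautz/Van Reeth exposure fusion algorithm, which blends the well-exposed parts of each frame directly rather than building an HDR radiance map first. OpenCV ships the same idea as MergeMertens, so a rough stand-in looks like this (filenames hypothetical):

```python
import cv2

exposures = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]
fused = cv2.createMergeMertens().process(exposures)   # float output, roughly 0-1
cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```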


How Low Can You Go?

The high ISO capabilities of modern cameras are a constant source of discussion whenever a new camera comes out. It was quite funny to see everyone get so excited about the multi-million ISO range on the Nikon D5 when it was announced, only to find that the highest ranges were nothing more than noise with a bit of an image overlaid on it. Not a big surprise but still funny to see how much everyone was going nuts about it before the reality set in.

Consequently, I was interested to see what the new bodies I bought were really capable of. I have already posted a little about some of the shots I took as the light faded at SFO. I was shooting with a tripod and a gimbal mount to make things easier but I was also working within the native ISO range of the camera. I went with auto ISO and exposure compensation while shooting in aperture priority, wide open, to get what I could. However, I really wanted to see what was possible, so I changed to manual mode, still with exposure compensation and auto ISO. Auto ISO, though, will not use the extended ISO ranges.

I don’t know about the Nikon cameras but Canon cameras tend to have three extended-range ISO settings at the high end. There is the highest ISO setting that the camera labels normally and then there are H1, H2 and H3. They are not named with actual ISO values but you know what they are based on where they sit in the sequence. The manufacturer does not label them as normal ISO settings because it does not stand behind them as a capability. There is a good reason for that. They are just like the highest Nikon settings: useful if you have no option but not very good otherwise.
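The unlabeled settings are just the usual one-stop steps above the top native value. Taking a native ceiling of ISO 51200 purely as an assumed example, the H settings work out as:

```python
# H1/H2/H3 as one-stop steps above the top native ISO.
# ISO 51200 is an assumed ceiling, for illustration only.
native_max = 51200
for i, label in enumerate(("H1", "H2", "H3"), start=1):
    print(f"{label} = ISO {native_max * 2 ** i}")
# H1 = ISO 102400, H2 = ISO 204800, H3 = ISO 409600
```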

The same was true with my older bodies. They had a very high ISO range that was not great but would do in a pinch. At the Albuquerque Balloon Fiesta I shot an Aero Commander that flew over in the pitch black, and I saw things in the shot I couldn’t see with the naked eye. That was with a camera that is ancient by modern standards, so I expected a bit more from the latest generation. Certainly, there is more to be achieved with what we have now. However, post processing becomes a part of the story.

My first experience with these shots was in Lightroom. The shots did not look good at all. However, there was a clue in all of this. The first view in Lightroom is based on the JPEG that is baked into the raw file. It looked okay until it was rendered by Adobe, at which point it looked a lot worse. This piqued my interest. Sure enough, at the extended ISO ranges, the shots looked pretty awful. Lots of purple backgrounds. These were not going to be any good. However, the initial preview had looked good, so was this a case of Lightroom not being able to render the shots well? I figured I should try going to the source.

At various ISAP symposiums, the Canon guys have talked about how their software is the one you should use since only Canon knows how to decode their shots properly. They have the recipe for the secret sauce. Since Digital Photo Professional (DPP), Canon’s own software for decoding raw files, is so terrible to use, I never bother with it. The raw processing in Lightroom (and ACR, since they are the same) is so much easier to use normally and works really well; DPP is just awful in comparison. However, we are now dealing with the extremes of the camera’s capabilities. The embedded previews seemed better, so maybe DPP would be able to do a better job.

You can now be the judge. Here are some pairs of shots. They are the same shot in each case. The first is processed in Adobe Lightroom and the second is processed in DPP. I think it is clear that DPP is better able to work with the raw files when it comes to extreme ISO settings. The shots certainly have a more normal look to them; the Lightroom versions look really messed up by comparison. It doesn’t mean I will be using the extended ISO ranges often, since jumping to DPP for processing is not something I want to do regularly. However, if the need arises, I know that I can push the camera a lot further and use DPP to get something that is okay, if not great. This could be handy at some point.

Camera Profiling

For all of my previous cameras I have created profiles. When I got the new cameras I decided not to bother and to go with the profiles that are built into Camera Raw/Lightroom. This was working okay for a while but there were some shots where I felt like the adjustments were having slightly odd effects. It was almost as if the files had less adjustability than my old Mark IV files. This didn’t seem likely. I figured I would have a go at creating profiles and see whether that made any difference.

The profiles are relatively easy to create. I have a color card that has twelve different color squares. You take a shot of it in RAW mode. Then comes the slightly annoying step: you have to convert it to a DNG file. Not sure why, since this is all Adobe software, they can’t combine the steps, but never mind. Then you open the profiling software, pull up the DNG file, align the four color dots with the corner color squares and let it do its thing. Choose a name and the profile is saved on your computer where the Adobe software can see it.
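The profile math itself is Adobe’s, but the core idea is to fit a transform that maps what the camera recorded for each patch onto the card’s known reference values. A least-squares sketch of that idea, with stand-in data:

```python
import numpy as np

# Stand-in data: 12 patches, linear RGB. In reality, 'measured' would be
# sampled from the shot of the card and 'reference' from the card's spec.
rng = np.random.default_rng(0)
measured = rng.random((12, 3))
reference = rng.random((12, 3))

# Fit a 3x3 color matrix M so that measured @ M ~= reference
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
corrected = measured @ M
```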

It does make a difference.  The thing I found most interesting was that the profiles for the two cameras were quite different.  It shows up most in the blues for my bodies which, given I shoot aircraft a lot, is no small deal.  The shots here are versions of the same images with the default profiles and the new profiles for comparison.  Everything else is the same so the difference is purely profile related.
