Tag Archives: processing

Learning a Better Way to Blend in Photoshop

I occasionally use the Statistics function in Photoshop to blend multiple images in order to remove distractions that I don’t want, like people or vehicles. Up until now, this has been a real pain to do. I would identify the images in Lightroom but would then have to open Photoshop, go into the Statistics function, browse to select the images and let it run everything in one go. This was not a convenient workflow, and the output image then needed to be added to Lightroom manually.

It turns out that there is a better way. This may have been in Photoshop all along and I never knew, or it could be a recent addition. Either way, it is there and I shall now use it for future projects. I have even created a Photoshop action to cover the process and assigned a function key so it will now do the heavy lifting without my intervention. It all starts out in Lightroom. Select all the images that will be used for the blend, then use Edit In > Open as Layers in Photoshop and a new document will open in Photoshop with all the shots as layers.

If everything has been shot on a tripod, things will be properly aligned by default, but I often do these things on the spur of the moment so they are handheld. Consequently, while my efforts to keep pointing in the same direction are not bad, the first task is to select all the layers and run Auto-Align Layers to tidy things up. Next, go to the Layer menu and, under Smart Objects, convert the layers to a Smart Object. This may take a little while.

The next step is to go back into Layer > Smart Objects > Stack Mode. This brings up the same options as you get through the Statistics function. Select Mean and send it on its way and, depending on how many shots were taken and how much clear space there is in enough of them, you end up with a clean result. Usually I find that I haven’t got enough shots of the right type to get everything to disappear, so some ghostly elements may remain, but they are certainly less distracting than the figures in the original shots. I have no idea what the other modes will achieve and the descriptions Adobe provides in their help files are so obscure as to be virtually useless, so I shall have to experiment with them to see what happens. Thankfully, now I have this new method, I can easily undo the last step to try each option, which would not have been possible using the Statistics dialog. Another win!
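
If you are curious what the Mean and Median modes are actually doing under the hood, this is roughly it, sketched here in Python with NumPy rather than anything Adobe uses. The file names are placeholders and I am assuming the frames are already aligned and the same size.

```python
# Rough sketch of the Mean and Median stack modes (not Adobe's code).
# Assumes aligned frames of identical size; file names are made up.
import numpy as np
from PIL import Image

files = ["frame_01.jpg", "frame_02.jpg", "frame_03.jpg"]  # hypothetical inputs
stack = np.stack([np.asarray(Image.open(f), dtype=np.float32) for f in files])

mean_blend = stack.mean(axis=0)          # Mean: average every pixel across the frames
median_blend = np.median(stack, axis=0)  # Median: keep the value most frames agree on,
                                         # so people or cars in only a few frames drop out

Image.fromarray(mean_blend.astype(np.uint8)).save("mean_blend.jpg")
Image.fromarray(median_blend.astype(np.uint8)).save("median_blend.jpg")
```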

Blending to Remove Traffic

During a previous visit to Vancouver, I experimented with blending images of the same scene to remove objects I didn’t want included. When photographing the bridge at Deception Pass, I decided to have another go at this. The bridge was very interesting but I found the traffic on it to be a distraction. Looking at some of the shots afterwards, it wasn’t as bad as I thought at the time but, even so, I decided to try processing the shots.

This was the same approach as before. Load all of the images into Photoshop using the Statistics function and use Median to blend them and hopefully remove the items that I didn’t want to appear. Median keeps the value that most of the frames agree on for each pixel, so a vehicle that only appears in a few of the frames simply drops out. It seemed to work pretty well. The top shot has the output while the one below is one of the input shots, cropped in alongside the final result to show what was removed.

My First Attempt at Focus Stacking

I first read about focus stacking a long time ago and I have been meaning to try it for ages. The premise is to take a series of shots with the focus set at different positions throughout the scene and then to use software to blend the images together to create one image with focus all the way through the shot. This seemed like a simple thing to try but I never got around to having a go. Then I came across a situation that looked like it might be a good example to experiment with.

I was visiting a model show at the Museum of Flight and taking a few photos of some of the more expertly crafted models on display. I was shooting with a longer lens and using a relatively small aperture to try and minimize the shallow depth of field that you get when shooting small objects close up. I decided to shoot a model of a Fairey Gannet and the shallow depth of field triggered something in the deep recesses of my brain about focus stacking. Of course, I had not planned for this, so there was no tripod, just an effort to get focus on different parts of the model without moving the camera too much.

I took the shots and got on with my visit. When I got home, I almost forgot about the stacking experiment but, fortunately, I did remember. I exported the images to Photoshop as layers in a single document. Then, since they were handheld, I ran Auto-Align to get them in place. After that, I selected Auto-Blend. It seemed to realize that they were a blend stack rather than a panorama, which is quite clever, and the software quickly did its thing. Despite not taking many shots and doing it all handheld, the result came out pretty well. The top shot is the finished product while the lower two show the extremes of the focus range from the original shots. If I had managed a shot focused right on the back of the fin, the result might have been a bit better still.
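
For anyone who likes to see the mechanics, the basic idea behind this kind of blend (not necessarily what Adobe does internally) can be sketched in Python with OpenCV: measure the local sharpness of each frame and take every pixel from whichever frame is sharpest there. The file names are made up and the frames are assumed to be aligned already.

```python
# Rough sketch of the "pick the sharpest pixel" approach to focus stacking.
# Illustrative only; file names are placeholders and frames are pre-aligned.
import cv2
import numpy as np

files = ["gannet_01.jpg", "gannet_02.jpg", "gannet_03.jpg"]  # hypothetical
images = [cv2.imread(f) for f in files]

def sharpness(img):
    # Local sharpness estimate: magnitude of the Laplacian, smoothed a little
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=5)
    return cv2.GaussianBlur(np.abs(lap), (31, 31), 0)

maps = np.stack([sharpness(img) for img in images])   # (N, H, W) sharpness maps
best = np.argmax(maps, axis=0)                        # sharpest frame index per pixel

stack = np.stack(images)                              # (N, H, W, 3)
h, w = best.shape
result = stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
cv2.imwrite("focus_stacked.jpg", result)
```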

My Approach to Shooting and Processing on Crappy Weather Days

This is the finished image. It is pretty much what the scene looked like to the naked eye (through the viewfinder) when I took the shot, given how dark the sky was.

A rare arrival was due on a day that was not good from a weather perspective.  It was dull and rainy and so not what you would hope for.  Conditions like this mean I try to exploit some of the features of the camera and the processing options available.  First, how to set up the camera?  With the light being bad and variable, I went to a pretty high ISO level.  I shot in aperture priority mode and added a lot of exposure compensation.

In my experience, the metering is pretty good when shooting against the sky in clear weather but, when there is a lot of cloud, the camera tends to treat the clouds as too bright and it underexposes the subject too much. I use a lot of exposure compensation in this case, with a setting of +2.0 being used on this day. The reason I do this is that, the exposure question aside, there is a lot more information available at the lighter end of the exposure curve. Shooting in RAW gives you options.
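
As a rough illustration of why the lighter end holds more information, assume a 14-bit sensor recording linear data. Each stop down from clipping is left with half as many tonal levels as the one above it, which is roughly what this little Python snippet prints out.

```python
# Back-of-the-envelope look at how a 14-bit linear RAW file spreads its
# tonal levels across the stops (an assumed, idealized sensor).
levels = 2 ** 14                      # 16384 levels in a 14-bit file
for stop in range(1, 7):
    per_stop = levels // (2 ** stop)
    print(f"Stop {stop} below clipping: about {per_stop} levels")
# The brightest stop gets ~8192 levels; six stops down there are only ~256,
# which is why underexposed shadows fall apart so quickly when pushed.
```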

This is how the camera recorded the image. It is the in-camera JPEG that I extracted from the RAW file using Instant JPEG from RAW.

If you were to look at the aircraft at the time, you would see a dark and menacing sky but still plenty of detail on the plane. The camera does not see it that way: exposed normally, the aircraft would come out very dark. When processing, that dark area would give you something to work with, but the variation in the data would be more limited. Shoot overexposed and you get more to work with.

This approach will only work well if you are shooting RAW. If you are using JPEG, too much of the usable data will be discarded during the processing in the camera. To show you what I mean, here are two images, both from the same shot. One is the RAW file as it showed up when imported into Lightroom and the other is the embedded JPEG that you can extract from the RAW file, which is what you see when the file is first imported and before the rendering is undertaken. As you can see, the JPEG is overexposed but the RAW rendering seems even more so.

There is way more data in the RAW file though. Immediately, as I bring the exposure slider back down, the clouds go from being white to quite dark – just as they appeared on the day. Meanwhile, the fuselage of the aircraft has a lot of its data intact and maintains a lot of the brightness that you could see at the time. Very little needs to be done with the blacks and they are almost in the right spot by the time the exposure is good for the clouds. The fuselage might be a bit too dark though. A small tweak of the blacks and a little boost in the shadows to compensate for darkening things too much with the exposure slider, and suddenly the shot is looking a lot more like it did as I watched it develop.
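
The same sort of pull-down can be done outside Lightroom too. As a sketch only, here is how you might render an overexposed RAW file a stop darker in Python using the rawpy library (a LibRaw wrapper); the file name and values are placeholders, and Lightroom’s own processing will obviously differ.

```python
# Sketch of pulling an overexposed RAW file back down, using rawpy.
# File name and settings are examples only.
import rawpy
import imageio

with rawpy.imread("overexposed_shot.CR2") as raw:
    rgb = raw.postprocess(
        use_camera_wb=True,           # keep the white balance set in camera
        no_auto_bright=True,          # stop LibRaw re-brightening the result
        output_bps=16,                # 16-bit output keeps the recovered tonal range
        exp_shift=0.5,                # halve the linear exposure, i.e. one stop down
        exp_preserve_highlights=1.0,  # favor keeping highlight detail
    )
imageio.imwrite("pulled_down.tiff", rgb)
```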

My RAW processing baseline always results in a slightly more overexposed look than the embedded JPEG shows. When you first open the image, the embedded JPEG seen in the previous shot shows up initially and then Lightroom renders the RAW file. This was the initial RAW rendering prior to any adjustments.

One advantage of shooting on such a crummy day is that the sky is a giant softbox – in this case a very soft one!  The result is that the light is a lot more even than on a sunny day.  The darker look can actually make the colors look a bit more intense than if they were losing out to the whites when the sun is right on them.  While there was only one plane I was specifically there for, playing around with these other shots and working on the technique was a nice extra benefit.

Sensor De-mosaicing and Southwest Colors

I have been pondering how the way digital images are captured is affected by what is being photographed. As part of my workflow, I render 1:1 previews of the images and then quickly weed out the ones that are not sharp. This requires being able to see some detail in the shot that shows whether the sharpness is there. I have found that, if a Southwest Airlines 737 is in the new color scheme, something odd happens.

Digital image sensors actually capture only one of three colors at each pixel. Each pixel is sensitive to a certain color – either red, green or blue – courtesy of a filter. The colors are arranged on the sensor in a pattern called a Bayer pattern. The camera then uses what the surrounding pixels see to calculate what the actual color should be at each location. This process is known as de-mosaicing. It can be a simple averaging but more complex calculations have been developed to avoid strange artifacts.
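
To make that a bit more concrete, here is a toy version of the simplest possible de-mosaic in Python, assuming an RGGB layout and plain bilinear averaging. Real raw converters use far cleverer interpolation, which is exactly where odd artifacts can creep in.

```python
# Toy de-mosaic of an RGGB Bayer mosaic using bilinear averaging.
# Purely illustrative; not what any camera or Adobe actually ships.
import numpy as np
from scipy.ndimage import convolve

def simple_demosaic(bayer):
    """bayer: 2-D array laid out as R G / G B repeating blocks."""
    h, w = bayer.shape
    r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
    r[0::2, 0::2] = bayer[0::2, 0::2]   # red photosites
    g[0::2, 1::2] = bayer[0::2, 1::2]   # green photosites (two per 2x2 block)
    g[1::2, 0::2] = bayer[1::2, 0::2]
    b[1::2, 1::2] = bayer[1::2, 1::2]   # blue photosites

    # Bilinear kernels: fill each missing value from its known neighbors
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])
```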

When I photograph the new Southwest scheme, something strange occurs around the N number on the rear fuselage. It looks very blotchy, even when every other part of the airframe looks sharp and clear. I am wondering whether the color of the airframe and the color of the registration digits are in some way confusing the de-mosaicing algorithm, resulting in some odd elements in the processed image that weren’t there in real life. If any of you have photographed this color scheme, check whether you see something similar and, either way, let me know what camera you were shooting with so we can work out whether it is manufacturer specific.

Sacramento Roundhouse

One end of the railroad museum in Sacramento is a roundhouse. It is still accessible from the line outside and I was there for a modern locomotive that was being unveiled. Access comes via a turntable which sits right next to the path along the river. I figured I would put together a panorama of the scene. However, I only had my phone (albeit one able to shoot RAW). I had never tried shooting a pano sequence with it before, having only used its built-in pano function.

I wasn’t controlling the exposure (although there is a manual function in the app I use) but I had noticed that the Lightroom pano function seemed quite adept at dealing with small exposure variations. I took the sequence and there was not a big difference across the frames. When I got home, I added them to Lightroom and had a go at the stitching function. It worked better than I had expected. There were some small distortions but it actually was rather good. I had not been happy with the reduced resolution of the phone’s own pano function, so this provides a better option to use in the future.
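
Out of curiosity, the same sort of stitch can be done with open-source tools as well. This is a rough sketch using OpenCV’s stitcher in Python; the file names are placeholders, and Lightroom’s merge, which works on the raw files, handles the blending differently.

```python
# Sketch of stitching a handheld pano sequence with OpenCV (illustrative only).
import cv2

files = ["roundhouse_01.jpg", "roundhouse_02.jpg", "roundhouse_03.jpg"]  # hypothetical
images = [cv2.imread(f) for f in files]

stitcher = cv2.Stitcher_create()       # default mode builds a panorama
status, pano = stitcher.stitch(images)
if status == 0:                        # 0 means the stitch succeeded
    cv2.imwrite("roundhouse_pano.jpg", pano)
else:
    print("Stitching failed with status", status)
```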

Blue Angels at Oceana (And High ISO)

I have only been to the Oceana show once.  I headed down there with my friends Ben and Simon.  We weren’t terribly lucky with the weather.  There was flying during the show but things were overcast and deteriorated as the show went on.  The finale of the show was, naturally for a big Navy base, the Blue Angels.  I was shooting with a 1D Mk IIN in those days and that was a camera that was not happy at high ISO settings.

The problem was, the light was not good and the ISO needed to be cranked up a bit. Amusingly, if you were shooting today, the ISO levels would not be anything that caused concern. Current cameras can shoot cleanly at ISO levels that would have been unthinkable back then. However, I did learn something very important with this shoot. The shot above is one that I got as one of the solo jets got airborne. I used it as a test for processing.

I processed two versions of the image, one with a lot of noise reduction dialed in and one with everything zeroed out. I then combined them in one Photoshop image and used a layer mask to show one version in one half of the image and the other version in the second half. When I viewed the final image on the screen, the noise in one half was awfully apparent. It was a clear problem. However, I then printed the image and things were very different. If you looked closely, you could see a slight difference, but from normal viewing distances there was no obvious difference between the two halves.

My takeaway from this is that viewing images on screens has really affected our approach to images.  We get very fixated on the finest detail while the image as a whole is something we forget.  We print less and less these days and the screen is a harsh tool for viewing.

Creating Lens Profiles for Adobe Software

UPDATE: It turns out the upload process for the profile sends it to an address that doesn’t work. While I try to fix this, if you want the profiles to use, you can download them by clicking here.

Within Adobe processing software, there is lens correction functionality built into the Lightroom Develop module (or Adobe Camera Raw in Photoshop) that compensates for distortion and vignetting in the lens the image was taken with. Adobe has created a large number of lens profiles but never created one for the original version of the Canon 500mm. Adobe also has an online tool for sharing profiles but this does not include one for this lens either. The 600mm had a profile and it was supposedly close, so I had been using that for a while. Recently, though, I was shooting with the 1.4x teleconverter fitted and this introduced some new effects which required manual tweaking to offset.
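
For anyone wondering what a profile actually encodes, it boils down to measured correction curves for the lens. A very simplified sketch of the sort of radial models involved is below, in Python; the function names and coefficients are purely illustrative, not Adobe’s actual profile format.

```python
# Simplified sketch of the kind of radial corrections a lens profile drives.
# r is a pixel's distance from the image center (normalized 0..1); the
# coefficients would come from measurements of the specific lens.

def corrected_radius(r, k1, k2):
    # Geometric distortion: remap each pixel along its radius using a
    # polynomial in r (a simple Brown-style radial model).
    return r * (1 + k1 * r**2 + k2 * r**4)

def vignetting_gain(r, a1, a2):
    # Vignetting: brighten pixels progressively towards the corners.
    return 1 + a1 * r**2 + a2 * r**4
```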

I still wasn’t happy with the result so I decided it was time to bite the bullet and create some profiles from scratch.  Adobe has a tool for creating a lens profile.  It involves printing out some grid targets which you then shoot a number of times to cover the whole of the frame.  It then calculates the profile.  I was shooting at both 500mm and 700mm so I needed a few targets.  To make a complete profile it is a good idea to shoot at a variety of focusing distances and with a range of apertures.  The tool comes with many targets.  Some I could print at home but some of the larger ones I got printed at FedEx and mounted on foam core to make them more rigid.  Then it was time to shoot a bunch of very boring shots.

The software is not the most intuitive I have ever worked with but it eventually became clear what I had to do. (Why do some manual writers seem like they have never used the process they are writing about?) I found out how to run the analysis for different charts and distances separately and append the data to the profile as I go. I did need to quit the program periodically because it would run out of memory, which seems like an odd bug these days. After much processing, and some dropped frames as a result of poor shooting on my part (even on the tripod I got some occasional blur at very slow shutter speeds), it got a profile out. The proof of the pudding is in the eating of course (that is what the actual phrase is for those of you that never get past the pudding part) so I tried the profile out on some recent shots. It works! I was rather delighted. I may shoot a few more samples in good conditions to finish things off but this was a rather happy outcome. Once I have tweaked the profiles sufficiently, I shall upload them to Adobe so anyone can use them.

Shooting RAW on the Phone

The update to iOS 10 brought with it the ability to shoot RAW on the iPhone. For some reason Apple didn’t incorporate this feature in the built-in camera app but they did make it available to third-party camera app developers. Camera+ is one that I use a bit so I figured I would start shooting RAW via that. Obviously RAW means larger files but, since I download my files to the desktop frequently and tend to clear out the phone, this wasn’t a concern.

The first thing I found out was that other apps could see the shots. I had taken a few shots and wanted to upload them to Facebook and it turned out there wasn’t a problem doing so. However, the main benefit was anticipated to be in post processing back on the desktop. With the SLR shots (is there any point to saying DSLR these days?), it is possible to recover a lot from the highlights and shadows. Would the same be possible with the phone? Sort of. You can get a bit more in these areas than would be the case with the JPEG, where things are quickly lost. However, the sensor data is still not anywhere close to being as adaptable as it is for an SLR. You get more flexibility to pull the sky back but it is still pretty limited.

Is it worth using? Definitely. While it might not be the post processing experience you will be used to with SLR files, it is certainly better than what the JPEGs provide. The increase in file size is hardly an issue these days so I will be using it from now on. Camera+ doesn’t have the pano and time-lapse functions so easily to hand, so the phone’s base app will still get used but, aside from that, it will be my choice. My main gripe now is that it has a random file naming protocol that is a little difficult to get used to. Small problems, eh?

Enfuse for HDR

I am a little late to discovering the Enfuse plugin for working with HDR images. I started out many years ago using Photomatix. At the time, it was the go-to software for creating HDR images. Then Adobe got a lot better with their HDR tools within Photoshop and I started to use those. More recently, Adobe built HDR processing into Lightroom and I didn’t need to go to Photoshop at all. The HDR merge worked reasonably well so I stuck with it. I sometimes felt that it didn’t do as good a job of using the full range of the exposures but it was okay.

I wasn’t entirely satisfied though, so I have kept an eye on other options. Someone mentioned Enfuse to me so I decided to give it a go. It is a plugin for Lightroom and, with the free download, you can try it out but with the output image size limited to 500 pixels. Obviously this isn’t useful for anything other than testing, but that is the point.
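
Under the hood, Enfuse is built around exposure fusion (the Mertens, Kautz and Van Reeth approach), which blends the bracketed frames directly rather than building a true HDR file and tone mapping it. OpenCV includes an implementation of the same idea, so a rough Python sketch of what is going on looks like this; the file names are placeholders for a bracketed set.

```python
# Sketch of exposure fusion, the technique Enfuse is based on, using OpenCV.
import cv2
import numpy as np

files = ["bracket_under.jpg", "bracket_mid.jpg", "bracket_over.jpg"]  # hypothetical
images = [cv2.imread(f) for f in files]

merger = cv2.createMergeMertens()   # weights pixels by contrast, saturation
fused = merger.process(images)      # and how well exposed they are, then blends
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```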

The first thing I tried it on was a shot I made at Half Moon Bay looking up at a P-51 Mustang prop and directly into the sun.  This is certainly as much of a range of exposures as you are likely to get.  The perfect thing for an HDR trial.  The results in the small scale file seemed pretty impressive so I decided to buy the package.  There is no fixed price.  You make a donation via PayPal and get a registration code.  I am impressed by the quality of some of the work people put out so I am happy to donate for what they do.  With the software activated, I reran the P-51 shots.  Below is the version I got from Lightroom’s own HDR and following it the version from Enfuse.

I did have some issues initially. Lightroom was not reimporting the image after it was created. This turned out to be an issue with the way I named the file in the dialog and a tweak to that seemed to fix things. Strangely, it had been fine on the trial so I have no idea why it became an issue but it is done. I also played with a slightly less extreme case with an F-22 and, as above, the Lightroom version is first and the Enfuse version is second. I was really pleased with the result on this one with a very natural look to things. So far, I see Enfuse being a useful tool for my HDR going forward.
