Category Archives: technique

New Lightroom Feature I Don’t Have Yet

I have seen announcements from Adobe about a feature coming soon to Lightroom that seems particularly appealing to me.  When dealing with dull and overcast conditions, I shoot quite heavily overexposed.  This gives me a lot more shadow detail to work with while still allowing me to pull back detail in the sky.  On a dull day, a couple of stops of overexposure can work quite effectively.  Lightroom/Camera Raw gives you only limited latitude with the exposure and shadows sliders, so it is not ideal for this.  However, the new version of Lightroom is going to use AI to analyze the image and automatically select a subject or the sky.  What I didn’t realize was that this was already available in Photoshop.  I decided to have a play with it there to see how it works and get a feel for the way it might work in Lightroom soon.
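
The logic of overexposing in flat light can be sketched numerically.  This is a deliberately simplified model of my own (made-up signal and read-noise values, ignoring shot noise and assuming nothing clips): expose two stops brighter, then pull back in post, and the camera's fixed read noise comes down with it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a dim shadow region with signal level 10
# (arbitrary units) and camera read noise of 3 units per exposure.
signal, read_noise, n = 10.0, 3.0, 100_000

# Normal exposure: capture the signal plus read noise.
normal = signal + rng.normal(0, read_noise, n)

# Overexposed capture: 2 stops brighter (4x the light), then pulled
# back 2 stops in post by dividing by 4 -- the read noise is divided too.
pulled_back = (4 * signal + rng.normal(0, read_noise, n)) / 4

# The pulled-back version has about a quarter of the shadow noise.
print(normal.std(), pulled_back.std())
```

Real sensors add photon shot noise and will clip highlights if you push too far, which is why this only works when there is headroom to spare, as on a dull day.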

I opened the image in Photoshop as a Smart Object.  I then created a New Smart Object Via Copy to duplicate the object.  In Select, I picked Subject and it did a pretty good job of selecting the airframe.  Some edges were a little vague but overall pretty good.  I used that selection to make a mask on the upper layer.  Then, I was able to open each Smart Object in Camera Raw and edit each one to optimize either the subject or the sky.  Some tweaking was occasionally required to ensure that it didn’t look like a bad superimposed job but it worked quite well.  Lightroom will have this function as a filter so I should be able to do something similar in there but we shall see when it gets released.  If it works, it could be a great addition to the editing toolset.  I wish I had known about it in Photoshop before to be honest!
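
The two-Smart-Object approach boils down to blending two differently edited versions of the same frame through the subject mask.  A minimal sketch in numpy (the function name and toy values are mine, not anything from Adobe):

```python
import numpy as np

def blend_edits(subject_edit, sky_edit, mask):
    """Composite two edits of the same frame: where mask is 1 the
    subject-optimized version shows, where 0 the sky-optimized one."""
    mask = mask[..., None].astype(float)  # broadcast over RGB channels
    return mask * subject_edit + (1.0 - mask) * sky_edit

# Toy 2x2 frame: subject edit brightened, sky edit darkened.
img = np.full((2, 2, 3), 0.5)
subject_edit = img * 1.4   # e.g. shadows lifted for the airframe
sky_edit = img * 0.6       # e.g. exposure pulled down for the sky
mask = np.array([[1, 0], [0, 0]])  # top-left pixel is "subject"

out = blend_edits(subject_edit, sky_edit, mask)
print(out[0, 0, 0], out[0, 1, 0])  # 0.7 for subject, 0.3 for sky
```

The tweaking mentioned above corresponds to softening the mask edges so the two edits transition smoothly rather than showing a hard cut-out.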

UPDATE: The Lightroom update is now out and I have played with it a bit. I think it is even better than the Photoshop implementation so I shall put together a more detailed post on how it is working out for me.

DxO PureRAW Testing

Whenever you suddenly see a bunch of YouTube videos on a similar topic, you wonder whether a company has been sending out copies of its product to people to get them talking about it.  I think this must be the case with DxO since I have come across a lot of videos about their new raw converter, PureRAW.  Having watched a couple of the videos – the marketing clearly works – I was curious about the capabilities of the product.  Since they provide a 30-day free trial, I decided to give it a go.

One of the topics which seems to get people really worked up – at least those more focused on the products than on the photos they take with them – is raw conversion.  You can shoot JPEGs in camera but, if you shoot raw, you tend to have a lot more flexibility in post processing.  (For those not into this stuff – and I am amazed you are still reading if that is the case – a raw file is the data that comes off the sensor with very little processing applied.)  Software developers come up with their own ways of converting this data into an image.  Camera manufacturers provide their own raw converters but don’t share the detailed workings, so the software companies have to create their own.
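
For anyone curious what a raw converter actually has to do, here is a deliberately crude sketch of the core step, demosaicing, assuming an RGGB Bayer layout.  Real converters (Adobe's, DxO's, the camera makers') use far more sophisticated interpolation than this:

```python
import numpy as np

def simple_demosaic(bayer):
    """Very crude demosaic of an RGGB Bayer mosaic: collapse each
    2x2 quad (R, G / G, B) into one RGB pixel at half resolution.
    This just illustrates the reconstruction every raw converter
    must perform from the sensor's single-color-per-pixel data."""
    r = bayer[0::2, 0::2]
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0  # two greens per quad
    b = bayer[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

# A uniform scene recorded through a 4x4 RGGB mosaic.
bayer = np.tile(np.array([[0.8, 0.5], [0.5, 0.2]]), (2, 2))
rgb = simple_demosaic(bayer)
print(rgb.shape)  # (2, 2, 3)
```

The quality differences people argue about come from how cleverly each converter interpolates the missing color values, handles noise and applies lens corrections, which is exactly where a tool like PureRAW claims an edge.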

The most widespread software provider is Adobe with their Camera Raw converter built into Photoshop and Lightroom.  There are others with their own software and you can come across some quite heated discussions online about which is the best.  Hyperbole abounds in these debates, with anyone weighing in almost always dismissing Camera Raw as terrible.  It’s clearly not terrible but it might have its limitations.

PureRAW is a converter which doesn’t really give you much control.  Instead, it takes the raw file, does its magic and then creates a new DNG raw file which you can then import directly into Lightroom (if you choose – which I do) to continue editing in much the same way you would have previously.  The reviews seemed to suggest that for normal shots at normal ISO settings, there was not much in it.  However, for high ISO images, they showed significant differences with reduced noise, sharper images and clearer detail.  Some reviewers thought the output might even be a bit oversharpened.

I figured I would run my own experiments with some really high ISO images.  I have some shots at ridiculously high ISO settings that I took at night or in poorly lit environments.  These seemed like a good place to start.  The workflow is not ideal – this would not be something I do for all images, only for the ones that seem to need it – because I have to select the shot in Windows Explorer (getting there by right clicking on the image in Lightroom) and then drag it into PureRAW.  I can drag a whole bunch of shots over before having to do anything to them.

The program will download profiles for the camera and lens combinations if it doesn’t already have them and you have to agree to this.  Not sure why it doesn’t do it automatically to be honest but I guess there is a reason.  When you have all of the shots of interest selected, you click Process and off it goes.  It isn’t terribly fast but I wasn’t dealing with a huge number of shots.  Interestingly, I took a look at Task Manager to see how much of the machine’s resources it was using and the processor was barely ticking over, so it wasn’t stressing the machine at all.  At a later stage, for reasons I shall explain in a while, I did deactivate the use of the graphics card and things got considerably slower.

When the processing is finished, you have the option to export them to Lightroom.  It saves them in a subfolder of the original folder and they all import together.  Since I have Lightroom sort by capture time, the new files arrive alongside the originals which makes comparing them pretty simple.  For the ISO 204,800 shot (an extended-range ISO for that camera), things were slightly better but still really noisy.  For the ISO 51,200 shots, the results actually did appear to be pretty impressive.  I have a normal profile for the camera that I use for the raw conversion and a preset for high ISO conversions; the comparison is not dramatic but the PureRAW output is definitely a sharper, more detailed and slightly cleaner result.

I have put pairs of shots in the post with crops of each image to give a comparison of the output so you can judge for yourself.  Will I buy the software?  I don’t know.  It is currently $90.  That is quite a bit for software that does one thing only.  The interface with my workflow is a bit clunky and, from what I have seen so far, it has benefit in a relatively limited set of circumstances.

Now for some further feedback as my experimentation has progressed.  I did try the tool out on some more normal shots.  There are some minor differences from a conversion of the raw within Lightroom but they don’t seem to be significant enough to justify the investment.  I played with some shots of very contrasty scenes and the output was slightly less noisy but, again, not that big a deal.  It also felt oversharpened.

I have had some problems with the program.  After a while, I got conversions where the new DNG file was just black.  This happened on a few occasions.  I found switching to CPU only solved the issue but only after I deleted the DNGs that had been created.  Interestingly, once I went back to Auto mode, it continued to work.  A weird bug and not one unique to me apparently.  I have also had erratic results when it exports to Lightroom, with it failing to do so on a number of occasions.  This is really laborious to deal with and, combined with the fact that dragging from Lightroom to PureRAW only works on a Mac and not on Windows, the lack of integration is really enough to put me off.

For now, I will let the trial expire.  It is a tool that is capable of some interesting improvements in more extreme situations but the integration is poor and the benefits are limited for me so, with that in mind, it just isn’t worth the expenditure.  If it made more of a difference to normal shots, I might consider it but it currently doesn’t offer enough to justify the cost or the process slowdown.

Polarizing the Overfliers

I was in a location where a couple of the departures from SEA were overflying me.  I happened to have the camera to hand (of course I did) and I had the polarizer on there at the time.  I had an Alaska Airlines 737 (what a shock from SEA) and a Hawaiian Airlines A330.  I grabbed a few shots.  The thing I like about the polarizer is cutting down on the glare from the white fuselages but they were still pretty bright.  The rest of the sky was darkened considerably and, when editing to address the white fuselages, got even darker.  I quite like the deep and moody look it gives the shots with very little editing involved.  Both jets pulled some vapor as they came through the same area so clearly there was extra moisture in that one spot.  Maybe it was a thermal?

Night Skies on San Juans

One great feature of traveling to more remote areas away from the cities is the clear skies you can get at night.  The ability to see plenty of stars once the sun has gone is great.  With summer approaching, the sun takes quite a while to go down so I had to wait until quite late to get a shot that I wanted.  I could have waited even later but I wasn’t that committed to the shot.  I wasn’t using a fast lens so, even with a higher ISO, I was still using a 30 second exposure.  Even at 16mm, this still shows up some motion in the stars.  Ideally, I would have had a fast wide lens but I didn’t bother renting one for the trip so this will have to do.
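
The star motion is consistent with the old "500 rule" of thumb, which estimates the longest shutter speed before stars visibly trail.  A quick check (the rule is only approximate, and stricter variants use 300, or the NPF formula, which is why trails still show at the limit):

```python
# "500 rule" of thumb: max shutter time in seconds before stars
# visibly trail is roughly 500 / (focal length x crop factor).
def max_star_exposure(focal_mm, crop=1.0, constant=500):
    return constant / (focal_mm * crop)

# At 16mm on full frame, a 30 s exposure sits right at the limit,
# which matches the slight trailing described above.
print(round(max_star_exposure(16), 1))  # 31.2
```

A faster lens helps precisely because it lets you shorten the exposure (or drop the ISO) while gathering the same light, keeping the stars as points.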

Super Resolution

The most recent update for Adobe Photoshop includes a function called Super Resolution.  Many of the third-party plugins and standalone image processing tools come with ways to increase the resolution of images.  Photoshop used to have a basic way to increase resolution but it wasn’t that clever and could introduce odd artifacts.  I had been advised to use it in small increments rather than one big increase to reduce the problems but I hardly ever used it.

The new addition to Photoshop is apparently based on machine learning.  If the PR is to be believed, they trained on pairs of high-res images and low-res versions of the same image, so the model came to recognize what might be present in the small shot from what it knew was in the large one.  I don’t know what the other packages aim to achieve but this new tool in Photoshop has been doubling the resolution of the shots I have played with.  You end up with a file with four times the pixel count as a result of doubling both dimensions.
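
The size arithmetic is easy to verify.  A naive nearest-neighbour doubling (nothing like the ML model, which predicts plausible detail rather than repeating pixels, but identical in terms of output dimensions):

```python
import numpy as np

def upscale_2x_nearest(img):
    """Naive 2x nearest-neighbour upscale: each pixel becomes a 2x2
    block. Super Resolution's model invents plausible detail instead,
    but the arithmetic is the same: doubling both dimensions
    quadruples the pixel count."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

img = np.zeros((100, 150, 3))     # a small stand-in image
big = upscale_2x_nearest(img)
print(big.shape, big.size / img.size)  # (200, 300, 3) 4.0
```

That factor of four applies to storage and processing time too, which is worth remembering before running it across a whole catalog.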

I have tried it out on a couple of different shots where the resolution was okay but not terribly large and where a higher res shot might prove useful.  So far the tool is available through Camera Raw in Photoshop – not Lightroom.  You need to update Lightroom in order to import the DNG files it produces.  There is a suggestion that Lightroom will get this capability in time which would be more user friendly from my perspective.

My computer is not cutting edge so it takes a little while to process the images.  It forecast five minutes but completed the task way faster than that.  In the examples here, I attach a 200% version of the original shot and a 100% version of the new file.  There seems to be a definite benefit to the output file.  I wouldn’t describe this as earth shattering but it is useful if the original file is sharp enough and I might have a need for this for a few items over time.

HDR Processing on a Slide

I decided to try a little experiment with my slide scanning.  Having scanned a bunch of slides and negatives using a DSLR and macro lens setup, I had come across a few slides where the image just didn’t seem to work out very well.  A big part of this is that the original slides were not very well exposed so I was starting from a less than ideal place.  However, when editing the raw file, I found I wasn’t able to get a balance of exposures that I liked, despite slides supposedly having a very narrow dynamic range.

Since I could see some detail in the original slide, I figured an HDR approach might be of use.  I took three shots of the slide with differing exposure – an inconvenient thing to do when tethered since the AEB function didn’t seem to work on the 40D in that mode – and then ran the HDR function in Lightroom on the three exposures.  Despite the borders possibly confusing the algorithm, it seemed to do a pretty reasonable job of getting more of the image in a usable exposure range.  This is not a great image and would not normally be making it to the blog but, as an example of getting something more out of a problem shot, I thought it might be of interest to someone.
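
The merge step can be sketched as a simple exposure fusion: weight each bracket per pixel by how well exposed it is, then average.  This is my own toy weighting, not Lightroom's actual HDR algorithm:

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Minimal exposure-fusion sketch (not Lightroom's algorithm):
    weight each pixel by how close it is to mid-grey (0.5), so each
    bracket contributes where it is best exposed, then normalize."""
    frames = np.asarray(frames, dtype=float)
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * frames).sum(axis=0)

# Three brackets of a 2-pixel "slide": one pixel nearly crushed in
# the dark frame, the other blown in the bright frame.
dark   = np.array([0.05, 0.45])
mid    = np.array([0.20, 0.90])
bright = np.array([0.55, 1.00])
fused = fuse_exposures([dark, mid, bright])
print(fused)  # both pixels land in a usable mid range
```

Each pixel ends up dominated by whichever bracket exposed it best, which is why the three captures of the slide recovered detail that no single exposure held.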

Collagewall Installation

Nancy and I had been discussing what pictures to add to the walls in the house.  We were trying to find something that was a nice layout and also could include images from a variety of places.  We settled on the Collagewall from MPix.  I have used MPix for a lot of photo printing requirements over the years so was happy to give this product a go.

They have a variety of configurations that you can choose from.  They have varying dimensions and layouts and you can pick your images to fit different aspect ratios.  The one we went with was 4.5’ across to fit a large wall space and it included some large and small square-format images along with a couple of panoramic shots and one thin vertical image.  I did all of the selections and formatting in Lightroom and then just dragged and dropped them into the configuration tool.  It was very straightforward.

The whole thing was printed and shipped quickly and would have been with us shortly thereafter had it not been for a winter storm that meant the package got to spend a week in Salt Lake City.  However, it finally arrived and we could install it.  There is a paper template provided to assist in putting it on the wall.  You tape that in place, checking for location and level, before getting to work.  A series of pins need to be inserted into the surface of the wall.  Using the template, you can make an initial pin hole with one of the pins without pushing it all the way in.  Then, when all locations have an initial mark, the template can be removed and saved for any future installation.

Then add the full set of pins by pushing them all the way into the initial holes previously made.  This results in a grid of pins covering the full area of the finished work.  Slots on the back of the prints then slide over the heads of the pins.  For some of the small prints and the panos, adhesive foam pads are added to provide some stability.  The larger prints are stabilized sufficiently by the pins.  Then you slot everything into place.

From start to finish, it was probably little more than half an hour to put it up.  A significant portion of that was making sure the template was exactly where we wanted it and properly leveled.  Nancy pushed out each print while I was inserting the pins.  Finishing it off was very simple.  The nature of the installation means changing a print out for a replacement would be very easy, and the prints include a folding element that can be deployed on the back to make each print stand on its own if required.  I’m really happy with the way it has come out and might do a smaller installation for another location in the house.  In truth, the longest part of this is choosing the right shots to include.

High ISO Shooting and Processing Technique

I watched a video on YouTube about a way to process shots taken in low light with high ISOs to improve the noise performance.  I wasn’t particularly interested in the approach until I was down on the shore as the sun was going down and I was using a long lens.  I figured this might be a good time to try it out.  The approach is to shoot a lot of shots.  You can’t have anything moving in the shots for this to work but, if it is a static scene, the approach can be used.

Shoot as many shots as you can.  Then import them into Photoshop as layers.  Use the align function to make sure that they are all perfectly aligned and then use the statistics function to do a mean calculation across the images.  You can do this a couple of ways in Photoshop: you can make a Smart Object and set its stack mode, or you can process through Statistics.  The averaging takes a lot of the noise out of the shot.  If you have lots of images, you can make it effectively disappear.  I wasn’t prepared to take that many shots but I tried it with a reasonable number of images.  The whole image isn’t really of interest.  Instead, I include one of the images cropped in and the processed image similarly cropped to allow you to compare.
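
The averaging step itself is simple to demonstrate.  With independent noise in each frame, the mean of N aligned frames keeps the scene but shrinks the random noise by roughly the square root of N, so 16 frames cut it about fourfold:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 16 noisy frames of the same static scene: a clean image
# plus independent sensor noise in every frame (values are made up).
clean = np.full((50, 50), 0.4)
n_frames = 16
frames = clean + rng.normal(0, 0.1, (n_frames, 50, 50))

# Averaging the aligned frames keeps the signal but shrinks the
# random noise by roughly sqrt(N) -- here about 4x.
stacked = frames.mean(axis=0)
print(frames[0].std(), stacked.std())
```

This is also why the technique needs a static scene: anything that moves between frames gets averaged into a smear rather than sharpened.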

Diffraction Problems with Window Screens

Occasionally I will get aircraft heading into Boeing Field coming right by the house.  Late Friday afternoon, two Boeing test jets were coming my way.  One was the first 777X and the other was the first 737 MAX 7.  The usual route brings them just slightly north of the house so I was ready.  However, the MAX was heading just slightly south of the normal track and looked like it might go the other side of the house.  At the last minute, I realized it would and ran through to the other side.

I got the window open but didn’t have time to remove the screen.  I thought it would take out some light but figured the large aperture of a big lens would just blur out the screen mesh since it was so close.  Through the viewfinder, things looked pretty good.  However, when I downloaded the shots, I realized they were totally awful.  The screen had caused shadowing in the images.  The center image was there but I could see shadow versions above and below it.  Then I got to one with a beacon flashing and that showed exactly how the pattern of light was scattered.  Based on what I see, I assume this is a diffraction effect.  It is a useless shot but it is very interesting which is why I am sharing it.
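
A back-of-the-envelope check supports the diffraction explanation.  Treating the screen as a coarse grating (the 1.5 mm wire pitch below is my assumption, not a measurement, as are the lens and sensor figures), the grating equation predicts ghost images offset by tens of pixels at long focal lengths:

```python
# Rough estimate: treat the window screen as a diffraction grating.
# The grating equation d*sin(theta) = m*lambda gives the angle of
# each diffracted order; for small angles, theta ~= lambda / d.
pitch = 1.5e-3        # metres between wires (assumed, not measured)
wavelength = 550e-9   # metres, green light
theta = wavelength / pitch  # first-order angle in radians

# On a full-frame sensor (36 mm wide) behind a 500 mm lens, convert
# that angle to a pixel offset for a 6000-pixel-wide image.
focal = 0.5                      # metres (assumed long lens)
offset_m = theta * focal         # displacement at the sensor
offset_px = offset_m / 0.036 * 6000
print(round(offset_px, 1))       # a ghost tens of pixels away
```

An offset of that size is easily visible as distinct shadow copies of the aircraft, which matches what the beacon flash revealed.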

Is HDR Necessary Anymore?

I was taking some shots for work recently where the sky had some nice cloud detail and the foreground was in a lot of shade.  Since the pictures were needed for a project, I was covering my bases and shot some brackets to allow me to do some processing in HDR later.  Some people hate HDR but I have always looked to use it to get a shot that better reflects the human eye’s ability to deal with extremes of contrast.  With a wide range of light levels in a shot, HDR can give you a more usable image.

However, when I was processing the shots, I was struck by how I could take the middle exposure alone and, with some helpful adjustment of exposure, shadows and highlights, get much the same sort of result as the HDR image provided.  The raw files seem to have enough latitude for processing that going to the bother of taking and processing the HDR image hardly seemed worth it.  There are still situations where the range of exposure is so wide – outdoor sunlight and shady interiors – that it is probably necessary to bracket and process later.  However, for a lot of the situations I used to use HDR for, there seems little point.  How many of you still shoot HDR?
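
The single-file approach can be sketched as a simple tone adjustment.  This toy gamma-style shadow lift (my illustration, far cruder than Camera Raw's shadows slider) shows the idea: dark values come up a lot while highlights barely move, which is enough when the raw file has latitude to spare:

```python
import numpy as np

def lift_shadows(img, amount=2.0):
    """Toy shadows recovery: raise dark values with a gamma-style
    curve while leaving highlights mostly alone. Real raw converters
    are smarter, but the point stands: a single raw frame often has
    enough latitude to pull shadows up without bracketing."""
    return img ** (1.0 / amount)

# A frame with deep shade (0.04) and a bright sky (0.81).
frame = np.array([0.04, 0.81])
print(lift_shadows(frame))  # shade lifted to 0.2, sky only to 0.9
```

The limit is noise: shadows on a bracketed frame were properly exposed, while lifted shadows amplify whatever noise the sensor recorded, which is why extreme contrast still favors bracketing.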