Occasionally I will get aircraft heading in to Boeing Field come right by the house. Late Friday afternoon, two Boeing test jets were coming my way. One was the first 777X and the other was the first 737 Max 7. The usual route brings them just slightly north of the house so I was ready. However, the Max was heading just slightly south of the normal track and looked like it might go past the other side of the house. At the last minute, I realized it would and ran through to the other side.
I got the window open but didn’t have time to remove the screen. I thought it would cut out some light but figured the large aperture of a big lens would just blur out the screen mesh since it was so close. Through the viewfinder, things looked pretty good. However, when I downloaded the shots, I realized they were totally awful. The screen had caused shadowing of the images. The center image was there but I could see shadow versions above and below it. Then I got to one with a beacon flashing and that showed exactly how the pattern of light was scattered. Based on what I see, I assume this is a diffraction effect. It is a useless shot but it is very interesting, which is why I am sharing it.
I got lucky on the timing for one thing during this whole adventure. I was sitting at home playing around with some images and decided I wanted to create a couple of prints. One was a print of the hummingbirds from the back yard and the other was a poster I decided to make of a bunch of lifeboat shots from our visit to the UK last year. My usual print outlet is Mpix so I created the files, uploaded them and sent the order. A few days later a large package arrived on the porch. Shortly before it arrived, I got a message from Mpix saying that they were suspending work as a result of the virus. They are based in Kansas so I guess it took a while to get to them. I am really happy with the prints and it reminded me of how much a physical print is better than looking at something on a screen. I will have to print more when they are back up and running.
A while back, I bought the Lightroom plugin Negative Lab Pro. This is a plugin that converts digital images of negatives to a positive image. I wrote about it in this post. A short time ago, the developer brought out a version 2.0 upgrade to the plugin. It turns out the upgrade was free for those of us who had bought the original plugin. I installed the upgrade to see how things have been improved.
Initially, I was very disappointed. The conversion process after the update seemed to be awful. Things looked dark and blotchy and efforts to unconvert and reconvert the images didn’t help. I was perplexed by this since a number of users had already exclaimed how happy they were with the update. If in doubt, follow the old approach of closing stuff and restarting it. I closed Lightroom and reopened it and whatever was wrong before was now fixed. The conversion worked very well. The controls have been expanded to give you a bit more to play with. The main benefit I am seeing so far is in the color balancing. Shots seem to have a more natural look to them without me having to work too hard on the color in the first place. Shots like those with a lot of sky and an odd colored aircraft will still test the algorithm a lot but otherwise it seems to have a good handle on things. It is also now able to handle frame edges without getting confused. You can tell it how much of the edge to ignore which is a useful feature, although I have got into the habit of cropping carefully already.
All in all, the upgrade seems to be a good one. Since it hasn’t cost me anything, that is a nice thing to have. It is also good to know that the developer is continuing to work on the product, which holds out the hope of further upgrades to come. I continue to recommend this to anyone that has been scanning their old negatives with a digital camera.
I was downloading shots from two cards into Lightroom when one of the downloads seemed to hang. I have seen this before and on those occasions, removing the card and starting again did the trick. This time it didn’t and, when I reinserted the card, the computer said I needed to reformat it. I thought I would try it back in the camera to see if that was okay but no joy. Time for RescuePro Deluxe again. I wrote about using this previously. I had an issue with it one time when I tried a recovery and the same thing happened this time. The card drive letter doesn’t show up (nor do any of the others).
There is a simple fix to see them all, which is to press the H key. However, I hadn’t made a note of that previously and couldn’t remember. Fortunately, their help desk gave me the code and the pictures were all swiftly recovered. (I jest. The program works well, but recovering everything it can find on a 64GB card and then working through that to find the files you really want rather than something from months previously is a bit of a slow process. Still, it is a lot better than the alternative of having no shots!)
I do like to experiment with alternative printing options and, when I heard an ad on the radio for FractureMe, a company that prints on glass, I was curious as to how it would look. I decided to make a print with them and see how it came out. Their approach is pretty much how it sounds. A print is created on glass with a white backing sheet to provide the base and that is it. Nothing tricky about preparing the files, so I uploaded an eclipse shot I had and placed the order. I did this just before Christmas and the lead time was three weeks, probably as a result of a bunch of holiday orders.
I sort of forgot about it for a while. When I got the shipping notification, I was quite excited until I realized it would take a week for the package to make its way across the country.
When it did arrive, I was quite impressed with the way it had been packed. The image was recessed into a cardboard mount that was supported by a thick sheet of corrugated card. All of this was wrapped together and then slotted into mounts on the edge of a far larger box. It was stable and well away from potential dings. It arrived in great shape along with a mounting screw for the wall.
The image looks great. The eclipse shot is not a standard type of image so I haven’t tested color reproduction with this, but it does look nice and the darkness of the shot seems to work well with the glass. The first thing I had to do was clean it. It seemed to have acquired a lot of dust – presumably in the packaging phase. Now it is time to find a spot to keep it long term. For now it is sitting on the mantelpiece.
I first read about focus stacking a long time ago and I have been meaning to try it for ages. The premise is to take a series of shots with the focus set in different positions throughout the scene and then to use software to blend the images together to create one image with focus all the way through the shot. This seemed like a simple thing to have a try with but I never got around to having a go. Then I came across a situation that looked like it might be a good example to try.
I was visiting a model show at the Museum of Flight. I was taking a few photos of some of the more expertly crafted models on display. I was shooting with a longer lens and using a relatively small aperture to try and minimize the shallow depth of field that you get when shooting small objects close up. I decided to shoot a model of a Fairey Gannet and the shallow depth of field triggered something in the deep recesses of my brain about focus stacking. Of course, I had not planned for this so no tripod and just an effort to get focus on different parts of the model without moving the camera too much.
I took the shots and got on with my visit. When I got home, I almost forgot about the stacking experiment but, fortunately, I did remember. I exported the images to Photoshop as layers in a single file. Then, since they were handheld, I did an Auto-Align action to get them in place. After that, Auto-Blend was selected. It seemed to realize that they were a blend stack rather than a panorama – quite clever – and the software quickly did its thing. Despite not taking many shots and doing it all handheld, the result came out pretty well. The top shot is the finished product while the lower two show the extremes of the focus range for the original shots. If I had managed a shot focused right on the back of the fin, the result may have been a bit better still.
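For anyone curious what the blend step is doing under the hood, the core idea can be sketched in a few lines. To be clear, this is not Photoshop's actual Auto-Blend algorithm, just a minimal grayscale stand-in of my own using numpy: estimate per-pixel sharpness with a Laplacian, smooth the sharpness maps so the choice is stable, and take each pixel from whichever frame is sharpest there.

```python
import numpy as np

def laplacian(img):
    """Absolute Laplacian response: high where the image has
    in-focus detail, near zero in out-of-focus areas."""
    img = img.astype(np.float64)
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4 * img[1:-1, 1:-1])
    return np.abs(lap)

def box_blur(a, r=3):
    """Simple moving-average blur used to smooth the sharpness maps."""
    k = 2 * r + 1
    p = np.pad(np.asarray(a, dtype=np.float64), r, mode='edge')
    out = np.zeros(a.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def focus_stack(images):
    """Blend aligned grayscale frames, keeping the sharpest pixel
    from each frame at every position."""
    sharpness = [box_blur(laplacian(img)) for img in images]
    best = np.argmax(np.stack(sharpness), axis=0)
    out = np.zeros_like(images[0])
    for i, img in enumerate(images):
        out[best == i] = img[best == i]
    return out
```

A real implementation would also align the frames first (the Auto-Align step) and feather the seams between regions, but the pick-the-sharpest-source logic is the heart of it.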
I have only been to the Oceana show once. I headed down there with my friends Ben and Simon. We weren’t terribly lucky with the weather. There was flying during the show but things were overcast and deteriorated as the show went on. The finale of the show was, naturally for a big Navy base, the Blue Angels. I was shooting with a 1D Mk IIN in those days and that was a camera that was not happy at high ISO settings.
The problem was, the light was not good and the ISO needed to be cranked up a bit. Amusingly, if you were shooting today, the ISO levels would not be anything that caused concern. Current cameras can shoot without any troubling noise at ISO levels that would have been unthinkable back then. However, I did learn something very important with this shoot. The shot above is one that I got as one of the solo jets got airborne. I used it as a test for processing.
I processed two versions of the image, one with a lot of noise reduction dialed in and one with everything zeroed out. I then combined them in one Photoshop image and used a layer mask to show one version in one half of the image and the other for the second half. When I viewed the final image on the screen, the noise in one half was awfully apparent. It was a clear problem. However, I then printed the image. When I did so, things were very different. If you looked closely, you could see a little difference. However, when you looked from normal viewing distances, there was no obvious difference between the two.
My takeaway from this is that viewing images on screens has really affected our approach to images. We get very fixated on the finest detail while the image as a whole is something we forget. We print less and less these days and the screen is a harsh tool for viewing.
The suspension bridge at Lions Gate in Stanley Park, Vancouver is a magnet for photographers. I was only passing through but, as we watched the traffic moving across the bridge, I was thinking about how to get a shot that didn’t have cars on it. The traffic was steady so there was no way I would get a clear moment. Indeed, while we were there, they changed the lights and reversed the center lane based on the traffic demand.
I didn’t have a tripod but I did decide to experiment with an alternative technique. This is best done using a tripod and a lot of exposures, but I figured I would go with about half a dozen shots that were pretty closely aligned. This didn’t work perfectly but it didn’t go too badly. When you get back to the computer, you open up Photoshop. Click on File > Scripts > Statistics and a dialog opens up. Select all of the files, change the stack mode at the top to Median, and check the option to align the images. Then send it on its way.
If the shots are good and there are enough, the algorithm will look at each shot and see the changing items – cars in this case – as the oddities. It will see what is consistent in each shot and get rid of the odd stuff. If you have it right, the cars will vanish. In this case, there were some overlaps and not enough shots but it still did a reasonable job.
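The median voting described above is simple enough to sketch directly. Assuming the frames are already aligned, the per-pixel median discards anything that only appears in a minority of frames while keeping what is consistent. A minimal numpy illustration (the function name is my own):

```python
import numpy as np

def median_stack(frames):
    """Given a list of aligned frames (H x W or H x W x 3 arrays),
    return the per-pixel median. Anything present in fewer than half
    of the frames - a passing car, a pedestrian - gets voted out,
    while the static scene survives in every pixel."""
    return np.median(np.stack(frames), axis=0)
```

This is also why overlaps are a problem: if a car sits on the same pixel in more than half of the frames, the median keeps it, which is why more exposures spread over time work better.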
I previously added a Zoomify image to a blog post. It was pointed out that, as a Flash-based format, it didn’t work on some mobile devices. That was using the default Photoshop installation. I have now found a way of generating an HTML5 version of the Zoomify output. This is a trial to see whether it works. Let me know in the comments. Click here to see the file.
I am in the process of experimenting with a new approach to scanning old photographs. For many years I have been using a Minolta Scan Dual III scanner. It can accept strips of negatives or slides and does a reasonable job of scanning them in. It is a bit labor intensive and is certainly not fast. Moreover, the scanner is not terribly reliable and it will often hang mid scan requiring me to restart it and close down the application before restarting that too. Since it takes a long time, I often get it running and go and do something else so I might miss the problem.
I do have another imaging tool that works very quickly. In fact I have several of them. These are my current digital cameras. I have bought a set of extension tubes to allow me to treat existing lenses as macro lenses. I have also acquired a small light pad. Cutting some card to shape means I can hold down any old negatives and view them through a hole with illumination from the light pad below. Mount a camera on an arm looking down on the pad and I now have a way to image the negative.
I am taking the images at my desk so I am able to tether the camera to the computer and use Lightroom to capture the images directly. This has actually provided me with an opportunity to drag out one of my older bodies that doesn’t get used anymore. My old 40D has been sitting on a shelf for a long time but it has come back into use for this project. It has more than enough resolution for this task. (Unfortunately, the batteries are now rather old and don’t hold a charge well so I am going to get an AC adapter from Amazon for ten dollars which should free me to scan as much as I want.)
I slide the negative into the holder and check the rough alignment through the viewfinder. Fortunately, although it took me a while to find it, the 40D does have Live View so I can make use of that to make sure the alignment is right. I use the trigger release in Lightroom’s tether dialog to take the shot to avoid disturbing the setup. If an image needs over or under exposure, I have to remember that it is a negative so I have to use exposure compensation in the opposite sense. The shot is imported straight into Lightroom when it is taken. The first thing that I need to do is reverse the tone curve to change the negative to a positive. A white balance correction will take out the color cast of the negative and I now have an image to work with. I have a preset for given film types that does this during the import process.
The image is now recognizable but not there yet. Now I have to do some manual manipulation to tidy it up. The sliders have to be used carefully in this case because they are now working in reverse as a result of the tone curve that I applied. This requires some thought. Exposure is still exposure but is reversed. Usually shots look a bit washed out so, what would normally be the Blacks slider is now the Whites. Shadows are handled with the Highlights and vice versa. It takes a bit of getting used to but it is not too hard after some practice. I tried using Auto Tone but it did not do a great job. I imagine the algorithms were not designed for operating in reverse!
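The reversed-slider behaviour makes sense once you look at what the conversion is doing numerically. The following is only a toy sketch of a negative-to-positive inversion, not Lightroom's tone curve or any plugin's actual algorithm; the per-channel gains (hypothetical values) stand in for the white balance step that neutralises the film base color cast:

```python
import numpy as np

def invert_negative(img, channel_gains=(1.0, 1.0, 1.0)):
    """Toy negative-to-positive conversion. img is a float RGB array
    scaled to [0, 1]. First apply per-channel gains to remove the
    film base cast, then flip the tones around the range."""
    balanced = np.clip(img * np.asarray(channel_gains), 0.0, 1.0)
    return 1.0 - balanced
```

Because of that final flip, any adjustment applied before the inversion acts in reverse on the result: brightening the negative darkens the positive, which is exactly why the Blacks slider ends up behaving like the Whites slider and the Shadows like the Highlights.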
With everything set up, I can work through a shoot very quickly. Choosing which ones to ignore and reshooting if something doesn’t look right can be done pretty much on the fly. Is the image quality great? It’s okay but not amazing. However, many of the originals are not that great either. For the majority, it actually does a pretty decent job and sets me up for something that I can do more work on if I need to. It is a big improvement on my previous approach and now I will make quick scans when I need them rather than dreading the time involved and avoiding all but the must-have shots.