The most recent update for Adobe Photoshop includes a function called Super Resolution. Many third-party plugins and standalone image-processing packages include tools to increase the resolution of images. Photoshop has long had a basic way to increase resolution, but it wasn't very clever and could introduce odd artifacts. I had been advised to use it in small increments rather than one big increase to reduce the problems, but I hardly ever used it.
The new addition to Photoshop is apparently based on machine learning. If the PR is to be believed, they trained it on pairs of high-res images and low-res versions of the same image, so it learned to infer what might be there in the small shot from what it knew was in the large one. I don't know what the other packages aim to achieve, but this new tool in Photoshop has been doubling the resolution of the shots I have played with. Because both dimensions double, you end up with a file four times the size.
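To put numbers on that: since both linear dimensions double, the pixel count (and hence, roughly, the file size) quadruples. A quick sketch, using example dimensions rather than my actual files:

```python
# Doubling each linear dimension quadruples the pixel count, which is
# why the output file comes out roughly four times the size.
width, height = 6000, 4000                 # example original dimensions
new_width, new_height = width * 2, height * 2

original_pixels = width * height           # 24,000,000
upscaled_pixels = new_width * new_height   # 96,000,000

assert upscaled_pixels == 4 * original_pixels
```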
I have tried it out on a couple of different shots where the resolution was okay but not terribly large and where a higher res shot might prove useful. So far the tool is only available through Camera Raw in Photoshop – not Lightroom – and you need to update Lightroom in order to import the DNG files it produces. There is a suggestion that Lightroom will get this capability in time, which would be more user-friendly from my perspective.
My computer is not cutting edge so it takes a little while to process the images. It forecast five minutes but completed the task far faster than that. In the examples here, I attach a 200% version of the original shot and a 100% version of the new file. There seems to be a definite benefit in the output file. I wouldn't describe it as earth-shattering, but it is useful when the original file is sharp enough, and I might have a need for this for a few items over time.
I decided to try a little experiment with my slide scanning. Having scanned a bunch of slides and negatives using a DSLR and macro lens setup, I had come across a few slides where the image just didn't work out very well. A big part of this is that the original slides were not well exposed, so I was starting from a less than ideal place. However, when editing the raw file, I found I couldn't get a balance of exposures that I liked, despite slides supposedly having a very narrow dynamic range.
Since I could see some detail in the original slide, I figured an HDR approach might be of use. I took three shots of the slide with differing exposure – an inconvenient thing to do when tethered since the AEB function didn’t seem to work on the 40D in that mode – and then ran the HDR function in Lightroom on the three exposures. Despite the borders possibly confusing the algorithm, it seemed to do a pretty reasonable job of getting more of the image in a usable exposure range. This is not a great image and would not normally be making it to the blog but, as an example of getting something more out of a problem shot, I thought it might be of interest to someone.
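Lightroom's HDR merge is a black box, but the general principle can be sketched: blend the bracketed exposures, weighting each pixel by how well exposed it is in each frame. The function names and the mid-grey weighting below are my own illustration of that idea, not Lightroom's actual algorithm:

```python
import math

# Toy sketch of the general idea behind HDR merging, NOT Lightroom's
# actual algorithm: blend the exposures, weighting each pixel by how
# close it sits to mid-grey (i.e. how well exposed it is in that frame).

def well_exposedness(v, mid=0.5, sigma=0.2):
    """Weight peaks at mid-grey and falls away towards black and white."""
    return math.exp(-((v - mid) ** 2) / (2 * sigma ** 2))

def merge_exposures(exposures):
    """exposures: list of images, each a list of pixel values in 0..1."""
    merged = []
    for pixels in zip(*exposures):
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights)
        merged.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return merged

# Three toy "exposures" of the same three pixels: under, normal, over.
under  = [0.02, 0.10, 0.45]
normal = [0.10, 0.40, 0.90]
over   = [0.35, 0.80, 0.99]

merged = merge_exposures([under, normal, over])
```

Each merged pixel leans towards whichever bracket rendered it closest to mid-tone, which is why shadow detail from the bright frame and highlight detail from the dark frame both survive.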
I watched a video on YouTube about a way to process shots taken in low light at high ISOs to improve the noise performance. I wasn't particularly interested in the approach until I was down on the shore as the sun was going down, with a long lens on the camera. I figured this might be a good time to try it out. The technique relies on shooting a lot of frames. Nothing in the scene can be moving for it to work but, for a static scene, the approach can be used.
Shoot as many shots as you can. Then import them into Photoshop as layers. Use the align function to make sure that they are all perfectly aligned, then use the statistics function to calculate the mean of the stack. You can do this a couple of ways in Photoshop: convert the layers to a Smart Object and apply the Mean stack mode, or run it through the Statistics script. The averaging takes a lot of the noise out of the shot and, with enough images, you can make it effectively disappear. I wasn't prepared to take that many shots, but I tried it with a reasonable number of images. The whole image isn't really of interest; instead, I include one of the images cropped in and the processed image similarly cropped to allow you to compare.
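To see why the mean calculation works, it helps to remember that each frame is the same static scene plus random sensor noise, and averaging N frames shrinks that noise by roughly 1/sqrt(N). A toy single-pixel simulation (the values and function names are just illustrative):

```python
import random
import statistics

# Rough sketch of why mean stacking cuts noise: every frame records the
# same static scene plus random sensor noise, and averaging N frames
# shrinks that noise by roughly 1/sqrt(N). This simulates one pixel;
# a mean stack does the same thing for every pixel at once.

true_value = 100.0    # the "real" brightness of one static pixel
noise_sigma = 10.0    # per-shot sensor noise (assumed Gaussian here)

def noisy_shot():
    """One noisy reading of the pixel, as a single frame would record it."""
    return true_value + random.gauss(0, noise_sigma)

random.seed(7)
single_frame = noisy_shot()                                # noise ~ 10
averaged_16 = statistics.mean(noisy_shot() for _ in range(16))  # noise ~ 2.5
```

With 16 frames the residual noise is about a quarter of a single frame's; getting it to "effectively disappear" takes a lot more shots, since halving the noise again means quadrupling the frame count.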
As the sun starts to set, the clouds that are a regular feature of the Pacific Northwest start to have a benefit. They can be lit in all sorts of interesting ways and it is slightly lazy but still worthwhile to get shots of them. The levels of contrast in the shot are fine with the naked eye but a bit of a stretch for a camera sensor. It can do a decent enough job but it is the sort of thing where bracketing for HDR might give you more to work with so I did give that a go.
I was taking some shots for work recently where the sky had some nice cloud detail and the foreground was in deep shade. Since the pictures were needed for a project, I covered my bases and shot some brackets to allow for HDR processing later. Some people hate HDR, but I have always looked to use it to get a shot that better reflects the human eye's ability to deal with extremes of contrast. With a wide range of light levels in a shot, HDR can give you a more usable image.
However, when I was processing the shots, I was struck by how I could use the middle exposure alone and, with some judicious adjustment of exposure, shadows and highlights, get much the same result as the HDR merge provided. The raw files seem to have enough latitude for processing that going to the bother of taking and processing the brackets hardly seemed worth it. There are still situations where the range of exposure is so wide – outdoor sunlight and shady interiors, say – that it is probably necessary to bracket and process later. However, for a lot of the situations I used to use HDR for, there seems little point. How many of you still shoot HDR?
Adobe periodically updates the processing algorithms that are used by Lightroom and Photoshop. Each update provides some improvements in how raw files are processed and it can be good to go back to older shots and to see how the newer process versions handle the images. I find this particularly useful for images shot in low light and with high ISO.
I have some standard process settings I use but have also experimented with modified settings for high ISOs and the higher noise levels that come with them. I went back to some night launch shots from an old Red Flag exercise and had a play with the images. The E-3 launched as the light was going down, but there was still some illumination so that shot didn't need much work.
The KC-135 and B-1B shots were a different story and were at high ISOs and with very little light. I was able to update the process version and apply some new settings I had worked out since the original processing and it resulted in some pretty reasonable outputs considering how little light there was to work with.
With the ferries coming and going from the terminal at Orcas, I had plenty of chances to take photos. I did get standard shots of the boats in low light. They are not easy to shoot since they are constantly moving: no long exposures at low ISOs are possible, so it is high ISO and the associated noise. However, I did decide to experiment with some long exposures and blending of shots. The boats make a curving approach to the terminal, which I thought might make a nice long exposure. It worked okay, but the curve is a bit disguised by my being too low down to really appreciate it. Still, it was fun to try.
Winter in the PNW does not mean reliable conditions for photographing planes. If the weather is bad, you might decide it isn’t worth going out. If it is raining and threatening to rain harder, there is a strong possibility you would skip a shot opportunity. However, 727s are getting pretty rare these days so that seems worthy of a trip out.
The weather was unpleasant when it made its approach, but not as bad as it got a short while later. I went with my normal approach for shooting in really bad conditions and deliberately overexposed quite heavily. I include a couple of edits. For the main image, I blended two different process versions in Photoshop to get the combination that best reflects how the shot looked through the viewfinder. The other edit is a straightforward Lightroom edit where the angle and the light suited it.
I’m sure a bunch of my relatives will look away for this post. Maybe they aren’t fans of focus stacking, but it could be the spiders that put them off. My macro lens has been out a lot during the pandemic since it provides something a bit different to photograph close to home. In fact, I have got so used to having it available that, when I am out with a normal lens and come across something small and interesting, I am a bit frustrated to realize I can’t get a close-up shot.
The problem with the lens is that it is not a very advanced one and the autofocus on it is pretty crap. When I am hand holding the lens and both the subject and I are moving, things get a little unpredictable. We had a few spider webs in the backyard with the owners sitting in the middle. The afternoon sun provided great illumination, so I figured I should give it a go. I tend to switch to manual focus and move to get the shot but, with the breeze moving the web a lot, things are pretty tricky. This is what prompted me to try cheating.
I figured that focus stacking does a good job of increasing the area in focus, since it aligns the images and makes use of whatever is already sharp in each one. The plan was to get straight on to the spider, stay reasonably still at roughly the right focus point, let the web move towards and away from me, and fire off a burst of shots hand held. Then I could discard the ones with nothing in focus and let Photoshop work on the remainder.
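The principle behind the blend can be sketched quite simply: for each position, keep the value from whichever frame is locally sharpest. This 1-D toy example is my own illustration of that idea, not what Photoshop's Auto-Blend actually does (which adds proper alignment and masking on top):

```python
# Toy 1-D sketch of the core focus-stacking idea: for each position,
# keep the value from whichever frame has the highest local contrast
# (a crude stand-in for "in focus"). Real stacking tools add image
# alignment and smooth masking on top of this principle.

def local_contrast(signal, i):
    """Absolute difference to the neighbours as a crude sharpness score."""
    left = signal[max(i - 1, 0)]
    right = signal[min(i + 1, len(signal) - 1)]
    return abs(signal[i] - left) + abs(signal[i] - right)

def focus_stack(frames):
    """Pick, per position, the value from the locally sharpest frame."""
    out = []
    for i in range(len(frames[0])):
        best = max(frames, key=lambda f: local_contrast(f, i))
        out.append(best[i])
    return out

# Frame A is sharp on the left (strong edges) and mushy on the right;
# frame B is the opposite, as if focus drifted between shots.
frame_a = [0, 10, 0, 10, 5, 5, 5, 5]
frame_b = [4, 5, 4, 5, 0, 10, 0, 10]

stacked = focus_stack([frame_a, frame_b])  # sharp detail from both halves
```

This is also why a complete set matters: any region that is soft in every frame has no sharp version to borrow from, which is exactly the gamble with the hand-held, web-in-the-breeze approach.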
It isn’t a perfect solution and some weird things happen at the edges of the frame but the center works out pretty well and you can crop in a little to address the edges. I was quite pleased with the outcome to be honest. It is making the best of a few bad elements but it did do quite well. You don’t get to control what is in focus for each shot so getting a complete set to work with is unlikely but overall, not a bad experiment.
Summer weather means lots of sunny days but also lots of heat haze. I was at Boeing Field one sunny afternoon and there were two jets parked across the field that I wanted shots of – one an Illinois ANG KC-135R and the other a Falcon 20. Looking through the viewfinder, both of them were shimmering in the heat haze that a warm and reasonably humid day brings. This is the downside of summer in the Pacific Northwest.
Not long before I had watched a video on YouTube about photographing Saturn through a telescope. The image of Saturn was all over the shop but they were using a software technique to take multiple images and build a more stable and sharper final image. It worked reasonably well and this got me thinking about how to do something similar. In the past I have used Photoshop to blend together multiple images to remove the moving elements of a shot like people or traffic. I wrote about it in this post.
I thought I would see if something similar could be done. I set the frame rate to high and steadied myself before firing off a few seconds of shots. I wanted a lot of images to give the statistical analysis the best chance of finding the right solution. Importing these into Photoshop as layers and auto-aligning them allowed the analysis tool to do its thing. I don't think the result is quite what I want, and I may experiment with different analysis methods – median versus mean, for example – to see which is most effective. However, there is clearly a smoothing out of the distortion and, if I needed to get a shot on a hazy day when there wouldn't be another chance, I would definitely fall back on this approach to see whether it produced something more usable.
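For what it's worth, the median-versus-mean question comes down to robustness: a median ignores the occasional badly displaced frame, while a mean gets pulled by it. A toy single-pixel example (the values are invented for illustration):

```python
import statistics

# Sketch of the mean-vs-median question for a heat-haze stack: each frame
# gives a slightly displaced reading of the same pixel, with the odd frame
# badly shifted by the shimmer. The median ignores those outliers; the
# mean gets pulled by them.

# Ten readings of one pixel across a burst; one frame is badly distorted
# (180 instead of ~100).
readings = [98, 101, 100, 180, 99, 102, 97, 100, 100, 103]

mean_value = statistics.mean(readings)      # 108.0 - pulled by the outlier
median_value = statistics.median(readings)  # 100.0 - stays on the stable value
```

That suggests the median should cope better when only a few frames are badly distorted, while the mean may smooth more evenly when the shimmer displaces every frame a little; trying both on a real stack is the only way to know.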