The most recent update for Adobe Photoshop includes a function called Super Resolution. Many third-party plugins and standalone image-processing tools offer ways to increase the resolution of images. Photoshop used to have a basic way to increase resolution, but it wasn’t very clever and could introduce odd artifacts. I had been advised to use it in small increments rather than one big increase to reduce the problems, but I hardly ever used it.
The new addition to Photoshop is apparently based on machine learning. If the PR is to be believed, they trained it on loads of high-res images paired with low-res versions of the same shots, and the machine learning came to recognize what might be there in the small shot from what it knew was in the large one. I don’t know what the other packages aim to achieve, but this new tool in Photoshop has been doubling the linear resolution of the shots I have played with. You end up with a file four times the size as a result of this doubling of both dimensions.
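As an aside on the file-size arithmetic: doubling each linear dimension quadruples the pixel count. Here is a toy 2x upscale in plain Python – simple nearest-neighbor duplication, nothing like Adobe’s machine-learning approach – purely to illustrate the numbers:

```python
# Toy nearest-neighbor 2x upscale (NOT Adobe's ML method), just to show
# why doubling each dimension gives four times as many pixels.
def upscale_2x(image):
    out = []
    for row in image:
        doubled = [px for px in row for _ in range(2)]  # double the width
        out.append(doubled)
        out.append(list(doubled))  # repeat the row to double the height
    return out

small = [[10, 20],
         [30, 40]]
big = upscale_2x(small)

pixels_before = len(small) * len(small[0])
pixels_after = len(big) * len(big[0])
print(pixels_before, pixels_after)  # pixel count quadruples
```

The real tool infers plausible detail rather than duplicating pixels, but the storage cost is the same either way: four times the data for a 2x linear increase.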
I have tried it out on a couple of different shots where the resolution was okay but not terribly large, and where a higher-res shot might prove useful. So far the tool is available through Camera Raw in Photoshop – not Lightroom. You do need to update Lightroom in order to import the DNG files it produces. There is a suggestion that Lightroom will get this capability in time, which would be more user-friendly from my perspective.
My computer is not cutting edge so it takes a little while to process the images. It forecast five minutes but completed the task way faster than that. In the examples here, I attach a 200% version of the original shot and a 100% version of the new file. There seems to be a definite benefit in the output file. I wouldn’t describe this as earth-shattering, but it is useful if the original file is sharp enough, and I might have a need for it for a few items over time.
I decided to try a little experiment with my slide scanning. Having scanned a bunch of slides and negatives using a DSLR and macro-lens setup, I had come across a few slides where the image just didn’t seem to work out very well. A big part of this is that the original slides were not very well exposed, so I was starting from a less than ideal place. However, when editing the raw file, I found I wasn’t able to get a balance of exposures that I liked, despite slides supposedly having a very narrow dynamic range.
Since I could see some detail in the original slide, I figured an HDR approach might be of use. I took three shots of the slide with differing exposures – an inconvenient thing to do when tethered, since the AEB function didn’t seem to work on the 40D in that mode – and then ran the HDR function in Lightroom on the three exposures. Despite the borders possibly confusing the algorithm, it seemed to do a pretty reasonable job of getting more of the image into a usable exposure range. This is not a great image and would not normally make it to the blog but, as an example of getting something more out of a problem shot, I thought it might be of interest to someone.
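Lightroom’s HDR merge is a black box, but the underlying idea can be sketched as exposure fusion: weight each bracket by how close each pixel sits to mid-gray, so clipped highlights and blocked shadows contribute almost nothing. This is a minimal one-dimensional Python sketch with made-up pixel values, not what Lightroom actually does:

```python
# Hypothetical 1-D "images": three brackets of the same four pixels,
# normalized 0..1. The dark frame keeps highlight detail, the bright
# frame opens the shadows.
under = [0.05, 0.20, 0.45, 0.70]   # -2 EV: highlights preserved
normal = [0.15, 0.45, 0.80, 0.98]  #  0 EV
over = [0.40, 0.85, 1.00, 1.00]    # +2 EV: shadows opened, highlights clipped

def weight(v):
    # Favor pixels near mid-gray (0.5); clipped pixels get ~zero weight.
    return max(1e-6, 1.0 - abs(v - 0.5) * 2.0)

def fuse(*brackets):
    fused = []
    for samples in zip(*brackets):
        w = [weight(s) for s in samples]
        fused.append(sum(s * wi for s, wi in zip(samples, w)) / sum(w))
    return fused

result = fuse(under, normal, over)
print([f"{v:.2f}" for v in result])
```

The fused pixel always lands between the darkest and brightest bracket, and the clipped highlight in the bright frame gets almost no say, which is exactly the behavior you want from the merge.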
Nancy and I had been discussing what pictures to add to the walls in the house. We were trying to find something that was a nice layout and also could include images from a variety of places. We settled on the Collagewall from MPix. I have used MPix for a lot of photo printing requirements over the years so was happy to give this product a go.
They have a variety of configurations that you can choose from, with varying dimensions and layouts, and you can pick your images to fit different aspect ratios. The one we went with was 4.5’ across to fit a large wall space, and it included some large and small square-format images with a couple of panoramic shots and one thin vertical image. I did all of the selections and formatting in Lightroom and then just dragged and dropped them into the configuration tool. It was very straightforward.
The whole thing was printed and shipped quickly and would have been with us shortly thereafter had it not been for a winter storm that meant the package got to spend a week in Salt Lake City. However, it finally arrived and we could install it. There is a paper template provided to assist in putting it on the wall. You tape that in place, checking for location and level, before getting to work. A series of pins needs to be inserted into the surface of the wall. Using the template, you can make an initial pin hole with one of the pins without pushing it all the way in. Then, when all locations have an initial mark, the template can be removed and saved for any future installation.
Then add the full set of pins by pushing them all the way into the initial holes previously made. This results in a grid of pins covering the full area of the finished work. Slots on the back of the prints then slide over the heads of the pins. For some of the small prints and the panos, adhesive foam pads are added to provide some stability. The larger prints are stabilized sufficiently by the pins. Then you slot everything into place.
From start to finish, it was probably little more than half an hour to put it up. A significant portion of that was making sure the template was exactly where we wanted it and properly leveled. Nancy pushed out each print while I was inserting the pins. Finishing it off was very simple. The nature of the installation means changing a print out for a replacement would be very easy, and the prints include a folding element that can be inserted in the back to make each one stand on its own if required. I’m really happy with the way it has come out and might do a smaller installation for another location in the house. In truth, the longest part of this is choosing the right shots to include.
I watched a video on YouTube about a way to process shots taken in low light with high ISOs to improve the noise performance. I wasn’t particularly interested in the approach until I was down on the shore as the sun was going down and I was using a long lens. I figured this might be a good time to try it out. The approach is to shoot a lot of shots. You can’t have anything moving in the shots for this to work but, if it is a static scene, the approach can be used.
Shoot as many shots as you can. Then import them into Photoshop as layers. Use the align function to make sure that they are all perfectly aligned and then use the statistics function to do a mean calculation of the image. You can do this a couple of ways in Photoshop: you can make a smart object and then set its stack mode, or you can process through Statistics. The averaging function takes a lot of the noise out of the shot. If you have lots of images, you can make it effectively disappear. I wasn’t prepared to take that many shots, but I tried it with a reasonable number of images. The whole image isn’t really of interest. Instead, I include one of the images cropped in and the processed image similarly cropped to allow you to compare.
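The statistics behind this are simple: random sensor noise averages out across frames, falling by roughly the square root of the number of shots. A quick pure-Python simulation of a single pixel – the brightness and noise numbers are invented, this is not Photoshop’s code – shows the effect:

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0   # the "real" brightness of one pixel in the scene
NOISE_SIGMA = 10.0   # per-shot sensor noise (hypothetical numbers)
NUM_SHOTS = 25

# Simulate the same pixel captured in many aligned frames,
# each with independent random sensor noise.
shots = [TRUE_VALUE + random.gauss(0, NOISE_SIGMA) for _ in range(NUM_SHOTS)]

# The "mean" stack mode simply averages the frames per pixel.
averaged = statistics.mean(shots)

# Averaging N frames cuts random noise by roughly sqrt(N),
# so 25 shots should bring the error down to around sigma / 5.
single_error = abs(shots[0] - TRUE_VALUE)
stack_error = abs(averaged - TRUE_VALUE)
print(f"single-shot error: {single_error:.2f}")
print(f"stacked-mean error: {stack_error:.2f}")
```

That square-root relationship is why the noise only “effectively disappears” with lots of frames: going from 25 to 100 shots buys you just one more halving.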
Occasionally I will get aircraft heading into Boeing Field coming right by the house. Late Friday afternoon, two Boeing test jets were coming my way. One was the first 777X and the other was the first 737 MAX 7. The usual route brings them just slightly north of the house, so I was ready. However, the MAX was heading just slightly south of the normal track and looked like it might go the other side of the house. At the last minute, I realized it would, and ran to the other side.
I got the window open but didn’t have time to remove the screen. I thought it would take out some light, but figured the large aperture of a big lens would just blur out the screen mesh since it was so close. Through the viewfinder, things looked pretty good. However, when I downloaded the shots, I realized they were totally awful. The screen had caused shadowing of the images: the central image was there, but I could see shadow versions above and below. Then I got to one with a beacon flashing, and that showed exactly how the pattern of light was scattered. Based on what I see, I assume this is a diffraction effect. It is a useless shot but a very interesting one, which is why I am sharing it.
I was taking some shots for work recently where the sky had some nice cloud detail and the foreground was in a lot of shade. Since the pictures were needed for a project, I was covering my bases and shot some brackets to allow me to do some HDR processing later. Some people hate HDR, but I have always looked to use it to get a shot that better reflects the human eye’s ability to deal with extremes of contrast. With a wide range of light levels in a shot, HDR can give you a more usable image.
However, when I was processing the shots, I was struck by how I could use the middle exposure alone and, with some helpful adjustment of exposure, shadows and highlights, I was able to get much the same sort of result as the HDR image provided. The raw files seem to have enough latitude for processing that going to the bother of taking and processing the HDR image hardly seemed worth it. There are still situations where the range of exposure is so wide – outdoor sunlight and shady interiors – that it is still probably necessary to bracket and process later. However, for a lot of the situations I used to use HDR for, there seems little point. How many of you still shoot HDR?
Adobe periodically updates the processing algorithms that are used by Lightroom and Photoshop. Each update provides some improvements in how raw files are processed and it can be good to go back to older shots and to see how the newer process versions handle the images. I find this particularly useful for images shot in low light and with high ISO.
I have some standard process settings I use but have also experimented with modified settings for use with high ISOs and the higher noise levels that come with them. I got to some night launch shots from an old Red Flag exercise and had a play with the images. The E-3 launch was actually as the light was going down but it still had some illumination so it didn’t need much work.
The KC-135 and B-1B shots were a different story and were at high ISOs and with very little light. I was able to update the process version and apply some new settings I had worked out since the original processing and it resulted in some pretty reasonable outputs considering how little light there was to work with.
I recently bought some replacement valve cores for my bicycle tires. I noticed that part of one core was bent, so decided to replace it. It is a quick job to change the core over and, prior to throwing the old core away, I figured I would play with the macro lens. I first took a picture of the still-assembled core, trying to angle it to show how badly bent the part was. Then I figured I could take the core apart altogether. Another focus stack and I could show the parts separated. I love the detail you get of the metal surfaces when you shoot macro.
Summer weather means lots of sunny days but also means lots of heat haze. I was at Boeing Field one sunny afternoon and there were two jets parked across the field that I wanted shots of – one was an Illinois ANG KC-135R and the other was a Falcon 20. Looking through the viewfinder, both of them were shimmering in the heat haze that a warm and reasonably humid day brings. This is the downside of summer in the Pacific Northwest.
Not long before I had watched a video on YouTube about photographing Saturn through a telescope. The image of Saturn was all over the shop but they were using a software technique to take multiple images and build a more stable and sharper final image. It worked reasonably well and this got me thinking about how to do something similar. In the past I have used Photoshop to blend together multiple images to remove the moving elements of a shot like people or traffic. I wrote about it in this post.
I thought I would see if something similar could be done. I put the frame rate on high and steadied myself before firing off a few seconds of shots. I wanted a lot of images to provide the best opportunity for the statistical analysis to find the right solution. Importing these into Photoshop as layers and then auto-aligning them allowed the analysis tool to do its thing. I don’t think the result is quite what I want and I may experiment with different analysis methods – median versus mean, for example – to see which are most effective. However, there is clearly a smoothing out of the distortion and, if I needed to get a shot on a hazy day when there wouldn’t be another chance, I would definitely fall back on this approach to see whether it produced something more usable.
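The median-versus-mean question largely comes down to outliers: heat haze occasionally displaces detail badly in a few frames, and a median simply ignores those frames while a mean is pulled toward them. A small pure-Python simulation of one pixel – the numbers are invented for illustration – makes the difference visible:

```python
import random
import statistics

random.seed(7)

TRUE_VALUE = 80.0  # hypothetical stable value for one pixel

# Most frames sit near the true value, but heat haze occasionally
# shoves displaced detail onto this pixel, producing outlier samples.
frames = [TRUE_VALUE + random.gauss(0, 2) for _ in range(40)]
frames += [TRUE_VALUE + 60, TRUE_VALUE + 45]  # two badly distorted frames

mean_result = statistics.mean(frames)
median_result = statistics.median(frames)

# The median shrugs off the rare distorted frames; the mean is dragged
# toward them - which is why the two stack modes behave differently.
print(f"mean:   {mean_result:.2f}")
print(f"median: {median_result:.2f}")
```

On this logic, a median stack should cope better when only a handful of frames are badly distorted, while a mean may smooth more gently when every frame is mildly wobbly, so it is worth trying both.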
While scanning through some images, one of the shots that showed up in my catalog was an HDR processing of some shots of a US Army Chinook. It had been processed with a plugin that I had previously experimented with. I thought it looked over-vibrant, but I was impressed with the way the dark interior of the helicopter had shown up while the outside was also well lit. I decided to have another go at processing the images.
I used Lightroom initially to do the processing. It came out surprisingly well and looked not unlike the outcome from the plugin. However, there was some ghosting on people in the shot and there was a lot of chromatic aberration. I have noticed Lightroom doing a worse job of this than Photoshop, so I decided to try HDR Pro in Photoshop as well, using Camera Raw for the tone mapping. The outcome was very similar from an overall perspective. However, the ghosting was virtually eliminated and the aberration was not apparent either. Photoshop clearly is still a better bet than Lightroom.