I will freely admit this is not my idea. It is something that I read about recently on an astrophotography post that caught my attention. I was about to make a visit to a museum where I thought I might end up taking some interior images in confined space. My 16-35mm lens was probably going to do the trick but I wondered whether the fisheye might be a better bet if things were really tight. My only concern was that distortion is such a feature of that lens that the results might not be worthwhile.
Then I came across the aforementioned article and it talked about shooting panos with a fisheye. The article was concerned with wide sky shots for astrophotography, but I thought it might work for me too. Supposedly, stitching together multiple fisheye shots takes out a lot of the distortion while still giving you the wide reach. I decided to experiment with this in advance to see if it worked.
I played with this indoors, taking a sequence of shots with good overlap between them and making sure to catch as wide an image as possible. I was using the fisheye with full-frame coverage rather than the circular version of the image. In Lightroom, I had to turn off the profile correction, since that alters the shots a lot, and then set the pano function to work. It combined the images very easily and, sure enough, the verticals across the shot were all vertical and not distorted at all. This could be something I use a lot in the future when working in confined spaces. I will need to test it on closer subjects first, since I suspect those will be a lot more demanding for the alignment in pano stitching.
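As a rough planning aid, the number of frames needed for a given sweep can be sketched from the overlap between shots. The 140-degree horizontal field of view below is an illustrative assumption for a full-frame fisheye, not a measured figure for my lens:

```python
import math

# Rough shot-count planning for a fisheye pano: how many frames cover a
# target horizontal sweep for a chosen overlap fraction. The field-of-view
# number is an illustrative assumption, not a measured lens spec.

def shots_for_pano(target_deg, shot_fov_deg, overlap=0.4):
    """Frames needed so that neighbouring shots share `overlap` of their view."""
    step = shot_fov_deg * (1 - overlap)  # fresh coverage added per frame
    return math.ceil((target_deg - shot_fov_deg) / step) + 1

# A 300-degree sweep with a ~140-degree fisheye and 40% overlap.
print(shots_for_pano(300, 140, 0.4))  # -> 3
```

More overlap costs more frames but gives the stitcher more to work with, which seems worth it for a lens this distorted.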
As with all software tools, Lightroom has been constantly evolving since the initial release. If I were to see the original version of the software, I would probably be shocked at how limited it was. I do come across old edits and, when I convert them to the latest develop presets I have created, it is shocking how much of a change can result. One area that has gone through various updates over time is the tools for healing or cloning. They have been okay but definitely had limitations – not least pulling in odd artifacts from other areas.
A recent addition to the tools has been Generative Remove. This is an AI driven method for selecting and removing elements of the image. I try to do any of this before any cropping because I have previously found cropping to confuse the healing tools by leaving stuff out of sight that it tries to reincorporate. I don’t know whether this matters for Generative Remove or not, but I have stuck with the same sequence just in case.
The selection process is really simple. Brush around an area and it will fill it in. You can refine the selection with brushes to add or remove areas. I have used it a lot to remove power lines, where a click at one end and a shift-click at the other gives you a quick straight line. Then let it do its thing. It will provide three options for the solution, and you can decide whether one of them works or make it try again. Generally, I have found the results to be very good, with no obvious artifacts as a result of the healing. No doubt they will continue to refine the process, but I think it is a big step forward in cleaning up elements of images that you don’t want and is now something I will consider for images that I would otherwise have cast aside.
A recent post was focused on some shots from BFI when I was dropping the shutter speed. I had also been playing with this one gloomy morning at Seattle Tacoma International a while back. I was waiting for a specific movement but was passing time with some of the more regular movements. Since they weren’t the most exciting subjects, I tried dropping the shutter speed down to make the motion more apparent. They were really dramatic shutter speeds, but it made a slightly more interesting shot than would otherwise have been the case.
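For a sense of scale, the blur you get from dropping the shutter can be estimated from the subject's speed, its distance, and the focal length. All the numbers here are illustrative assumptions, not the actual settings from that morning:

```python
# Hypothetical numbers: how long a motion streak a slow shutter produces
# for a moving aircraft shot from a static camera. Every value below is
# an assumption for illustration, not a measurement from the shoot.

def blur_pixels(subject_speed_mps, distance_m, focal_length_mm,
                shutter_s, pixel_pitch_um):
    """Approximate blur streak length in pixels for a static camera."""
    # Distance the subject covers during the exposure (metres).
    travel_m = subject_speed_mps * shutter_s
    # Scale onto the sensor via the lens magnification f/d (thin-lens approx).
    blur_mm = travel_m * 1000 * (focal_length_mm / (distance_m * 1000))
    return blur_mm * 1000 / pixel_pitch_um

# A taxiing aircraft at ~10 m/s, 100 m away, 400 mm lens, 1/30 s, ~6 µm pixels.
print(round(blur_pixels(10, 100, 400, 1/30, 6)))  # -> 222
```

A couple of hundred pixels of streak is plenty to make motion obvious, which matches how dramatic those slow-shutter frames looked.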
One of the features that was added to the Canon EOS R3 via a firmware update was a ridiculously high frame rate mode. The fastest frame rate in normal shooting is 30fps (which is clearly ridiculous itself for anyone who has been photographing for a long time). The extra mode comes with limitations. Once you start shooting, autofocus and exposure monitoring are suspended, so you get a lot of shots with the same settings. However, this does allow you to get 194fps!!! Yes, that is not a typo. It will only do this for a maximum of 50 frames, but that is raw capture – not jpeg. You get to select how many frames are taken, which I have to admit I didn’t realize until recently. I was shooting with a limit of 10 frames for quite a while and wondering why the bursts were so short. I’ve fixed that now.
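The arithmetic on that burst shows why the 10-frame limit was so restrictive (frame rate and frame counts as quoted above):

```python
# Quick arithmetic on the EOS R3 high-speed burst mode, using the figures
# quoted in the post: 194 fps with a maximum of 50 raw frames per burst.

def burst_seconds(frames, fps):
    """How long the capture window lasts for a given frame budget."""
    return frames / fps

full = burst_seconds(50, 194)   # the maximum frame count
short = burst_seconds(10, 194)  # the 10-frame limit left set by accident
print(f"{full:.3f}s vs {short:.3f}s")
```

Even the full 50 frames spans only about a quarter of a second, so at 10 frames the window was roughly a twentieth of a second – no wonder the moment kept getting missed.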
There are relatively few times when this mode is actually useful. The viewfinder does black out when you use it so, if you are tracking something, a little bit of predictive guesswork is in order. If you were shooting a baseball pitch being hit, this could be pretty handy. I decided to use it on the Blue Angels pair crossing during Seafair to see how things work out. The answer is pretty good. I include a sequence of shots so you can see what even this frame rate gives you for two fast jets head on to each other. A limited tool but one that could be utilized. I have also been using it for very low shutter speed experimentation but that will be another post.
When Canon announced the RF 200-800 lens, I was mildly interested but not too bothered by it. However, in an example of how easily a weak mind can be influenced, when I watched some reviews by those that had used the lens, I started to be more curious. The focal length range was always of interest, but the aperture range had initially put me off. The reviewers suggested that the excellent ISO performance of modern mirrorless cameras meant this wasn’t an issue. Also, while not in any way cheap, the lens was very well priced for the range it offered.
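The reviewers' argument boils down to simple stop arithmetic. The lens closes to f/9 at the long end; the f/5.6 comparison lens and the base ISO below are illustrative assumptions:

```python
import math

# Rough stop arithmetic behind the "small aperture, lean on ISO" argument.
# The RF 200-800 is f/9 at the long end; the hypothetical f/5.6 telephoto
# and the base ISO of 1600 are illustrative assumptions.

def stops_between(f_slow, f_fast):
    """Exposure difference in stops between two f-numbers."""
    return 2 * math.log2(f_slow / f_fast)

def iso_equivalent(base_iso, stops):
    """ISO needed to hold the same shutter speed after losing `stops` of light."""
    return base_iso * 2 ** stops

loss = stops_between(9, 5.6)
print(round(loss, 2), round(iso_equivalent(1600, loss)))  # -> 1.37 4133
```

Losing under a stop and a half means roughly 2.5x the ISO, which modern sensors shrug off – hence the reviewers' point that the aperture range matters less than it used to.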
I went to my local shop and placed a deposit for one of the lenses. This was many months ago. After that, things got very quiet. I was beginning to think that I would never see an actual lens. Then I saw something on a Canon rumor site that said August was likely to be a time when a lot of lenses got delivered. Whatever the blockage had been, there seemed to be some relief. On the last Tuesday of July (stuff seems to get delivered to stores on Tuesdays, I guess), I got a phone call telling me that my lens had arrived. Hurrah!
After work, I headed down to pick it up. I then headed down to the water in Kenmore – a short distance from the store – to give it a quick go. I didn’t have a lot of time, but I got a quick feel for some of its quirks. Initially I was a little unsettled. The stabilization seemed very effective, but it did make tracking things that were moving slowly a bit jerky as if the stabilization didn’t believe that I actually wanted to be moving. I worried that this would be an issue. However, the images seemed to be rather sharp so maybe it knew what it was doing.
When I got home, I did spend a little time looking at the hummingbirds on the bushes. The light was very low, which should be a problem for a lens with smaller apertures, but it seemed to work very well and the images were surprisingly sharp and clean. I then took it to its first airshow. Again, results were really very pleasing. The 800mm reach was so helpful since the show line was quite distant and I was very happy with the framing I could get. The jumpiness in the viewfinder is still something I find rather distracting, but it doesn’t seem to be an issue for the images, so I guess the stabilization knows what it is doing. I also shot some video at 800mm handheld and, while there was initial wobbling, there comes a moment when it seems to get what is going on and then it is rock steady. Quite bizarre. I think this lens could be a key part of my shooting going forward. We shall see as my experience grows with it.
One of the special parts of the trip to Arizona was that Mark and I got invited along by our friend Joe to a night shoot at the Pima Air and Space Museum. I had seen some images from previous night shoots and the idea of photographing the many interesting airframes there in the dark intrigued me. The museum is excellent and well worth a visit, but it can be hotter than hell there and the light can be quite harsh, so this was a great alternative to try.
When I was a student, I used to do quite a lot of night photography. In the days of film, you played a lot more of a guessing game as to how things were working out. Also, film suffered from what was known as reciprocity failure – its effective sensitivity dropped as exposures lengthened – so you could really extend the exposure in low light without necessarily ruining things. Digital is a lot more linear and also gives you the chance to see how things are coming out and have another go.
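That film behaviour is often modelled with the Schwarzschild approximation, where effective exposure grows as t**p with p below 1, so long exposures need to run longer than the meter suggests. A rough sketch with a made-up exponent (real films vary widely):

```python
# Illustration of film reciprocity failure via the Schwarzschild
# approximation: effective exposure grows as t**p with p < 1, so metered
# times above about a second must be extended. The exponent p = 0.8 is a
# made-up illustrative value; real film stocks differ considerably.

def corrected_exposure(metered_s, p=0.8):
    """Extend a metered time (seconds) to compensate for reciprocity failure."""
    return metered_s ** (1 / p)

for metered in (1, 10, 60):
    print(metered, "->", round(corrected_exposure(metered), 1))
```

The correction balloons at long times – a metered minute becomes nearly three – which is why film night shots were such a guessing game compared with digital's linear response.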
A lot of the attendees had done this more than once and had come equipped with a variety of tools to play with. Lights on stands, wands of different LEDs, huge flashlights etc. Plenty of things to work with. I had brought some tools along but was definitely keeping it simpler. Joe offered us some lights to work with but, since this was a new effort for me, I decided to keep it simple and try to get one approach worked out.
I had a tripod so I could leave the camera in place and then a couple of strobes to play around with. I had to make some setup adjustments first. First, turn off IS, since it can wander over long exposures and make things blurry. Second, put the strobes on manual power and experiment with how well they do illuminating things. What I didn’t do, but should have with hindsight, was to go to bulb mode rather than 30 seconds on the shutter. At some points with the larger airframes, I was very frantic in trying to get everything lit in the 30 seconds. It proved to be rather energetic, and I was pretty pooped by the end of it.
I would open the shutter and then move around the airframe illuminating it with pops of the strobe. I quickly learned to shield the strobe so it didn’t illuminate me and add me into the shot. I also came to realize how the larger areas, when I stood back a bit, needed more light to compensate. All of this is logical but not something I thought of before trying it. More research/planning would have been a good idea. I was also surprised how my shadow could show up in some shots when I have no idea how it got there.
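The fall-off I was fighting follows the inverse-square law: step back to twice the distance and each pop delivers a quarter of the light, while identical pops stack linearly. A quick sketch with illustrative numbers:

```python
import math

# Why larger areas lit from farther back need more flash pops: light per
# pop falls with the inverse square of distance, and identical pops add
# linearly, so n pops gain log2(n) stops. Numbers are illustrative only.

def pops_needed(distance_ratio):
    """Pops required to match the original exposure after stepping back."""
    return math.ceil(distance_ratio ** 2)

def stops_from_pops(n):
    """Extra exposure (in stops) gained by firing n identical pops."""
    return math.log2(n)

print(pops_needed(2))                 # step back to 2x distance -> 4 pops
print(round(stops_from_pops(4), 1))   # 4 pops = 2.0 stops more light
```

This is why lighting a B-36-sized airframe inside a 30-second window gets so energetic: every step back multiplies the pops you owe.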
I did photograph some of the more famous assets in the collection – how can you ignore a B-58 or a B-36 – but I did also take time for others that were just of more interest to me. The size of the place meant you could easily not come across one of the other photographers for a while. They were helpful in pointing out the hazards of guy wires. Some of the larger planes have wires to stabilize them and these are basically invisible in the dark. If you are running around popping off flashes, you could easily collide with something unyielding. Fortunately, nothing like this for me but maybe some luck in that?
Would I do it again? Absolutely! It was very interesting, and I got some nice results. It also taught me a lot about what I wasn’t doing right and set me up with a few ideas of how to do things differently in the future. I think a large flashlight would be an addition I would make, and I would definitely use the cable release and bulb mode. My thanks to Joe for taking us along and to the team for letting us join in.
In one of the bigger updates of Lightroom and Photoshop, Adobe introduced the Enhance functions adding either resolution or noise reduction. The noise reduction has been very effective for some of the shots I have taken with very high ISO levels. I decided to edit a shot with varying levels of noise reduction to see how things look. Since I had a bunch of cheetah shots taken in low light, I figured that would be a good subject.
You can vary the noise reduction level from 1-100. I made five edits, with one unchanged and the remainder at 25, 50, 75 and 100. I then layered them into one file to show the comparison. The unchanged edit is on the right while the 100 noise reduction is on the left. My previous experience had been that a level around 50 was a good outcome for much of what I had shot. When I looked at these results, I again concluded that the middle level was the best compromise. The 100 was just too much, and 75 looked like things were a bit smudged. You can judge what you think. I shall experiment with levels each time I use it, but it does give me a good idea of where to start.
When RAW capture first became available on my phone, I started to use it. Initially, I had to use a third-party camera app, which was fine, but it had some quirks and some things that just didn’t work right, despite some extensive communication with the developer. Then the camera app of the phone got updated to allow RAW capture and I have been using that ever since. There is something very strange about it, though. When I import the images into Lightroom, they are always about one stop overexposed. I am curious whether this is a function of Apple’s raw format, in order to preserve details in the shadows, or whether it is a weirdness with my phone. Included are two images – one with the base settings after import and one edited. This is representative of what I get. It doesn’t hurt the end result, but it is rather strange. Anyone have similar results?
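For anyone who wants to quantify it: one stop is a factor of two in linear values, so a -1 EV correction simply halves them. A tiny illustration on a made-up linear midtone value:

```python
# The "one stop overexposed" observation in EV terms: Lightroom's Exposure
# slider works in stops, and each stop is a factor of two in linear values.
# The midtone value below is a made-up number for illustration.

def apply_ev(linear_value, ev):
    """Scale a linear value by 2**ev (negative ev darkens)."""
    return linear_value * 2 ** ev

midtone = 0.36                    # hypothetical linear midtone
print(apply_ev(midtone, -1))      # -> 0.18
```

So the fix is a single -1 on the Exposure slider, which matches how painless the correction is in practice.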
For some reason, I recently came back to an old photo I took of a Delta Connection CRJ900 as it climbed out of O’Hare. It had climbed right by the moon as it was rising in the eastern sky towards the end of the day. I had liked the photo at the time but now I was thinking about how to do a better job of editing it. Now I have been using the masking tools in Lightroom a lot more, I figured I could take different approaches for the jet and the background. The results were a lot better than my original efforts and I quite like how it now looks.
I used to play with time lapses a fair bit. I would shoot a series of images and use LRTimelapse to process them. However, that software had a license agreement that meant, when they upgraded the software, they required you to update your license and the old version was deactivated. This was very annoying. I figured I would be able to keep using the old version but apparently not. I don’t do it enough to justify the cost and was disinclined to use that software after this experience.
My latest cameras have a time lapse function built into them which I had been meaning to try out. I had done this on my little M6 but not with the latest bodies. What to use them on, though? I figured an experiment doesn’t require me to be original in the subject. Just try it out and see how it works. Consequently, I thought melting ice would be good enough. My first effort was not successful: I hadn’t given it enough time to record the melting fully. The second was better on timing, but I had focused on the ice cube at the start and it slid across the plate as it melted, moving out of frame. The mode on the camera sets focus and exposure on the first shot, so everything ended up well out of focus.
This is why you experiment with things. The last try worked pretty much as intended. (I should note that I did all of these in the evening, so the lighting didn’t change during the shoot.) I had a long enough time for the ice cube to almost fully melt, it didn’t move, and the lighting was fine. Watching the ice disappear and the cube gradually sink into the progressively growing pool of water was rather fun. This isn’t some epic revelation about the nature of melting ice, but it did teach me about some functionality of the camera.
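The planning mistake from the first attempt comes down to simple interval arithmetic. The event length, shooting interval, and playback rate below are illustrative assumptions, not the actual camera settings:

```python
# Back-of-envelope time lapse planning: pick an interval long enough that
# the whole event fits in the frame budget, then check the clip length.
# All numbers here are illustrative assumptions, not my actual settings.

def frames_needed(event_minutes, interval_s):
    """Frames captured over the whole event at a fixed shooting interval."""
    return int(event_minutes * 60 / interval_s)

def playback_seconds(frames, playback_fps=24):
    """Length of the resulting clip at normal playback speed."""
    return frames / playback_fps

frames = frames_needed(40, 10)           # a ~40-minute melt, one shot every 10 s
print(frames, playback_seconds(frames))  # -> 240 10.0
```

Doing this sum up front tells you whether the recording window covers the full melt, which is exactly what the first attempt got wrong.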