
HDR in Photography: Dead, Dated, or Ready for a Comeback?

For years, HDR in photography has carried a bit of baggage.

Mention it to most photographers and they'll immediately picture those crunchy, overcooked images from the early 2010s. Glowing edges, strange colours, and a look that screamed "processing" louder than the actual subject. And honestly, fair enough. That version of HDR put a lot of people off, and for good reason.

The “HDR” trend of the early 2010s

But here's what's changed: HDR isn't what it used to be.

What we're talking about today is not that old exposure-blended, tone-mapped look that most of us learned to avoid. This is proper HDR editing, pulling more out of the image's dynamic range and displaying it on screens that can actually show it. It's less about creating a dramatic effect and more about giving the image room to breathe.

That distinction changes the conversation completely.

So what is HDR now?

At its simplest, HDR means high dynamic range: more tonal range than a standard dynamic range (SDR) image can show. It’s not about blending exposures together; it’s about being able to show what already exists in the file.

That sounds technical, but the practical version is straightforward. Think about a scene with a blazing sky, deep shadows, and subtle detail in between. In a standard SDR workflow, you end up squeezing all of that into a smaller box. You protect the highlights, lift the shadows, and find some kind of compromise.

With modern HDR editing, you're not forcing that compromise in the same way. You're working in a way that allows more brightness information to survive the edit, so when viewed on an HDR-capable screen, the image can look much closer to what the scene actually felt like.

That's the key difference.

This isn't about making everything loud. It's about giving the image more range.
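If it helps to see that "smaller box" idea in numbers, here's a tiny, purely illustrative sketch in TypeScript. The luminance values and the two stops of highlight headroom are invented assumptions, not a real pipeline:

    // Relative scene luminance, where 1.0 is SDR "display white".
    const scene = [0.02, 0.4, 1.0, 2.5, 4.0];

    // SDR: anything above display white has to be clipped (or compressed down).
    const sdr = scene.map(v => Math.min(v, 1.0));

    // HDR: values above 1.0 survive as highlight headroom, up to the display's limit.
    const headroomStops = 2; // assumption: a display with roughly 2 stops of headroom
    const hdr = scene.map(v => Math.min(v, 2 ** headroomStops));

    console.log(sdr); // [0.02, 0.4, 1, 1, 1]   -> the three brightest values all merge
    console.log(hdr); // [0.02, 0.4, 1, 2.5, 4] -> highlight separation survives

The point isn't the maths. It's that in SDR the brightest parts of the scene become indistinguishable, while in HDR they keep their separation.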


Check out this web page I put together to test whether your display or device is HDR-capable.

Take a look on your computer, mobile, and tablet (if you have one):

🔗 LINK: hdrviewer.lovable.app
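For the curious, browsers can report this capability through a standard media query, so a check along these lines is possible. A minimal sketch in TypeScript (not the page's actual code):

    // Ask the browser whether the current display supports high dynamic range.
    const supportsHdr = window.matchMedia("(dynamic-range: high)").matches;

    // Wide-gamut colour is a separate but related capability.
    const supportsP3 = window.matchMedia("(color-gamut: p3)").matches;

    console.log(supportsHdr ? "HDR-capable display" : "SDR display");
    console.log(supportsP3 ? "Wide gamut (P3 or better)" : "sRGB gamut");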


Why the old HDR got a bad name

Let's be honest: old-school HDR deserved a fair amount of the criticism it got.

A lot of it was used as a shortcut to rescue badly exposed images, and the results were often heavy-handed. Software like Photomatix, which was the go-to tool for HDR processing back in those early days, made it incredibly easy to push things too far. Shadows were crushed, highlights flattened, and that distinctive grungy, overcooked look became almost a signature of the era. At its worst, it was gimmicky. You knew exactly what you were looking at the moment you saw it.

Worth saying though: Photomatix is still around and still a perfectly viable option. Used with some restraint, it's capable of much more conservative, natural-looking results than its early reputation might suggest. But back then, subtlety wasn't really the point for a lot of people using it.

That's why many photographers developed a kind of instinctive resistance to anything labelled HDR.

But modern HDR is a different thing entirely.

It's not trying to shout at you. It's trying to reveal more subtlety. And when it's done well, most people won't even register that they're looking at an HDR image. They'll just think it looks rich, deep, and beautifully displayed.

Who is actually doing this?

More people than you might think.

The biggest shift is that the industry around HDR has finally started to catch up. More screens support it, editing software is building in proper HDR workflows, and image sharing is slowly becoming more compatible. That matters, because a workflow only becomes genuinely useful when you can see the result and actually share it.

Photographers are already experimenting with it in landscape work, cityscapes, interiors, sunsets, and any scene where the contrast is simply too much for a standard file to hold comfortably. It makes particular sense when the subject contains bright highlights that you want to keep bright, without the rest of the image falling apart around them.

So yes, people are doing it. Not everyone, and not for every image. But enough that it's moving from niche curiosity toward something more mainstream.

Why it matters now

This is where HDR becomes genuinely interesting from a photographer's point of view.

We've reached a point where many viewers already have HDR-capable phones, tablets, laptops, televisions, and monitors. The image you edit is no longer always limited to the old one-size-fits-all SDR world. Some people can actually see more of what you intended when you made it.

That opens up real creative possibilities.

A sunset can hold brighter light without clipping into mush. A window-lit interior can keep detail outside without destroying the atmosphere inside. A seascape can carry that glowing, luminous quality we often try to suggest with standard editing but don't always fully achieve.

In the right hands, HDR isn't flashy. It's expressive.

Where it fits in a workflow

The best way I think about HDR is this: it's another tool, not a replacement for everything else.

It won't suit every photograph. Some images are better left in a standard workflow, particularly if the scene is already well contained or if you want a classic, controlled look. HDR also won't make much difference if your audience is mostly viewing on SDR screens.

But for the right image, it can be brilliant.

The skill, then, isn't just learning how to switch HDR on. It's knowing when it adds value and when it doesn't. That's usually where good photography lives anyway. Not in using every feature available, but in using the right one at the right time.

Is HDR the future?

I think so, yes. Just not in the old dramatic sense.

We're not heading back to the days of overprocessed HDR everywhere. That era is done, and rightly so. But we are moving towards a more natural, more display-aware way of working, where HDR becomes a normal part of the photographic toolbox rather than a novelty.

How quickly that happens depends on a few things catching up together: displays, software, and sharing platforms. But the direction is clear.

More of the world is becoming HDR-capable, which means photographers will increasingly need to understand how to work with that reality, whether they choose to or not.

Final thoughts

HDR is not dead.

What's dead is the old caricature of it. The version that turned every photo into a neon soap opera. The modern version is far more interesting, far more useful, and far more in step with where technology is heading.

For photographers, the opportunity is simple: start paying attention now. Learn what modern HDR actually is, watch how it develops, and think about where it fits in your own work, because this feels less like a passing fad and more like a genuine shift in the way images are made and seen.

Lightroom Virtual Summit 2026

The Lightroom Virtual Summit is BACK, running from 1st to 5th June 2026 and including 45 classes (33+ hours) of Lightroom education, which you can watch completely for FREE!

🚨 Link for FREE PASS: https://glyndewis.krtra.com/t/e7YtyIDicEoQ

Instructors

Anthony Morganti, Ben Willmore, Chris Orwig, Clifford Pickett, Colin Smith, Daniel Gregory, Greg Benz, Jared Platt, Jesús Ramirez, Kristina Sherk, Lisa Carney, Matt Kloskowski, Peter Morgan, Rob Sylvan, Sean McCormack, Tim Grey ... and yours truly 😃

FREE TO WATCH

All classes are free to watch for a 48-hour period once they go live, and there’s an optional VIP Pass available for purchase that gives you lifetime access to the recordings of all classes, instructor-provided class notes and exclusive bonuses (including additional videos).

Lightroom AI - You're using it in the WRONG ORDER

In Lightroom Classic, Desktop, and Camera Raw, a yellow warning icon often appears in the AI Edit Status panel. This happens when you perform edits out of the recommended "order of operations", signalling that certain AI-generated layers need to be updated or rerendered.

While you can still edit in any order, jumping around can lead to unpredictable results. For example, applying an adaptive color profile and then using the "Denoise" or "Remove" tool might cause the colors and highlights to shift once the AI is forced to update.

The Recommended Workflow: Prepare, Repair, Finesse

To maintain total control over how your image looks, it is best to follow this three-step sequence:

  1. Prepare: Start with edits that affect the entire image, such as Denoise, Raw Details, Super Resolution, or HDR. This is the foundation of your edit.

  2. Repair: Next, clean up the image by removing distractions. Use the Remove tool (with Generative AI) or Distraction Removal for things like reflections, dust spots, or unwanted objects.

  3. Finesse (or Finish): Once the image is prepped and repaired, move on to creative adjustments, such as Adaptive Color Profiles or intricate masking.

Handling the AI Edit Status Warning

If the yellow icon appears, it is a reminder that your AI edits may no longer be perfectly synced with the current state of the image.

  • Click to Update: Always click the icon and select "Update" before finishing your edit.

  • Reassess: After updating, look closely at your image. Because the AI is rerendering, the results might look slightly different than before.

  • Don't Just Export: If you try to export while the icon is yellow, a popup will warn you. Instead of clicking "Export" anyway, it is safer to cancel, update the edits manually, and ensure you are happy with the changes before saving the final file.

By following the Prepare, Repair, Finesse order, you ensure your editing remains predictable and that the final export looks exactly as you intended.

Fix IMPOSSIBLE Backgrounds Instantly (Lightroom + Photoshop)

Recently, Steven Gotz, a member of the Photography Community on SKOOL (LINK), sent over a brilliant RAW file of a condor. Stunning subject, great light, one problem: a massive fence running right through the background.

Rather than leave it on the shelf, I figured it was the perfect excuse to put the latest updates in Lightroom and Photoshop Beta through their paces. What would have taken ages with the Clone Stamp tool a couple of years ago can now be sorted in seconds. Here's exactly how I did it, using two different workflows.

Workflow 1: Photoshop Beta with Firefly Image 5

This is the quickest route right now, and the results are genuinely impressive.

The key is using the new Firefly Image 5 (Preview) model inside Photoshop Beta. It's been built specifically for editing while preserving detail, which matters a lot when you're dealing with complex textures like feathers and rocky backgrounds.

  1. From Lightroom to Photoshop Beta. Right-click the image in Lightroom and choose Edit In > Adobe Photoshop Beta.

  2. Select All. Once you're in Photoshop, go to Select > All. This gives the AI the full context of the frame before you do anything.

  3. Switch to Firefly Image 5. Click Generative Fill in the contextual taskbar. Here's the bit that matters: don't use the standard model. Switch it to Firefly Image 5 (Preview) from the dropdown.

  4. The prompt. This model needs a prompt to work, unlike some of the others. I kept it simple: "remove the fence from this picture."

  5. Refine the detail. The AI did a great job on the background, but because Firefly Image 5 currently outputs at 2K, the fine detail around the bird's eye and feathers was slightly softer than the original RAW. The fix is straightforward: use the Object Selection Tool to select the bird and the rock, then fill that area on the layer mask with black. That reveals the sharp original bird while keeping the AI-cleared background intact.

Workflow 2: Lightroom to Firefly Web

Not on the Photoshop Beta? No problem. You can get to the same place via Lightroom's sharing feature.

  1. Share to Firefly. In Lightroom, hit the Share button (top right) and select Firefly: Edit an image. This opens your browser and drops the photo straight into the Adobe Firefly web interface.

  2. Settings and generate. Select Firefly Image 5, bump the resolution to 2K, use the same prompt ("remove the fence from this picture"), and hit generate.

  3. Back to Photoshop. Download the cleaned image, go back to Lightroom, and open the original file in the regular version of Photoshop.

  4. Stack and align. Use File > Place Embedded to bring the Firefly-cleaned version in on top of your original. Rasterise the top layer, select both layers, then go to Edit > Auto-Align Layers to make sure everything lines up perfectly.

  5. The masking trick. Same principle as Workflow 1: use the Object Selection Tool to select the bird and the rock, then hold Option (Mac) or Alt (Windows) and click the mask icon. This hides the AI version of the bird and brings back the sharp, high-detail original underneath.

Why the masking step matters

This is the part I think is really important. It's not about letting AI take over the whole image. It's about using it to fix a specific problem, in this case the background, while keeping the actual subject exactly as it was captured in the RAW file. The integrity of the original is what you're protecting.

Have a look through your archives. Chances are there are shots you wrote off because of something in the background. It might be worth giving them another look.

Generative Upscaling using Topaz Gigapixel now in Lightroom

Adobe Lightroom version 9.2, released on 20th February 2026, brings with it a significant new feature: generative upscaling powered by Gigapixel from Topaz Labs.

If you've ever needed to enlarge an image whilst maintaining sharpness and clarity, this update is going to be very welcome indeed.

Here's a comprehensive look at what it does, how to use it, and what you need to know before you get started.

What Is Generative Upscale with Gigapixel?

Generative upscale is an AI-powered image enlargement tool built directly into Lightroom, using technology from Topaz Labs' well-regarded Gigapixel application. It works by analysing your image and intelligently increasing its resolution, improving quality, sharpness, and clarity in the process. The key advantage over Lightroom's existing super resolution feature is both the degree of upscaling available and the range of file formats it supports.

How Does It Differ from Super Resolution?

Lightroom has offered super resolution for some time, but it comes with two notable limitations: it only upscales by 2x (200%), and it only works on RAW files. The new Gigapixel-powered generative upscale removes both of those restrictions. You can now upscale by either 2x or 4x, and crucially, it works on RAW files and other file formats too, making it far more versatile.

How to Access Generative Upscale

There are three ways to access the feature within Lightroom:

From the menu bar, go to Photo and select Generative Upscale. Alternatively, right-click on your image in the editing view and choose Generative Upscale from the context menu. You can also right-click on a thumbnail in the grid view to find the same option.

What Happens When You Upscale?

Once you select generative upscale, a dialogue box appears showing your upscaling options (2x or 4x), along with the resulting pixel dimensions and file size in megapixels. You'll also see how many generative credits the process will consume, and a real-time display of your current monthly generative credit balance, which is a very handy addition.
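To give a feel for those numbers, here's a hypothetical helper that mirrors what the dialogue box reports. The 24-megapixel (6000 x 4000) starting file is just an assumption for illustration, and the 65,000-pixel long-edge cap is covered further down:

    // Output dimensions and megapixels for a given upscale factor.
    function upscaleInfo(width: number, height: number, factor: 2 | 4) {
      const w = width * factor;
      const h = height * factor;
      return {
        width: w,
        height: h,
        megapixels: (w * h) / 1_000_000,
        withinLimit: Math.max(w, h) <= 65_000, // long-edge cap
      };
    }

    console.log(upscaleInfo(6000, 4000, 2)); // 12000 x 8000 = 96 MP
    console.log(upscaleInfo(6000, 4000, 4)); // 24000 x 16000 = 384 MP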

The processing itself takes place in the cloud, regardless of whether your images are stored locally or in Adobe's cloud. This means an active internet connection is required every time you use the feature. In testing, the process took around 30 seconds, though this will depend on your connection speed.

Once complete, the upscaled image is saved as a new DNG file alongside your original. This is an important point: no matter what file format you send for upscaling, the returned file will always be a DNG. The filename will reflect that Gigapixel was used and will indicate the upscaling factor applied (2x or 4x).

An Important Tip: Edit First, Then Upscale

This is perhaps the most important thing to be aware of when using generative upscale. When the upscaled DNG file is returned, all of your existing Lightroom edits, including masks and adjustments, are baked into it. The new file will not retain any editable Lightroom settings. For that reason, you should always complete your editing first before running the upscale. The good news is that your original edited file is preserved, so you will always have access to make further adjustments to it should you need to.

Generative Credits

Using generative upscale consumes generative credits from your monthly allowance. The cost is either 10 or 20 credits, depending on the size of the output, with a maximum of 20 credits per upscale. The dialogue box shows exactly how many credits will be used before you commit, and you can see your remaining balance at the same time.

The Stacking Option for Cloud Images

If you are working with images stored in Adobe's cloud, there is one additional option available: the ability to create a stack. Rather than the upscaled file appearing as a separate thumbnail alongside your original, it will be grouped together with it as a stack, keeping your library neat and organised. This option is not available for locally stored images.

Maximum Output Size

The maximum output size is an impressive 65,000 pixels on the longest edge, making this suitable for very large print work indeed.

Where Generative Upscale Is Most Useful

This feature is particularly well suited to a number of scenarios. It's excellent when you've made a significant crop to an image and want to recover detail and sharpness in the enlarged result. It can also be used to improve low-resolution scans, or to breathe new life into images from older cameras with lower megapixel counts.

Quick Summary of Key Points

  • Available in Lightroom version 9.2 and later

  • Powered by Topaz Labs' Gigapixel technology

  • Upscale options: 2x or 4x

  • Works on RAW files and other file formats (unlike super resolution)

  • All processing is done in the cloud; an internet connection is required

  • Returns a new DNG file regardless of the original format

  • Consumes 10 or 20 generative credits (maximum 20 per upscale)

  • Maximum output: 65,000 pixels on the longest edge

  • Edits are baked into the upscaled file, so always edit first

  • Stacking option available for cloud-stored images

  • Your original file is always preserved

For photographers looking to get the most from their images, whether recovering detail from a tight crop or improving older files, this is a genuinely useful addition to Lightroom's toolkit.

Come on Adobe 🙏🏻 We NEED THIS FEATURE ⚠️

I've put together this short video because I need to ask a favour from anyone who uses Photoshop Camera Raw or Lightroom. There's a fundamental feature that's been missing for years, and it seriously impacts how we edit our images and the results we achieve.

The Missing Piece in AI Masking

The issue centres on masking, specifically the AI-generated masks available in the masking panel. Being able to select a sky or subject with one click is genuinely incredible, but there's a massive gap in functionality. We have no way to soften, blur, or feather those AI masks after they've been created.

Instead, we're left with incredibly sharp, defined outlines that sometimes look like poorly executed cutouts. This makes blending our adjustments naturally into the rest of the image much harder than it needs to be.
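To be clear about what's being asked for: feathering just means blurring the mask itself, so the adjustment fades across the edge instead of switching on and off. A rough one-dimensional sketch in TypeScript, with a simple box blur standing in for the Gaussian a real implementation would use:

    // An AI mask with a razor-sharp edge: fully off, then fully on.
    const hardMask = [0, 0, 0, 1, 1, 1];

    // Feathering = averaging each mask value with its neighbours.
    const feather = (mask: number[], radius = 1) =>
      mask.map((_, i) => {
        const win = mask.slice(Math.max(0, i - radius), i + radius + 1);
        return win.reduce((a, b) => a + b, 0) / win.length;
      });

    console.log(feather(hardMask)); // ~[0, 0, 0.33, 0.67, 1, 1] - a soft transition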

Years HAVE PASSED

Adobe introduced the masking panel back in October 2021. It changed the way we work and represented a huge step forward. Yet here we are, years later, still without a simple slider to soften mask edges.

If you want to blend an adjustment now, you're often stuck trying to subtract with a large soft brush, using the intersect command with a gradient, or employing other crude workarounds to hide the transition. It feels like excessive work for what should be a standard function.

The Competition Gets It Right

What makes this even more frustrating is seeing other software solve this problem elegantly. The new Boris FX Optics 2026 release includes AI masking controls where a single slider softens and blurs the mask outline, and it works incredibly well. Luminar has been offering this functionality for quite a while too.

These tools understand that a mask is only as good as its edges. When the competition provides ways to feather and refine AI selections, the absence of this feature in Adobe's ecosystem feels glaringly obvious.

Adobe's Strengths and Opportunities

Don't get me wrong. I appreciate that Adobe constantly pushes boundaries. We've witnessed tremendous growth over recent years, with developments from third-party AI platforms like Google's Gemini and other emerging models, innovations from Black Forest Labs with Flux, and tools from Topaz Labs. It's an exciting time to be a creator.

But I wish Adobe would take a moment to polish what we already have. Adding flashy new features is great, but refining the core workflows we use every single day would be a massive leap forward for all of us.

How You Can Help

Rather than simply complaining about this issue, I've created a feature request post in the Adobe forums. It's been merged with an existing thread on the same topic, which actually helps consolidate our voices into one place.

Here's what I need you to do: click the link below to visit the post and give it an upvote by clicking or tapping the counter number in the upper left. If we can get enough visibility on this, Adobe might finally recognise how much the community wants and needs this feature.

(LINK)

I believe refining existing tools is just as important as inventing new ones. Thank you for taking the time to vote. It really does make a difference when we speak up together.

The Fisherman's Tale 🐟 New Compositing Workflow

Yesterday morning I popped out for breakfast and to meet up with a friend, Steve.

After a great bite to eat at one of my favourite haunts, Town Mill Bakery (Lyme Regis), I sprang it on him that I had an idea for a picture I wanted to put together, and that I needed him to be the subject.

The idea was to create a portrait of a fisherman, using a combination of photography, Lightroom, Photoshop and AI, to test out a new workflow.

So, here’s the resulting image, and below is a breakdown of the steps involved using Lightroom, Photoshop, Google Gemini AI and Magnific (upscaler).

The Process

  • Taking the portrait of Steve with the desired background

  • Initial Edits in Lightroom

  • Export into Google Gemini AI and add stock photographs of fisherman’s clothing onto Steve. Create the image in 4K and then upscale 2x

  • Create ageing and weathering on the overalls and hat using Gemini AI, then selectively paint this onto Steve using masks in Photoshop

  • In Gemini AI, generate the fish and Steve’s new arm position, then mask this into the main image in Photoshop

  • Extend the background in Photoshop and add finishing touches in Lightroom, including colour grading, lighting adjustments, lens blur, added grain and more …


Editing a Photo in Lightroom + Photoshop ... on an iPad

Not too long ago, I never would have considered editing my photos on an iPad. It always felt like something I should save for my desktop. But things have changed. Both Lightroom and Photoshop on the iPad have improved massively, and these days I often use them when travelling. More and more, this mobile workflow is becoming a real option for photographers.

In this walkthrough, I’ll show you how I edited an image completely on the iPad, starting in Lightroom, jumping over to Photoshop when needed, and then finishing off with a print.

Starting in Lightroom on the iPad

The photo I worked on was taken with my iPhone. The first job was the obvious one: straightening the image. In Lightroom, I headed to the Geometry panel and switched on the Upright option, which immediately fixed the horizon.

Next, I dealt with a distraction in the bottom left corner. Using the Remove Tool with Generative AI switched on, I brushed over the wall that had crept into the frame. Lightroom offered three variations, and the second one was perfect.

With those fixes made, I converted the photo to black and white using one of my own synced presets. A quick tweak of the Amount slider gave me just the right level of contrast.

Masking and Sky Adjustments

The sky needed attention, so I created a Select Sky mask. As usual, the AI selection bled slightly into the hills, so I used a Subtract mask to tidy things up. It wasn’t perfect, but it was good enough to move forward.

From there, I added some Dehaze and Clarity to bring detail back into the clouds. A bit of sharpening pushed the image further, but that also revealed halos around a distant lamppost. At that point, I knew it was time to send the photo into Photoshop.

Fixing Halos in Photoshop on the iPad

Jumping into Photoshop on the iPad takes a little getting used to, but once you know where things are, it feels very familiar.

To remove the halos, I used the Clone Stamp Tool on a blank layer set to Darken blend mode. This technique is brilliant because it only darkens areas brighter than the sample point. With a bit of careful cloning, the halos disappeared quickly.
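If you're curious why the blank layer in Darken mode behaves that way: Darken compares each channel and keeps whichever value is darker, so cloned pixels only take effect where they're darker than what's underneath. A rough sketch with invented pixel values:

    // Darken blend: per channel (0-255), keep the darker of the two values.
    const darken = (base: number, clone: number) => Math.min(base, clone);

    const sampledSky = 140; // the clean sky tone sampled by the Clone Stamp
    console.log(darken(210, sampledSky)); // bright halo pixel: 210 -> 140, fixed
    console.log(darken(120, sampledSky)); // darker lamppost pixel: 120 -> 120, untouched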

I then added a subtle “glow” effect often used on landscapes. By duplicating the layer, applying a Gaussian Blur, and changing the blend mode to Soft Light at low opacity, the image gained a soft, atmospheric look.
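For anyone who likes to know what's happening under the hood, here's a sketch of that glow recipe in numbers, using one common formulation of Soft Light (Photoshop's exact maths may differ slightly, and the Gaussian blur step is assumed rather than implemented):

    // Soft Light blend for values in the 0-1 range; "blend" is the blurred copy.
    function softLight(base: number, blend: number): number {
      return blend <= 0.5
        ? 2 * base * blend + base * base * (1 - 2 * blend)
        : 2 * base * (1 - blend) + Math.sqrt(base) * (2 * blend - 1);
    }

    // Low opacity keeps the effect subtle: mix the result back over the original.
    const glow = (base: number, blurred: number, opacity = 0.3) =>
      base * (1 - opacity) + softLight(base, blurred) * opacity;

    console.log(glow(0.8, 0.85)); // bright areas lift slightly: a soft bloom
    console.log(glow(0.2, 0.25)); // shadows deepen a touch: added depth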

Back to Lightroom and Printing

With the edits complete, I sent the image back to Lightroom. From there it synced seamlessly across to my desktop, but the important point is that all of the editing was done entirely on the iPad.

Before printing, I checked the histogram and made some final tweaks. Then it was straight to print on a textured matte fine art paper. Once the ink settled, the result looked fantastic — no halos in sight.

Final Thoughts

I’m not suggesting you should abandon your desktop for editing. Far from it. But the iPad has become a powerful option when you’re travelling, sitting in a café, or simply want to work away from your desk.

This workflow shows what’s possible: you can straighten, retouch, convert to black and white, make sky adjustments, refine details in Photoshop, and even prepare a final print — all from the iPad. And of course, everything syncs back to your desktop for finishing touches if needed.

Exciting times indeed.