Lightroom

HDR in Photography: Dead, Dated, or Ready for a Comeback?

For years, HDR in photography has carried a bit of baggage.

Mention it to most photographers and they'll immediately picture those crunchy, overcooked images from the early 2010s. Glowing edges, strange colours, and a look that screamed "processing" louder than the actual subject. And honestly, fair enough. That version of HDR put a lot of people off, and for good reason.

The “HDR” trend of the early 2010s

But here's what's changed: HDR isn't what it used to be.

What we're talking about today is not that old exposure-blended, tone-mapped look that most of us learned to avoid. This is proper HDR editing, pulling more out of the image's dynamic range and displaying it on screens that can actually show it. It's less about creating a dramatic effect and more about giving the image room to breathe.

That distinction changes the conversation completely.

So what is HDR now?

At its simplest, HDR means high dynamic range: more tonal range than a standard dynamic range image can show. It’s not about blending images together; it’s about the ability to show what already exists in the file.

That sounds technical, but the practical version is straightforward. Think about a scene with a blazing sky, deep shadows, and subtle detail in between. In a standard SDR workflow, you end up squeezing all of that into a smaller box. You protect the highlights, lift the shadows, and find some kind of compromise.

With modern HDR editing, you're not forcing that compromise in the same way. You're working in a way that allows more brightness information to survive the edit, so when viewed on an HDR-capable screen, the image can look much closer to what the scene actually felt like.

That's the key difference.

This isn't about making everything loud. It's about giving the image more range.
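The extra "range" can be put into rough numbers. As a minimal sketch, assuming illustrative figures of around 100 nits for SDR reference white and around 1,000 nits for a typical HDR-capable display (actual values vary by screen), the extra highlight headroom works out to a little over three photographic stops:

```python
import math

def extra_headroom_stops(sdr_peak_nits: float, hdr_peak_nits: float) -> float:
    """Extra highlight headroom in photographic stops (doublings of brightness)."""
    return math.log2(hdr_peak_nits / sdr_peak_nits)

# Illustrative assumption: ~100-nit SDR reference white vs a ~1,000-nit HDR display.
print(round(extra_headroom_stops(100, 1000), 2))  # ≈ 3.32 stops of extra headroom
```

That is why a bright sky can stay genuinely bright on an HDR screen instead of being squeezed down to fit the SDR box.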


Check out this web page I put together to test whether your display or device is HDR-capable.

Take a look on your computer, mobile, and tablet (if you have one).

🔗 LINK: hdrviewer.lovable.app


Why the old HDR got a bad name

Let's be honest: old-school HDR deserved a fair amount of the criticism it got.

A lot of it was used as a shortcut to rescue badly exposed images, and the results were often heavy-handed. Software like Photomatix, which was the go-to tool for HDR processing back in those early days, made it incredibly easy to push things too far. Shadows were crushed, highlights flattened, and that distinctive grungy, overcooked look became almost a signature of the era. At its worst, it was gimmicky. You knew exactly what you were looking at the moment you saw it.

Worth saying though: Photomatix is still around and still a perfectly viable option. Used with some restraint, it's capable of much more conservative, natural-looking results than its early reputation might suggest. But back then, subtlety wasn't really the point for a lot of people using it.

That's why many photographers developed a kind of instinctive resistance to anything labelled HDR.

But modern HDR is a different thing entirely.

It's not trying to shout at you. It's trying to reveal more subtlety. And when it's done well, most people won't even register that they're looking at an HDR image. They'll just think it looks rich, deep, and beautifully displayed.

Who is actually doing this?

More people than you might think.

The biggest shift is that the industry around HDR has finally started to catch up. More screens support it, editing software is building in proper HDR workflows, and image sharing is slowly becoming more compatible. That matters, because a workflow only becomes genuinely useful when you can see the result and actually share it.

Photographers are already experimenting with it in landscape work, cityscapes, interiors, sunsets, and any scene where the contrast is simply too much for a standard file to hold comfortably. It makes particular sense when the subject contains bright highlights that you want to keep bright, without the rest of the image falling apart around them.

So yes, people are doing it. Not everyone, and not for every image. But enough that it's moving from niche curiosity toward something more mainstream.

Why it matters now

This is where HDR becomes genuinely interesting from a photographer's point of view.

We've reached a point where many viewers already have HDR-capable phones, tablets, laptops, televisions, and monitors. The image you edit is no longer always limited to the old one-size-fits-all SDR world. Some people can actually see more of what you intended when you made it.

That opens up real creative possibilities.

A sunset can hold brighter light without clipping into mush. A window-lit interior can keep detail outside without destroying the atmosphere inside. A seascape can carry that glowing, luminous quality we often try to suggest with standard editing but don't always fully achieve.

In the right hands, HDR isn't flashy. It's expressive.

Where it fits in a workflow

The best way I think about HDR is this: it's another tool, not a replacement for everything else.

It won't suit every photograph. Some images are better left in a standard workflow, particularly if the scene is already well contained or if you want a classic, controlled look. HDR also won't make much difference if your audience is mostly viewing on SDR screens.

But for the right image, it can be brilliant.

The skill, then, isn't just learning how to switch HDR on. It's knowing when it adds value and when it doesn't. That's usually where good photography lives anyway. Not in using every feature available, but in using the right one at the right time.

Is HDR the future?

I think so, yes. Just not in the old dramatic sense.

We're not heading back to the days of overprocessed HDR everywhere. That era is done, and rightly so. But we are moving towards a more natural, more display-aware way of working, where HDR becomes a normal part of the photographic toolbox rather than a novelty.

How quickly that happens depends on a few things catching up together: displays, software, and sharing platforms. But the direction is clear.

More of the world is becoming HDR-capable, which means photographers will increasingly need to understand how to work with that reality, whether they choose to or not.

Final thoughts

HDR is not dead.

What's dead is the old caricature of it. The version that turned every photo into a neon soap opera. The modern version is far more interesting, far more useful, and far more in step with where technology is heading.

For photographers, the opportunity is simple: start paying attention now. Learn what modern HDR actually is, watch how it develops, and think about where it fits in your own work, because this feels less like a passing fad and more like a genuine shift in the way images are made and seen.

Lightroom Virtual Summit 2026

The Lightroom Virtual Summit is BACK, running from 1st to 5th June 2026, with 45 classes (33+ hours) of Lightroom education which you can watch completely for FREE!

🚨 Link for FREE PASS: https://glyndewis.krtra.com/t/e7YtyIDicEoQ

Instructors

Anthony Morganti, Ben Willmore, Chris Orwig, Clifford Pickett, Colin Smith, Daniel Gregory, Greg Benz, Jared Platt, Jesús Ramirez, Kristina Sherk, Lisa Carney, Matt Kloskowski, Peter Morgan, Rob Sylvan, Sean McCormack, Tim Grey ... and yours truly 😃

FREE TO WATCH

All classes are free to watch for a 48-hour period once they go live, and there’s an optional VIP Pass available for purchase that gives you lifetime access to the recordings of all classes, instructor-provided class notes and exclusive bonuses (including additional videos).

Lightroom AI - You're using it in the WRONG ORDER

In Lightroom Classic, Desktop, and Camera Raw, a yellow warning icon often appears in the AI Edit Status panel. This happens when you perform edits out of the recommended "order of operations," signalling that certain AI-generated layers need to be updated or rerendered.

While you can still edit in any order, jumping around can lead to unpredictable results. For example, applying an adaptive color profile and then using the "Denoise" or "Remove" tool might cause the colors and highlights to shift once the AI is forced to update.

The Recommended Workflow: Prepare, Repair, Finesse

To maintain total control over how your image looks, it is best to follow this three-step sequence:

  1. Prepare: Start with edits that affect the entire image, such as Denoise, Raw Details, Super Resolution, or HDR. This is the foundation of your edit.

  2. Repair: Next, clean up the image by removing distractions. Use the Remove tool (with Generative AI) or Distraction Removal for things like reflections, dust spots, or unwanted objects.

  3. Finesse (or Finish): Once the image is prepped and repaired, move on to creative adjustments, such as Adaptive Color Profiles or intricate masking.

Handling the AI Edit Status Warning

If the yellow icon appears, it is a reminder that your AI edits may no longer be perfectly synced with the current state of the image.

  • Click to Update: Always click the icon and select "Update" before finishing your edit.

  • Reassess: After updating, look closely at your image. Because the AI is rerendering, the results might look slightly different than before.

  • Don't Just Export: If you try to export while the icon is yellow, a popup will warn you. Instead of clicking "Export" anyway, it is safer to cancel, update the edits manually, and ensure you are happy with the changes before saving the final file.

By following the Prepare, Repair, Finesse order, you ensure your editing remains predictable and that the final export looks exactly as you intended.

Content Credentials: The Future of Proving Your Photos Are Real ✅

In a world where AI can generate a photorealistic image in seconds, how do you prove that your photograph is actually real? That it was captured by a real camera, in a real place, by a real photographer?

That is exactly the problem Content Credentials are designed to solve, and in 2026 this technology is finally moving from niche experiment to something every working photographer needs to understand.

What Are Content Credentials?

Think of Content Credentials as a kind of nutrition label for your photographs. Just as a food label tells you what is inside the packet, Content Credentials can tell viewers key facts about an image: who created it, which camera or software was used, what kind of edits were made, and, crucially, whether AI tools were involved at any stage.

Under the hood, Content Credentials are powered by an open technical standard called C2PA, which stands for Coalition for Content Provenance and Authenticity. C2PA is a cross-industry specification backed by companies and organisations including Adobe, Microsoft, Google, Sony, Nikon, Canon, Leica, Fujifilm, the BBC, the Associated Press and many others.

The key point is that Content Credentials do not judge whether a photo is "good" or "bad". They provide a tamper-evident record of provenance, meaning a factual history of where an image came from and how it was made, so that editors, clients and audiences can make their own decisions about whether to trust what they are seeing.

How Do Content Credentials Actually Work?

At a technical level, C2PA uses cryptographic hashes and digital signatures, the same kind of technology that protects online banking, to bind provenance information to media files. In practice, the chain looks like this:

  1. Capture. On supported cameras, a C2PA manifest is signed at the moment of capture, recording the device identity and, where enabled, when and where the image was created.

  2. Edit. When the photo is opened in C2PA-enabled software such as Photoshop or Lightroom, the software can log key edits, including the use of generative AI tools, into an updated manifest.

  3. Export and publish. On export, the photographer chooses what information to include. The Content Credentials can be embedded in the file itself, published to a cloud service, or both.

  4. Verify. Anyone can later inspect the credentials using tools such as the Content Authenticity Initiative's Inspect site at contentcredentials.org/verify, browser extensions, or compatible apps and services.

If someone tampers with the pixels or tries to alter the signed provenance after the fact, the cryptographic checks break. The result is that the credentials are tamper-evident: you cannot quietly change the file or its signed history without that being detectable.
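The tamper-evident idea can be sketched in a few lines of code. This is a conceptual illustration only, not the real C2PA format or API: the actual standard uses public-key certificates and a structured binary manifest, whereas this sketch stands in an HMAC for the signature and a plain dictionary for the manifest. The names (`sign_manifest`, `verify`, `ExampleCam`) are all hypothetical.

```python
import hashlib
import hmac
import json

SECRET = b"camera-private-key"  # stand-in only; real C2PA signing uses certificates

def sign_manifest(image_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims to the image by hashing the pixels, then signing both."""
    manifest = {"image_sha256": hashlib.sha256(image_bytes).hexdigest(),
                "claims": claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Any change to the pixels or the claims breaks the cryptographic checks."""
    if hashlib.sha256(image_bytes).hexdigest() != manifest["image_sha256"]:
        return False  # pixels were altered after signing
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

photo = b"raw sensor data"
m = sign_manifest(photo, {"device": "ExampleCam", "captured": "2026-01-01"})
print(verify(photo, m))             # True: the untouched file verifies
print(verify(b"edited pixels", m))  # False: tampered pixels are detected
```

Editing the pixels, or quietly rewriting a claim in the manifest, changes the hash or breaks the signature check, which is exactly the "tamper-evident" property described above.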

Which Cameras Support Content Credentials in 2026?

Camera support has accelerated over the last two years. A useful snapshot comes from the community-maintained c2pa.camera site, which tracks devices that can sign images using the C2PA standard.

As of early 2026, the list of supported cameras is growing steadily.

One particularly important entry is the Google Pixel 10. Thanks to its Tensor G5 and Titan M2 security chips and built-in C2PA support in the Google Camera app, it is currently the least expensive way to capture C2PA-signed images. That matters because not every working photographer or journalist will be carrying a flagship mirrorless body at the moment something newsworthy happens.

On the mirrorless side, Fujifilm has committed to rolling Content Authenticity support out across its X and GFX cameras, starting with models like the X-T50 and GFX100S II, with further firmware support planned but not yet fully detailed.

Content Credentials in Lightroom and Photoshop

The good news is you do not need a C2PA-enabled camera to start using Content Credentials. Adobe has built support directly into Lightroom Classic, Lightroom Desktop and Photoshop, using C2PA under the hood.

Lightroom Classic

In Lightroom Classic, Content Credentials are applied at export time.

Open the Export dialogue and scroll to the Content Credentials section, then enable Apply Content Credentials. You will need to choose how the credentials are stored: you can publish to Content Credentials Cloud, attach them to files by embedding them in the JPEG, or do both at once, which is the recommended option for most photographers. You can also decide what information to include, such as your name from your Adobe account, any connected social accounts, and a log of the editing steps recorded by Lightroom.

A few practical limitations are worth knowing about in 2026. Lightroom Classic only applies Content Credentials on JPEG export, not on TIFF, PSD or RAW files. An active internet connection is also required for the feature to work, even if you are simply attaching credentials to files rather than publishing to the cloud.

Lightroom Classic: Content Credentials are set in the Preferences and Export section …

Photoshop

Photoshop takes a slightly different approach because it can record provenance while you edit. Go to Settings or Preferences, then History and Content Credentials, and enable Content Credentials for saved documents. For each document you can turn credentials on or off individually, so not every file has to be recorded. When you export, Photoshop can embed a detailed edit history into the Content Credentials, including the use of Generative Fill, Generative Expand and other AI-powered tools.

The system records a summarised, provenance-oriented history rather than every brush stroke, but enough to show that AI tools were used and how the file evolved over time.

Keeping the Chain Intact Between Lightroom and Photoshop

If your workflow moves between Lightroom Classic and Photoshop, it is worth thinking about the provenance chain. A robust approach is to export from Lightroom with Content Credentials turned on, then open that exported file in Photoshop with Content Credentials enabled for the document. Export again from Photoshop with Content Credentials, and if you want the final file back in your Lightroom catalogue, import the Photoshop export so that Lightroom sees the credentialled version.

Is it perfectly seamless? Not yet. But this approach ensures that each major step in your workflow adds to the same signed chain instead of breaking it.

Why Content Credentials Matter in 2026

Several developments make Content Credentials especially relevant right now.

Photo Mechanic and Press Workflows

In February 2026, Camera Bits confirmed that Photo Mechanic is gaining support for the C2PA standard. For decades, Photo Mechanic has been the first stop in press photographers' workflows, used for ingest, culling and metadata. Camera Bits' goal is to preserve C2PA signatures from C2PA-enabled cameras all the way through to publication, so editors can trust that a signed image really traces back to a specific moment and camera.

Camera Bits has been clear that this feature is still in active development with no public release timeline yet, but for photojournalism this is a significant shift.

Competitions and Clubs

The Canadian Association of Photographic Art has adopted a Content Credential model for its competitions to address AI-generated imagery. Their current stance, through at least 2027, is that the model is optional and educational rather than mandatory, but potential winning entries already undergo verification that includes Content Credentials analysis, AI detection and forensic checks. Images that fail those verification steps can be disqualified, which is a strong signal of where competition rules are heading.

Platforms and the Broader Ecosystem

On the platform side, there has been real movement. LinkedIn now displays a CR icon for images carrying Content Credentials, which users can click to see the provenance summary. Google has brought C2PA-based Trusted Images to Android and Pixel, using Content Credentials and SynthID to distinguish originals and AI-generated content. Cloudflare Images and other services now preserve Content Credentials through transformations, so the provenance remains intact when images are resized or optimised for delivery.

The Content Authenticity Initiative itself has grown into a global community of more than 6,000 members by the end of 2025, spanning media, tech, education and government. This is no longer a small experiment.

The Honest Challenges (As of 2026)

That said, Content Credentials are not magic, and the current limitations are worth being transparent about.

Social Platforms Still Strip Metadata

Many social platforms still strip embedded metadata from uploads, which removes embedded C2PA manifests along with traditional EXIF and IPTC data. Tests have shown that platforms like Facebook remove Content Credentials on upload, which is one reason Adobe allows you to publish credentials to a cloud service as well, so you can still verify an image via the cloud record even if the embedded data is lost.

The Chicken-and-Egg Problem

Camera makers want platforms and tools to support provenance before they invest heavily. Platforms want a critical mass of signed content. Newsrooms want both to be stable before they change their workflows. PetaPixel's coverage of the Digimarc C2PA Chrome extension in 2025 summed up the situation bluntly: at that point, basically no photos published online were carrying C2PA metadata. That is slowly improving in 2026, but it remains an adoption loop rather than a solved problem.

The Perception Problem

At CES 2026, several analyses highlighted that many visitors misunderstood the Content Credentials icon, assuming it marked AI-generated content rather than authentic content with a provenance record. Without better public education, there is a real risk that authenticity labels are misread as AI labels, which is the exact opposite of the intended outcome.

Inconsistent Implementations

Some early implementations have also bent the semantics in unhelpful ways. Critics have pointed out that certain smartphone workflows only add C2PA manifests to images that have been processed with AI features, not to ordinary captures. That reverses the intent entirely: the real images are the ones that most need a verifiable credential.

Privacy and Identity

Finally, there is the privacy angle. C2PA and Adobe both make identity assertions optional and opt-in, so you choose whether to embed your name, social accounts or edit history. That flexibility is valuable, but it also means you should think carefully about what you are comfortable attaching to every exported file. For some photographers, including personal account details on every share will feel like a useful feature; for others, it may feel like over-exposure.

Should You Start Using Content Credentials?

For most photographers who share work online, the pragmatic answer in 2026 is yes, it is worth turning on now, even with the current rough edges.

There is no extra cost, as Content Credentials in Lightroom and Photoshop are included in your existing Adobe subscription and do not consume generative credits. They are non-destructive, meaning enabling them does not alter your image content or require a different editing approach. It simply adds metadata, and optionally a cloud record, at export.

Starting now also means you build good habits early. As more contests, clients and platforms start expecting provenance, having a back catalogue of signed images will be an advantage rather than something you are scrambling to retrofit. Organisations like the Canadian Association of Photographic Art explicitly highlight that embedded creator information and timestamps help strengthen copyright and attribution claims as part of a wider evidence chain. And the export settings give you control over privacy, so you can choose to share just a minimal provenance chain or a more detailed record including identity and edit history.

For photojournalists and press photographers, this is already moving from a nice-to-have to something expected. For commercial and fine-art photographers, it is a professional differentiator that signals authenticity and transparency at a time when clients are increasingly wary of AI fakery.

How to Check if an Image Has Content Credentials

If you want to verify an image, whether your own or someone else's, there are several options available. You can upload a file at contentcredentials.org/verify to see its provenance, including capture and edit history where available.

Adobe and its partners also provide browser extensions that detect and surface Content Credentials as you browse the web. On LinkedIn, look for the CR icon on images; clicking it shows the stored provenance for that image. Nikon users, editors and agencies can use the Nikon Authenticity Service to validate C2PA-signed images from supported cameras. And Leica's FOTOS app can read and display authenticity information for images from the M11-P, SL3-S and related cameras.

Where This Is Heading

The direction of travel is clear. The C2PA Conformance Programme and the CAI's growing membership are pushing the ecosystem towards more consistent implementations across cameras, software and platforms. Open-source tooling is making it easier for smaller developers to add support. And regulatory and industry pressure around AI transparency, especially in news and political advertising, is giving content authenticity a real tailwind.

As Camera Bits put it when discussing Photo Mechanic's planned support, the goal is not to replace trust in photographers, but to provide an additional layer of confidence in an environment where synthetic media is increasingly common.

For working photographers, the message in 2026 is straightforward. The tools are here, they are free to switch on, and they are only going to become more important. Enabling Content Credentials today is one of the simplest practical steps you can take to protect your work and to prove that it is genuinely yours.

🪦 Is Adobe Killing Lightroom with Topaz?

A few days ago I posted a video about the latest Lightroom update, version 9.2, and one of the big headlines was the new generative upscale feature powered by Topaz Gigapixel. A lot of people were excited about it, and honestly, so was I at first. But now that the dust has settled, I've had a chance to really sit with it, and I'll be straight with you: something feels off.

I've been going through your comments and doing a lot of thinking, and there are a few things here that I just can't get past.

Are We Really Going Backwards on Non-Destructive Editing?

The non-destructive workflow is one of the things that makes Lightroom so brilliant. We've reached a point where we can do masking, lighting adjustments, special effects, all without ever leaving the app or touching the original file. It's genuinely impressive how far it's come.

But this Topaz integration throws a spanner in the works. It basically puts a full stop on your edit and spits out a brand new file, which is a destructive process. And here's the thing, we've been here before. Remember when Super Resolution had the same problem? Adobe actually listened back then and sorted it so we weren't drowning in extra DNG files. So why are we going in the opposite direction now?

Innovation or Just Outsourcing?

Adobe is supposed to be leading the way in creative software. They already have Super Resolution, and it works well. So rather than pushing that further, say, allowing a proper 4x upscale, they've decided to hand it off to a third party instead.

That doesn't feel like innovation to me. It feels like taking the easy route. Especially when you consider the price increases we've seen recently. You'd expect that extra revenue to go towards building better, more seamless tools, not just bolting on someone else's technology and calling it a feature.

The Credits Problem

This is the bit that really gets me. The version of Topaz built into Lightroom is incredibly stripped back compared to the standalone app. There's no preview, barely any controls, and it costs you generative credits every single time you use it.

Compare that to the standalone Topaz app, where you get a proper preview, far more control, and unlimited upscales as part of your monthly subscription. In Lightroom, you're essentially guessing and spending credits to find out whether the result is even usable. It makes you wonder whether this is genuinely designed to improve your workflow or whether it's just another way to drive credit sales.

Let's Not Lose Sight of What Matters

I'm a big fan of AI and what it can do for our editing. It can save time, open up new possibilities, and make certain jobs a lot easier. But it should be a tool that supports your creativity, not a shortcut that sidesteps it.

Lightroom has always been a platform I've championed, and I still believe in what it can be. But moves like this make it harder to recommend with a straight face. I don't want to see it turn into a hub for third-party plugins that slowly bleed you dry with credit charges.

I've built my career on Adobe software and I'll always back it when it deserves it. But I also think it's important to say something when things don't feel right.

So Adobe, if you're paying attention: we know what you're capable of. Give us tools that respect the way we work, rather than features that complicate it. And in the meantime, if I run out of credits, I'll quite happily go back into Photoshop and rely on the traditional skills that have served me well for years. AI is a brilliant tool. But it's not the whole craft.

Generative Upscaling using Topaz Gigapixel now in Lightroom

Adobe Lightroom version 9.2, released on 20th February 2026, brings with it a significant new feature: generative upscaling powered by Gigapixel from Topaz Labs.

If you've ever needed to enlarge an image whilst maintaining sharpness and clarity, this update is going to be very welcome indeed.

Here's a comprehensive look at what it does, how to use it, and what you need to know before you get started.

What Is Generative Upscale with Gigapixel?

Generative upscale is an AI-powered image enlargement tool built directly into Lightroom, using technology from Topaz Labs' well-regarded Gigapixel application. It works by analysing your image and intelligently increasing its resolution, improving quality, sharpness, and clarity in the process. The key advantage over Lightroom's existing super resolution feature is both the degree of upscaling available and the range of file formats it supports.

How Does It Differ from Super Resolution?

Lightroom has offered super resolution for some time, but it comes with two notable limitations: it only upscales by 2x (200%), and it only works on RAW files. The new Gigapixel-powered generative upscale removes both of those restrictions. You can now upscale by either 2x or 4x, and crucially, it works on RAW files and other file formats too, making it far more versatile.

How to Access Generative Upscale

There are three ways to access the feature within Lightroom:

From the menu bar, go to Photo and select Generative Upscale. Alternatively, right-click on your image in the editing view and choose Generative Upscale from the context menu. You can also right-click on a thumbnail in the grid view to find the same option.

What Happens When You Upscale?

Once you select generative upscale, a dialogue box appears showing your upscaling options (2x or 4x), along with the resulting pixel dimensions and file size in megapixels. You'll also see how many generative credits the process will consume, and a real-time display of your current monthly generative credit balance, which is a very handy addition.

The processing itself takes place in the cloud, regardless of whether your images are stored locally or in Adobe's cloud. This means an active internet connection is required every time you use the feature. In testing, the process took around 30 seconds, though this will depend on your connection speed.

Once complete, the upscaled image is saved as a new DNG file alongside your original. This is an important point: no matter what file format you send for upscaling, the returned file will always be a DNG. The filename will reflect that Gigapixel was used and will indicate the upscaling factor applied (2x or 4x).

An Important Tip: Edit First, Then Upscale

This is perhaps the most important thing to be aware of when using generative upscale. When the upscaled DNG file is returned, all of your existing Lightroom edits, including masks and adjustments, are baked into it. The new file will not retain any editable Lightroom settings. For that reason, you should always complete your editing first before running the upscale. The good news is that your original edited file is preserved, so you will always have access to make further adjustments to it should you need to.

Generative Credits

Using generative upscale consumes generative credits from your monthly allowance. The cost is either 10 or 20 credits, depending on the size of the output, with a maximum of 20 credits per upscale. The dialogue box shows exactly how many credits will be used before you commit, and you can see your remaining balance at the same time.

The Stacking Option for Cloud Images

If you are working with images stored in Adobe's cloud, there is one additional option available: the ability to create a stack. Rather than the upscaled file appearing as a separate thumbnail alongside your original, it will be grouped together with it as a stack, keeping your library neat and organised. This option is not available for locally stored images.

Maximum Output Size

The maximum output size is an impressive 65,000 pixels on the longest edge, making this suitable for very large print work indeed.
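The numbers above are easy to sanity-check before spending credits. Here is a small illustrative helper (the function name and error messages are my own, but the 2x/4x factors and the 65,000-pixel cap come from the feature as described) that computes the output dimensions for a given source file:

```python
MAX_EDGE = 65_000  # Lightroom's stated cap on the longest edge

def upscale_dimensions(width: int, height: int, factor: int) -> tuple[int, int]:
    """Output pixel dimensions for a 2x or 4x generative upscale."""
    if factor not in (2, 4):
        raise ValueError("Lightroom offers 2x or 4x upscales only")
    out_w, out_h = width * factor, height * factor
    if max(out_w, out_h) > MAX_EDGE:
        raise ValueError(f"longest edge {max(out_w, out_h)}px exceeds the {MAX_EDGE}px cap")
    return out_w, out_h

# A 24MP file (6000 x 4000) upscaled 4x becomes a 384MP file:
w, h = upscale_dimensions(6000, 4000, 4)
print(w, h, f"{w * h / 1_000_000:.0f}MP")  # 24000 16000 384MP
```

Note the credit cost (10 or 20 depending on output size) is not modelled here, as Adobe's exact threshold between the two isn't stated; the dialogue box shows it before you commit.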

Where Generative Upscale Is Most Useful

This feature is particularly well suited to a number of scenarios. It's excellent when you've made a significant crop to an image and want to recover detail and sharpness in the enlarged result. It can also be used to improve low-resolution scans, or to breathe new life into images from older cameras with lower megapixel counts.

Quick Summary of Key Points

  • Available in Lightroom version 9.2 and later

  • Powered by Topaz Labs' Gigapixel technology

  • Upscale options: 2x or 4x

  • Works on RAW files and other file formats (unlike Super Resolution)

  • All processing is done in the cloud; an internet connection is required

  • Returns a new DNG file regardless of the original format

  • Consumes 10 or 20 generative credits (maximum 20 per upscale)

  • Maximum output: 65,000 pixels on the longest edge

  • Edits are baked into the upscaled file, so always edit first

  • Stacking option available for cloud-stored images

  • Your original file is always preserved

For photographers looking to get the most from their images, whether recovering detail from a tight crop or improving older files, this is a genuinely useful addition to Lightroom's toolkit.

Editing a Photo in Lightroom + Photoshop ... on an iPad

Not too long ago, I never would have considered editing my photos on an iPad. It always felt like something I should save for my desktop. But things have changed. Both Lightroom and Photoshop on the iPad have improved massively, and these days I often use them when travelling. More and more, this mobile workflow is becoming a real option for photographers.

In this walkthrough, I’ll show you how I edited an image completely on the iPad, starting in Lightroom, jumping over to Photoshop when needed, and then finishing off with a print.

Starting in Lightroom on the iPad

The photo I worked on was taken with my iPhone. The first job was the obvious one: straightening the image. In Lightroom, I headed to the Geometry panel and switched on the Upright option, which immediately fixed the horizon.

Next, I dealt with a distraction in the bottom left corner. Using the Remove Tool with Generative AI switched on, I brushed over the wall that had crept into the frame. Lightroom offered three variations, and the second one was perfect.

With those fixes made, I converted the photo to black and white using one of my own synced presets. A quick tweak of the Amount slider gave me just the right level of contrast.

Masking and Sky Adjustments

The sky needed attention, so I created a Select Sky mask. As usual, the AI selection bled slightly into the hills, so I used a Subtract mask to tidy things up. It wasn’t perfect, but it was good enough to move forward.

From there, I added some Dehaze and Clarity to bring detail back into the clouds. A bit of sharpening pushed the image further, but that also revealed halos around a distant lamppost. At that point, I knew it was time to send the photo into Photoshop.

Fixing Halos in Photoshop on the iPad

Jumping into Photoshop on the iPad takes a little getting used to, but once you know where things are, it feels very familiar.

To remove the halos, I used the Clone Stamp Tool on a blank layer set to Darken blend mode. This technique is brilliant because it only darkens areas brighter than the sample point. With a bit of careful cloning, the halos disappeared quickly.
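The Darken blend mode is simple to express numerically: each output pixel is whichever of the two layers is darker, which is why cloned sky only "sticks" where the base image is brighter, i.e. on the halos. A minimal numpy sketch of the idea (toy 8-bit luminance values, not Photoshop's actual implementation):

```python
import numpy as np

def darken_blend(base, clone_layer):
    """Darken blend: each output pixel is the darker of the two layers,
    so cloned pixels only take effect where the base is brighter (the halos)."""
    return np.minimum(base, clone_layer)

# A toy 1-D strip of luminance values: a bright halo (240, 238) amid sky (~180)
sky = np.array([180, 182, 240, 238, 181], dtype=np.uint8)

# Cloning clean sky (value 181) over the whole strip on a Darken layer:
clone = np.full_like(sky, 181)
print(darken_blend(sky, clone))  # [180 181 181 181 181]
```

Notice the pixel that was already darker than the sample (180) is untouched, which is exactly why this technique is so forgiving to brush with.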

I then added a subtle “glow” effect often used on landscapes. By duplicating the layer, applying a Gaussian Blur, and changing the blend mode to Soft Light at low opacity, the image gained a soft, atmospheric look.
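For the curious, the same three-step recipe can be sketched numerically. This is only a rough illustration: I've used a box filter as a crude stand-in for Gaussian Blur, and the "pegtop" soft-light formula, which is one common variant rather than Photoshop's exact implementation.

```python
import numpy as np

def box_blur(img, radius):
    """Crude stand-in for Gaussian Blur: a separable box filter."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    padded = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, rows)

def soft_light(base, blend):
    """'Pegtop' soft-light formula (one common variant); values in 0..1."""
    return (1 - 2 * blend) * base**2 + 2 * blend * base

def orton_glow(img, radius=2, opacity=0.25):
    """Duplicate -> blur -> soft-light blend, mixed back at low opacity."""
    blended = soft_light(img, box_blur(img, radius))
    return (1 - opacity) * img + opacity * blended

# A small synthetic 'image' of luminance values in 0..1
img = np.random.default_rng(0).random((8, 8))
glowed = orton_glow(img)
```

The low opacity in the final mix is what keeps the effect subtle; pushing it higher quickly tips the image into that soft-focus look.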

Back to Lightroom and Printing

With the edits complete, I sent the image back to Lightroom. From there it synced seamlessly across to my desktop, but the important point is that all of the editing was done entirely on the iPad.

Before printing, I checked the histogram and made some final tweaks. Then it was straight to print on a textured matte fine art paper. Once the ink settled, the result looked fantastic, with no halos in sight.

Final Thoughts

I’m not suggesting you should abandon your desktop for editing. Far from it. But the iPad has become a powerful option when you’re travelling, sitting in a café, or simply want to work away from your desk.

This workflow shows what’s possible: you can straighten, retouch, convert to black and white, make sky adjustments, refine details in Photoshop, and even prepare a final print — all from the iPad. And of course, everything syncs back to your desktop for finishing touches if needed.

Exciting times indeed.

HOW I Edit THIS Portrait in 2025 – Full Lightroom Workflow (No Photoshop!)

In this video I show how I now retouch a stylised portrait entirely in Lightroom, something that until recently was only possible in Photoshop, by making BIG use of Lightroom Masks …

*Newsletter Subscribers can download the same file I use in this tutorial to follow along step by step.