
Instantly Fix "Impossible" Glasses Reflections in Photoshop

Removing reflections from glasses has always been one of those jobs in Photoshop that's either felt impossible or just painfully tedious. In this tutorial, I'm showing you how the new Firefly Image Model 5 in the Photoshop Beta handles this specific problem in a way that I think you're going to find really useful.

I'm working with a portrait of Thomas Coulter, one of the veterans from my 39-45 Portraits Project, to walk you through exactly how it works.

The Challenge with Older Models

If you've tried using Generative Fill for this before, you'll know that older models like Firefly Image 1 could certainly remove a reflection, but they often introduced other problems at the same time. You'd sometimes end up with subtle changes to facial structure, eyebrows, or the shape of the glasses frames themselves. The reflection might be gone, but the portrait no longer looked quite right.

Why Firefly Image Model 5 is Worth Knowing About

Model 5 has been built with detail preservation as a priority. The idea is that it only changes what you've asked it to change, leaving everything else as close to the original as possible.

Worth knowing: this is a premium model, so it uses 10 generative credits rather than one. It also only produces a single variation, but given the quality of the result, that's rarely a problem.

How to Do It, Step by Step

  1. Open Photoshop Beta - You'll need the Beta version to access the latest Firefly models. [00:56]

  2. Make your selection - Use the Selection Brush Tool to paint over the reflections on the lenses. You don't need to be overly precise; going slightly over the frames is fine. [01:25]

  3. Open Generative Fill - Click Generative Fill in the Contextual Taskbar. If you can't see it, go to Window > Contextual Taskbar. [01:49]

  4. Choose the right model - This is the key step. In the Taskbar settings, look under Adobe Models and select Firefly Image Model 5 (Preview). [06:16]

  5. Enter your prompt - Something simple like "remove the reflection from the glasses" is all you need. [05:32]

  6. Generate - Hit Generate and give it around 10 to 12 seconds. [06:21]

The Results

What I find genuinely impressive here is that once the reflection is gone, everything else stays exactly as it was. The eyebrow hairs, the skin texture, the precise shape of the frames - all identical to the original file.

Now, Camera Raw and Lightroom do have reflection removal tools built in, and they're well worth trying, particularly on larger reflections. But for detailed areas like eyewear, where precision really matters, this approach in Photoshop gives you a level of control and accuracy that's hard to beat. If you've got portraits sitting in your archive that you've written off as too difficult, this is a good reason to dig them back out.

Fix IMPOSSIBLE Backgrounds Instantly (Lightroom + Photoshop)

Recently, Steven Gotz, a member of the Photography Community on SKOOL (LINK), sent over a brilliant RAW file of a condor. Stunning subject, great light, one problem: a massive fence running right through the background.

Rather than leave it on the shelf, I figured it was the perfect excuse to put the latest updates in Lightroom and Photoshop Beta through their paces. What would have taken ages with the Clone Stamp tool a couple of years ago can now be sorted in seconds. Here's exactly how I did it, using two different workflows.

Workflow 1: Photoshop Beta with Firefly Image 5

This is the quickest route right now, and the results are genuinely impressive.

The key is using the new Firefly Image 5 (Preview) model inside Photoshop Beta. It's been built specifically for editing while preserving detail, which matters a lot when you're dealing with complex textures like feathers and rocky backgrounds.

  1. From Lightroom to Photoshop Beta. Right-click the image in Lightroom and choose Edit In > Adobe Photoshop Beta.

  2. Select All. Once you're in Photoshop, go to Select > All. This gives the AI the full context of the frame before you do anything.

  3. Switch to Firefly Image 5. Click Generative Fill in the contextual taskbar. Here's the bit that matters: don't use the standard model. Switch it to Firefly Image 5 (Preview) from the dropdown.

  4. The prompt. This model needs a prompt to work, unlike some of the others. I kept it simple: "remove the fence from this picture."

  5. Refine the detail. The AI did a great job on the background, but because Firefly Image 5 currently outputs at 2K, the fine detail around the bird's eye and feathers was slightly softer than the original RAW. The fix is straightforward: use the Object Selection Tool to select the bird and the rock, then fill that area on the layer mask with black. That reveals the sharp original bird while keeping the AI-cleared background intact.

Workflow 2: Lightroom to Firefly Web

Not on the Photoshop Beta? No problem. You can get to the same place via Lightroom's sharing feature.

  1. Share to Firefly. In Lightroom, hit the Share button (top right) and select Firefly: Edit an image. This opens your browser and drops the photo straight into the Adobe Firefly web interface.

  2. Settings and generate. Select Firefly Image 5, bump the resolution to 2K, use the same prompt ("remove the fence from this picture"), and hit generate.

  3. Back to Photoshop. Download the cleaned image, go back to Lightroom, and open the original file in the regular version of Photoshop.

  4. Stack and align. Use File > Place Embedded to bring the Firefly-cleaned version in on top of your original. Rasterise the top layer, select both layers, then go to Edit > Auto-Align Layers to make sure everything lines up perfectly.

  5. The masking trick. Same principle as Workflow 1: use the Object Selection Tool to select the bird and the rock, then hold Option (Mac) or Alt (Windows) and click the mask icon. This hides the AI version of the bird and brings back the sharp, high-detail original underneath.

Why the masking step matters

This is the part I think is really important. It's not about letting AI take over the whole image. It's about using it to fix a specific problem, in this case the background, while keeping the actual subject exactly as it was captured in the RAW file. The integrity of the original is what you're protecting.
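If it helps to see why painting the mask black restores the original, the operation underneath is ordinary mask compositing. Here is a minimal, illustrative Python sketch (a simplified model, not Photoshop's actual rendering code) treating each pixel as a value between 0.0 and 1.0:

```python
# Simplified model of the layer-mask trick used in both workflows:
# the AI-cleaned layer sits on top of the original, and a mask decides
# which one shows through per pixel. Where the mask is black (0.0) the
# sharp original wins; where it is white (1.0) the AI layer wins.

def mask_composite(ai_layer, original, mask):
    """Per-pixel blend: out = mask * ai + (1 - mask) * original."""
    return [m * a + (1.0 - m) * o
            for a, o, m in zip(ai_layer, original, mask)]

# Tiny 4-pixel example: pixels 0-1 are background (mask white, AI kept),
# pixels 2-3 are the bird (mask painted black, original kept).
ai   = [0.2, 0.3, 0.5, 0.6]   # AI output: clean background, softer bird
orig = [0.9, 0.8, 0.7, 0.4]   # original: fence pixels, sharp bird
mask = [1.0, 1.0, 0.0, 0.0]   # black over the subject

result = mask_composite(ai, orig, mask)  # -> [0.2, 0.3, 0.7, 0.4]
```

The black region of the mask is exactly what "fill the selection with black" or Option/Alt-clicking the mask icon produces, which is why the subject comes back untouched.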

Have a look through your archives. Chances are there are shots you wrote off because of something in the background. It might be worth giving them another look.

NEW 💥 Photoshop's One-Click Auto Distraction Removal

Adobe has just dropped a seriously powerful update to the Remove Tool in the Photoshop Public Beta (version 27.6.0), and it’s a total game-changer for cleaning up your photos. It can now automatically scan your entire image, identify distractions across 26 different categories, and let you remove them with a single click.

Here is a quick look at how it works and how you can start using it to save yourself hours of manual cloning and healing.

What is the New "General Distractions" Feature?

Previously, the Remove Tool had specific buttons for "Wires and Cables" or "People." This new update introduces General Distractions. It uses generative AI to find things like trash cans, signs, vehicles, and even stray animals that might be cluttering up your shot.

How to Use It: A 3-Step Tutorial

Before you start, make sure you have GPU hardware acceleration turned on in your Photoshop settings (Preferences > Performance) to ensure the tool runs smoothly.

1. Select the Remove Tool

Head over to your toolbar and select the Remove Tool. In the options bar at the top, make sure Sample All Layers is ticked and, most importantly, check the Create New Layer box. This acts as a fail-safe, putting all your removals on a separate layer so you can easily bring things back if you change your mind later.

2. Find Your Distractions

In the options bar, click on the Find Distractions dropdown and choose General Distractions, then click Find. Photoshop will take a few moments to scan the image. When it’s finished, it will highlight potential distractions with colour-coded overlays.

The cool part? The list of categories it shows you is dynamic. It won't show you all 26 categories; it only lists the ones it actually found in your specific photo—like "Vehicles," "Animals," or "Urban Elements."

3. Refine and Remove

You have total control over what stays and what goes:

  • Toggle Categories: You can untick specific categories in the dropdown if Photoshop picked up something you actually want to keep (like a cool cloud it mistook for a "light diffusing element").

  • Manual Overwrite: Use the plus (+) or minus (-) brush icons in the options bar to manually add areas to be removed or protect areas you want to save.

  • The Big Reveal: Once you're happy with the selection, click the Tick icon. Photoshop will work its magic, and the distractions will vanish, seamlessly filling in the background.

Why This Matters

I've been testing this on complex street scenes and busy beach shots, and the results are mind-blowing. It handles everything from removing pigeons at someone's feet to rebuilding stone walls where a trash can used to be. It’s not just a time-saver; it’s doing work that used to require advanced cloning skills in just a few seconds.

Since this is currently in the Public Beta, if you run into anything unexpected, be sure to use the "Feedback" icon in the top right of Photoshop to let Adobe know. The more feedback we give them now, the better the final version will be.

My Upgraded Realistic Photoshop Lighting Effect + Dust

This is one of those techniques I absolutely love. Adding a lighting effect to a portrait can completely change the mood of an image, and it really doesn't take long once you know the steps. What I want to share here is an upgraded version of what I used to call the "world's simplest lighting effect," but this time with realistic floating dust and a bit of atmospheric depth thrown in.

The Secret to Realism: Highlights

Before you even open Photoshop, there's one thing you really need to look for in your original photo, and that's existing highlights. For a lighting effect to look convincing, your subject needs to already have highlights on the side where you're going to place the light source. If you're adding light coming down from the top left, for example, there need to be highlights there already. Without them, the effect just never looks right no matter how much you tweak it.

Step 1: Creating the Light Source

A common mistake I see is people grabbing a massive brush and clicking once. The trouble is that with a huge soft brush, the feathered edges often get clipped by the edge of the canvas, leaving a harsh, ugly line.

Here's a better approach. Create a new blank layer, then select a standard round soft brush from the toolbar with the hardness set to 0%. Set your foreground colour to white and click once in the middle of your image with a relatively small brush. Now go to Edit > Free Transform (Cmd/Ctrl + T), hold down Shift and Option on Mac or Alt on Windows, and drag a corner handle to scale that brush stroke up proportionally until it's nice and large. Then grab the Move tool and reposition the light into the corner so that only the soft, feathered edge spills into the frame.

Step 2: Adding the Atmospheric Dust

This is where you take the effect to the next level. Those tiny bits of dust and debris that become visible when caught in a beam of light make all the difference. I tend to use a texture that looks a bit like a photograph of rain at night, shot looking upwards and slightly out of focus.

To apply a dust overlay, place the image over your work and use Free Transform to scale it so it fills the whole image. If the layer is a Smart Object, right-click it and choose Rasterize Layer. Then go to Image > Adjustments > Desaturate so the dust doesn't introduce any unwanted colour. Change the Blend Mode to Screen, which knocks out the black background and leaves only the bright dust particles. Finally, add a Layer Mask to the dust layer. Grab a soft brush with a black foreground colour and paint away the dust where you don't want it. Keep it concentrated near the light source and off the main parts of your subject.
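For the curious, it's easy to verify numerically why Screen makes the black background vanish. The standard Screen formula is result = 1 - (1 - base) × (1 - blend), so a pure black blend pixel leaves the base pixel untouched, while bright particles push the result towards white. A small Python sketch:

```python
# Standard Screen blend mode on pixel values in the 0.0-1.0 range:
#     result = 1 - (1 - base) * (1 - blend)
# Black in the overlay (blend = 0.0) returns the base unchanged, which
# is why the dust texture's dark background disappears, while bright
# particles (blend near 1.0) lighten the result.

def screen(base, blend):
    return 1.0 - (1.0 - base) * (1.0 - blend)

print(screen(0.4, 0.0))  # black background pixel: base unchanged (0.4)
print(screen(0.4, 0.9))  # bright dust particle: close to white (~0.94)
```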

Step 3: Adding Movement

Static dust can look a bit "stuck on," so adding a touch of motion blur makes a huge difference. Go to Filter > Blur > Motion Blur and adjust the angle so the blur follows the direction of the light beam, usually from top left down to bottom right. Keep the distance quite small. You just want a subtle sense of movement, as if the particles are caught in a gentle drift.

How to Create Your Own Dust Textures

If you haven't got a dust overlay to hand, you can actually use AI to generate one. Using a tool like Adobe Firefly or Google Gemini, try a prompt along the lines of "dark atmospheric bokeh background with falling rain or snow particles." I find that asking for a 4x3 aspect ratio works well for most portraits.

I hope you find this upgraded technique useful for your own retouching. It's a quick way to add a lot of drama and production value to your images without needing any kind of complex setup.

The Photoshop Zoom Setting You NEED to Change ✅

Whether you are just starting out with Photoshop or you have been using it for years, there is one specific setting that can occasionally make it feel like the software is behaving rather strangely. I wanted to share a quick tip about the Zoom tool that might just save you a bit of frustration.

The Mystery of the Shifting Zoom

Have you ever tried to zoom in on a specific detail, only for that area to suddenly jump to the middle of your screen? Usually, when you click with the Zoom tool, you expect the image to get larger exactly where your cursor is sitting. However, there is a setting that changes this behaviour entirely.

If your image keeps repositioning itself every time you click to magnify, it is likely because of a single tick box in your preferences.

How to Fix It

Depending on whether you are using a Mac or Windows, the menu location is slightly different, but the setting itself is the same:

  • On Mac: Go to the Photoshop menu, then Settings, and select Tools.

  • On Windows: Go to the Edit menu, then Preferences, and select Tools.

Look for the option labelled Zoom Clicked Point to Center (Photoshop's interface uses the US spelling).

If this is ticked, Photoshop will take the exact point you clicked and move it to the very centre of your workspace as it zooms in. If you find this distracting, simply uncheck the box. Once you do that, your zoom will behave in the traditional way, staying put exactly where you click.

Why Would You Use It?

You might wonder why this setting even exists if it feels so counter-intuitive at first. It actually comes in quite handy when you are working on very large, high-resolution images or wide landscapes.

If you are trying to inspect a small mark or a bit of sensor dust right in the far corner of a photo, a standard zoom might actually push that detail off the edge of the screen as the image expands. By having "Zoom Clicked Point to Center" turned on, Photoshop pulls that corner detail right into your main field of view, making it much easier to work on without having to scroll around.

It really comes down to personal preference. Some people love the control of keeping the image static, while others prefer the software to "hand" them the detail they are looking for by placing it in the middle.

Content Credentials: The Future of Proving Your Photos Are Real ✅

In a world where AI can generate a photorealistic image in seconds, how do you prove that your photograph is actually real? That it was captured by a real camera, in a real place, by a real photographer?

That is exactly the problem Content Credentials are designed to solve, and in 2026 this technology is finally moving from niche experiment to something every working photographer needs to understand.

What Are Content Credentials?

Think of Content Credentials as a kind of nutrition label for your photographs. Just as a food label tells you what is inside the packet, Content Credentials can tell viewers key facts about an image: who created it, which camera or software was used, what kind of edits were made, and, crucially, whether AI tools were involved at any stage.

Under the hood, Content Credentials are powered by an open technical standard called C2PA, which stands for Coalition for Content Provenance and Authenticity. C2PA is a cross-industry specification backed by companies and organisations including Adobe, Microsoft, Google, Sony, Nikon, Canon, Leica, Fujifilm, the BBC, the Associated Press and many others.

The key point is that Content Credentials do not judge whether a photo is "good" or "bad". They provide a tamper-evident record of provenance, meaning a factual history of where an image came from and how it was made, so that editors, clients and audiences can make their own decisions about whether to trust what they are seeing.

How Do Content Credentials Actually Work?

At a technical level, C2PA uses cryptographic hashes and digital signatures, the same kind of technology that protects online banking, to bind provenance information to media files. In practice, the chain looks like this:

  1. Capture. On supported cameras, a C2PA manifest is signed at the moment of capture, recording the device identity and, where enabled, when and where the image was created.

  2. Edit. When the photo is opened in C2PA-enabled software such as Photoshop or Lightroom, the software can log key edits, including the use of generative AI tools, into an updated manifest.

  3. Export and publish. On export, the photographer chooses what information to include. The Content Credentials can be embedded in the file itself, published to a cloud service, or both.

  4. Verify. Anyone can later inspect the credentials using tools such as the Content Authenticity Initiative's Inspect site at contentcredentials.org/verify, browser extensions, or compatible apps and services.

If someone tampers with the pixels or tries to alter the signed provenance after the fact, the cryptographic checks break. The result is that the credentials are tamper-evident: you cannot quietly change the file or its signed history without that being detectable.
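To make the tamper-evidence idea concrete, here is a deliberately simplified Python sketch. It is an analogy only: real C2PA manifests are JUMBF boxes signed with X.509 certificate signatures, whereas this stand-in uses a SHA-256 content hash plus an HMAC with a made-up key. The shape of the guarantee is the same, though: change the pixels or the signed history and verification fails.

```python
import hashlib
import hmac
import json

# Illustrative model of C2PA-style tamper evidence, NOT the real C2PA
# format. The manifest records a hash of the image bytes, and the whole
# manifest is then signed. A keyed HMAC stands in for the certificate
# signature a real signer would apply.

SIGNING_KEY = b"demo-key-not-a-real-certificate"  # hypothetical key

def sign_manifest(image_bytes, claims):
    """Bind provenance claims to the image and sign the result."""
    manifest = dict(claims,
                    image_sha256=hashlib.sha256(image_bytes).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest, signature

def verify(image_bytes, manifest, signature):
    """Check both the signed history and the pixels it refers to."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # the signed history itself was altered
    return manifest["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()

image = b"\xff\xd8...original pixels...\xff\xd9"
manifest, sig = sign_manifest(image, {"device": "ExampleCam", "edits": []})

verify(image, manifest, sig)                      # True: untouched
verify(image + b"!", manifest, sig)               # False: pixels changed
verify(image, dict(manifest, edits=["x"]), sig)   # False: history changed
```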

Which Cameras Support Content Credentials in 2026?

Camera support has accelerated over the last two years. A useful snapshot comes from the community-maintained c2pa.camera site, which tracks devices that can sign images using the C2PA standard.

As of early 2026, the list of supported cameras is growing quickly, spanning flagship mirrorless bodies and, notably, smartphones.

One particularly important entry is the Google Pixel 10. Thanks to its Tensor G5 and Titan M2 security chips and built-in C2PA support in the Google Camera app, it is currently the least expensive way to capture C2PA-signed images. That matters because not every working photographer or journalist will be carrying a flagship mirrorless body at the moment something newsworthy happens.

On the mirrorless side, Fujifilm has committed to rolling Content Authenticity support out across its X and GFX cameras, starting with models like the X-T50 and GFX100S II, with further firmware support planned but not yet fully detailed.

Content Credentials in Lightroom and Photoshop

The good news is you do not need a C2PA-enabled camera to start using Content Credentials. Adobe has built support directly into Lightroom Classic, Lightroom Desktop and Photoshop, using C2PA under the hood.

Lightroom Classic

In Lightroom Classic, Content Credentials are applied at export time.

Open the Export dialogue and scroll to the Content Credentials section, then enable Apply Content Credentials. You will need to choose how the credentials are stored: you can publish to Content Credentials Cloud, attach them to files by embedding them in the JPEG, or do both at once, which is the recommended option for most photographers. You can also decide what information to include, such as your name from your Adobe account, any connected social accounts, and a log of the editing steps recorded by Lightroom.

A few practical limitations are worth knowing about in 2026. Lightroom Classic only applies Content Credentials on JPEG export, not on TIFF, PSD or RAW files. An active internet connection is also required for the feature to work, even if you are simply attaching credentials to files rather than publishing to the cloud.

Photoshop

Photoshop takes a slightly different approach because it can record provenance while you edit. Go to Settings or Preferences, then History and Content Credentials, and enable Content Credentials for saved documents. For each document you can turn credentials on or off individually, so not every file has to be recorded. When you export, Photoshop can embed a detailed edit history into the Content Credentials, including the use of Generative Fill, Generative Expand and other AI-powered tools.

The system records a summarised, provenance-oriented history rather than every brush stroke, but enough to show that AI tools were used and how the file evolved over time.

Keeping the Chain Intact Between Lightroom and Photoshop

If your workflow moves between Lightroom Classic and Photoshop, it is worth thinking about the provenance chain. A robust approach is to export from Lightroom with Content Credentials turned on, then open that exported file in Photoshop with Content Credentials enabled for the document. Export again from Photoshop with Content Credentials, and if you want the final file back in your Lightroom catalogue, import the Photoshop export so that Lightroom sees the credentialled version.

Is it perfectly seamless? Not yet. But this approach ensures that each major step in your workflow adds to the same signed chain instead of breaking it.

Why Content Credentials Matter in 2026

Several developments make Content Credentials especially relevant right now.

Photo Mechanic and Press Workflows

In February 2026, Camera Bits confirmed that Photo Mechanic is gaining support for the C2PA standard. For decades, Photo Mechanic has been the first stop in press photographers' workflows, used for ingest, culling and metadata. Camera Bits' goal is to preserve C2PA signatures from C2PA-enabled cameras all the way through to publication, so editors can trust that a signed image really traces back to a specific moment and camera.

Camera Bits has been clear that this feature is still in active development with no public release timeline yet, but for photojournalism this is a significant shift.

Competitions and Clubs

The Canadian Association of Photographic Art has adopted a Content Credential model for its competitions to address AI-generated imagery. Their current stance, through at least 2027, is that the model is optional and educational rather than mandatory, but potential winning entries already undergo verification that includes Content Credentials analysis, AI detection and forensic checks. Images that fail those verification steps can be disqualified, which is a strong signal of where competition rules are heading.

Platforms and the Broader Ecosystem

On the platform side, there has been real movement. LinkedIn now displays a CR icon for images carrying Content Credentials, which users can click to see the provenance summary. Google has brought C2PA-based Trusted Images to Android and Pixel, using Content Credentials and SynthID to distinguish originals and AI-generated content. Cloudflare Images and other services now preserve Content Credentials through transformations, so the provenance remains intact when images are resized or optimised for delivery.

The Content Authenticity Initiative itself has grown into a global community of more than 6,000 members by the end of 2025, spanning media, tech, education and government. This is no longer a small experiment.

The Honest Challenges (As of 2026)

That said, Content Credentials are not magic, and the current limitations are worth being transparent about.

Social Platforms Still Strip Metadata

Many social platforms still strip embedded metadata from uploads, which removes embedded C2PA manifests along with traditional EXIF and IPTC data. Tests have shown that platforms like Facebook remove Content Credentials on upload, which is one reason Adobe allows you to publish credentials to a cloud service as well, so you can still verify an image via the cloud record even if the embedded data is lost.

The Chicken-and-Egg Problem

Camera makers want platforms and tools to support provenance before they invest heavily. Platforms want a critical mass of signed content. Newsrooms want both to be stable before they change their workflows. PetaPixel's coverage of the Digimarc C2PA Chrome extension in 2025 summed up the situation bluntly: at that point, basically no photos published online were carrying C2PA metadata. That is slowly improving in 2026, but it remains an adoption loop rather than a solved problem.

The Perception Problem

At CES 2026, several analyses highlighted that many visitors misunderstood the Content Credentials icon, assuming it marked AI-generated content rather than authentic content with a provenance record. Without better public education, there is a real risk that authenticity labels are misread as AI labels, which is the exact opposite of the intended outcome.

Inconsistent Implementations

Some early implementations have also bent the semantics in unhelpful ways. Critics have pointed out that certain smartphone workflows only add C2PA manifests to images that have been processed with AI features, not to ordinary captures. That reverses the intent entirely: the real images are the ones that most need a verifiable credential.

Privacy and Identity

Finally, there is the privacy angle. C2PA and Adobe both make identity assertions optional and opt-in, so you choose whether to embed your name, social accounts or edit history. That flexibility is valuable, but it also means you should think carefully about what you are comfortable attaching to every exported file. For some photographers, including personal account details on every share will feel like a useful feature; for others, it may feel like over-exposure.

Should You Start Using Content Credentials?

For most photographers who share work online, the pragmatic answer in 2026 is yes, it is worth turning on now, even with the current rough edges.

There is no extra cost, as Content Credentials in Lightroom and Photoshop are included in your existing Adobe subscription and do not consume generative credits. They are non-destructive, meaning enabling them does not alter your image content or require a different editing approach. It simply adds metadata, and optionally a cloud record, at export.

Starting now also means you build good habits early. As more contests, clients and platforms start expecting provenance, having a back catalogue of signed images will be an advantage rather than something you are scrambling to retrofit. Organisations like the Canadian Association of Photographic Art explicitly highlight that embedded creator information and timestamps help strengthen copyright and attribution claims as part of a wider evidence chain. And the export settings give you control over privacy, so you can choose to share just a minimal provenance chain or a more detailed record including identity and edit history.

For photojournalists and press photographers, this is already moving from a nice-to-have to something expected. For commercial and fine-art photographers, it is a professional differentiator that signals authenticity and transparency at a time when clients are increasingly wary of AI fakery.

How to Check if an Image Has Content Credentials

If you want to verify an image, whether your own or someone else's, there are several options available. You can upload a file at contentcredentials.org/verify to see its provenance, including capture and edit history where available.

Adobe and its partners also provide browser extensions that detect and surface Content Credentials as you browse the web. On LinkedIn, look for the CR icon on images; clicking it shows the stored provenance for that image. Nikon users, editors and agencies can use the Nikon Authenticity Service to validate C2PA-signed images from supported cameras. And Leica's FOTOS app can read and display authenticity information for images from the M11-P, SL3-S and related cameras.
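If you are comfortable with a little code, you can also run a quick first-pass check yourself. In JPEG files, C2PA manifests are carried in APP11 marker segments (as JUMBF boxes), so scanning for an APP11 segment hints that credentials may be present. This sketch only detects the segment type and validates nothing, so treat it as a pre-filter before a proper verifier such as contentcredentials.org/verify:

```python
def has_app11_segment(jpeg_bytes):
    """Scan JPEG marker segments for APP11 (0xFFEB), the segment type
    C2PA uses to carry JUMBF-boxed manifests. Presence only *suggests*
    Content Credentials; it proves nothing about their validity."""
    if jpeg_bytes[:2] != b"\xff\xd8":      # must start with SOI marker
        return False
    i = 2
    while i + 2 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            return False                   # lost sync: not a marker segment
        marker = jpeg_bytes[i + 1]
        if marker == 0xEB:                 # APP11 found
            return True
        if marker == 0xDA:                 # start of scan: stop parsing
            return False
        if i + 4 > len(jpeg_bytes):
            return False                   # truncated segment header
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        i += 2 + length                    # length includes its own 2 bytes
    return False

# Synthetic examples (not real files):
has_app11_segment(b"\xff\xd8\xff\xeb\x00\x06JUMB")      # True
has_app11_segment(b"\xff\xd8\xff\xe0\x00\x04JF\xff\xda")  # False
```

Note this is a sketch that ignores rare standalone markers and, of course, the cloud-published credentials that survive even when platforms strip embedded metadata.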

Where This Is Heading

The direction of travel is clear. The C2PA Conformance Programme and the CAI's growing membership are pushing the ecosystem towards more consistent implementations across cameras, software and platforms. Open-source tooling is making it easier for smaller developers to add support. And regulatory and industry pressure around AI transparency, especially in news and political advertising, is giving content authenticity a real tailwind.

As Camera Bits put it when discussing Photo Mechanic's planned support, the goal is not to replace trust in photographers, but to provide an additional layer of confidence in an environment where synthetic media is increasingly common.

For working photographers, the message in 2026 is straightforward. The tools are here, they are free to switch on, and they are only going to become more important. Enabling Content Credentials today is one of the simplest practical steps you can take to protect your work and to prove that it is genuinely yours.

🊌 Is Adobe Killing Lightroom with Topaz?

A few days ago I posted a video about the latest Lightroom update, version 9.2, and one of the big headlines was the new generative upscale feature powered by Topaz Gigapixel. A lot of people were excited about it, and honestly, so was I at first. But now that the dust has settled, I've had a chance to really sit with it, and I'll be straight with you: something feels off.

I've been going through your comments and doing a lot of thinking, and there are a few things here that I just can't get past.

Are We Really Going Backwards on Non-Destructive Editing?

The non-destructive workflow is one of the things that makes Lightroom so brilliant. We've reached a point where we can do masking, lighting adjustments, special effects, all without ever leaving the app or touching the original file. It's genuinely impressive how far it's come.

But this Topaz integration throws a spanner in the works. It basically puts a full stop on your edit and spits out a brand new file, which is a destructive process. And here's the thing: we've been here before. Remember when Super Resolution had the same problem? Adobe actually listened back then and sorted it so we weren't drowning in extra DNG files. So why are we going in the opposite direction now?

Innovation or Just Outsourcing?

Adobe is supposed to be leading the way in creative software. They already have Super Resolution, and it works well. So rather than pushing that further, say, allowing a proper 4x upscale, they've decided to hand it off to a third party instead.

That doesn't feel like innovation to me. It feels like taking the easy route. Especially when you consider the price increases we've seen recently. You'd expect that extra revenue to go towards building better, more seamless tools, not just bolting on someone else's technology and calling it a feature.

The Credits Problem

This is the bit that really gets me. The version of Topaz built into Lightroom is incredibly stripped back compared to the standalone app. There's no preview, barely any controls, and it costs you generative credits every single time you use it.

Compare that to the standalone Topaz app, where you get a proper preview, far more control, and unlimited upscales as part of your monthly subscription. In Lightroom, you're essentially guessing and spending credits to find out whether the result is even usable. It makes you wonder whether this is genuinely designed to improve your workflow or whether it's just another way to drive credit sales.

Let's Not Lose Sight of What Matters

I'm a big fan of AI and what it can do for our editing. It can save time, open up new possibilities, and make certain jobs a lot easier. But it should be a tool that supports your creativity, not a shortcut that sidesteps it.

Lightroom has always been a platform I've championed, and I still believe in what it can be. But moves like this make it harder to recommend with a straight face. I don't want to see it turn into a hub for third-party plugins that slowly bleed you dry with credit charges.

I've built my career on Adobe software and I'll always back it when it deserves it. But I also think it's important to say something when things don't feel right.

So Adobe, if you're paying attention: we know what you're capable of. Give us tools that respect the way we work, rather than features that complicate it. And in the meantime, if I run out of credits, I'll quite happily go back into Photoshop and rely on the traditional skills that have served me well for years. AI is a brilliant tool. But it's not the whole craft.

Generative Upscaling using Topaz Gigapixel now in Lightroom

Adobe Lightroom version 9.2, released on 20th February 2026, brings with it a significant new feature: generative upscaling powered by Gigapixel from Topaz Labs.

If you've ever needed to enlarge an image whilst maintaining sharpness and clarity, this update is going to be very welcome indeed.

Here's a comprehensive look at what it does, how to use it, and what you need to know before you get started.

What Is Generative Upscale with Gigapixel?

Generative upscale is an AI-powered image enlargement tool built directly into Lightroom, using technology from Topaz Labs' well-regarded Gigapixel application. It works by analysing your image and intelligently increasing its resolution, improving quality, sharpness, and clarity in the process. The key advantage over Lightroom's existing super resolution feature is both the degree of upscaling available and the range of file formats it supports.

How Does It Differ from Super Resolution?

Lightroom has offered super resolution for some time, but it comes with two notable limitations: it only upscales by 2x (200%), and it only works on RAW files. The new Gigapixel-powered generative upscale removes both of those restrictions. You can now upscale by either 2x or 4x, and crucially, it works on RAW files and other file formats too, making it far more versatile.

How to Access Generative Upscale

There are three ways to access the feature within Lightroom:

  • From the menu bar, go to Photo and select Generative Upscale.

  • Right-click on your image in the editing view and choose Generative Upscale from the context menu.

  • Right-click on a thumbnail in the grid view, where you'll find the same option.

What Happens When You Upscale?

Once you select generative upscale, a dialogue box appears showing your upscaling options (2x or 4x), along with the resulting pixel dimensions and resolution in megapixels. You'll also see how many generative credits the process will consume, and a real-time display of your current monthly generative credit balance, which is a very handy addition.

The processing itself takes place in the cloud, regardless of whether your images are stored locally or in Adobe's cloud. This means an active internet connection is required every time you use the feature. In testing, the process took around 30 seconds, though this will depend on your connection speed.

Once complete, the upscaled image is saved as a new DNG file alongside your original. This is an important point: no matter what file format you send for upscaling, the returned file will always be a DNG. The filename will reflect that Gigapixel was used and will indicate the upscaling factor applied (2x or 4x).

An Important Tip: Edit First, Then Upscale

This is perhaps the most important thing to be aware of when using generative upscale. When the upscaled DNG file is returned, all of your existing Lightroom edits, including masks and adjustments, are baked into it. The new file will not retain any editable Lightroom settings. For that reason, you should always complete your editing first before running the upscale. The good news is that your original edited file is preserved, so you will always have access to make further adjustments to it should you need to.

Generative Credits

Using generative upscale consumes generative credits from your monthly allowance. The cost is either 10 or 20 credits, depending on the size of the output, with a maximum of 20 credits per upscale. The dialogue box shows exactly how many credits will be used before you commit, and you can see your remaining balance at the same time.

The Stacking Option for Cloud Images

If you are working with images stored in Adobe's cloud, there is one additional option available: the ability to create a stack. Rather than the upscaled file appearing as a separate thumbnail alongside your original, it will be grouped together with it as a stack, keeping your library neat and organised. This option is not available for locally stored images.

Maximum Output Size

The maximum output size is an impressive 65,000 pixels on the longest edge, making this suitable for very large print work indeed.
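If you want to sanity-check the numbers before spending credits, the arithmetic is straightforward. Here's a minimal Python sketch of it -- the 2x/4x factors and the 65,000-pixel longest-edge cap come from the feature itself, but the function names and the sample dimensions are purely illustrative (this is not an Adobe API):

```python
# Quick arithmetic for checking whether an upscale fits under the cap.
MAX_LONG_EDGE = 65_000  # Lightroom's longest-edge limit for generative upscale

def upscale_dimensions(width, height, factor):
    """Return output dimensions for a 2x or 4x upscale,
    or raise if the longest edge would exceed the cap."""
    if factor not in (2, 4):
        raise ValueError("Generative upscale offers 2x or 4x only")
    out_w, out_h = width * factor, height * factor
    if max(out_w, out_h) > MAX_LONG_EDGE:
        raise ValueError(
            f"Longest edge {max(out_w, out_h)} px exceeds {MAX_LONG_EDGE} px"
        )
    return out_w, out_h

def megapixels(width, height):
    """Resolution in megapixels, as shown in the upscale dialogue."""
    return round(width * height / 1_000_000, 1)

# A 45.7 MP file (8256 x 5504, for example) upscaled 4x:
w, h = upscale_dimensions(8256, 5504, 4)
print(w, h, megapixels(w, h))  # 33024 22016 727.1
```

In other words, even a modern high-megapixel file upscaled 4x sits comfortably under the cap; you'd only hit the limit with unusually large source images.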

Where Generative Upscale Is Most Useful

This feature is particularly well suited to a number of scenarios. It's excellent when you've made a significant crop to an image and want to recover detail and sharpness in the enlarged result. It can also be used to improve low-resolution scans, or to breathe new life into images from older cameras with lower megapixel counts.

Quick Summary of Key Points

  • Available in Lightroom version 9.2 and later

  • Powered by Topaz Labs' Gigapixel technology

  • Upscale options: 2x or 4x

  • Works on RAW files and other file formats (unlike super resolution)

  • All processing is done in the cloud; an internet connection is required

  • Returns a new DNG file regardless of the original format

  • Consumes 10 or 20 generative credits (maximum 20 per upscale)

  • Maximum output: 65,000 pixels on the longest edge

  • Edits are baked into the upscaled file, so always edit first

  • Stacking option available for cloud-stored images

  • Your original file is always preserved

For photographers looking to get the most from their images, whether recovering detail from a tight crop or improving older files, this is a genuinely useful addition to Lightroom's toolkit.

Reality vs Photoshop - Is Faking It Cheating? đŸ¤ˇâ€â™‚ī¸

Car photography always looks that little bit more dramatic when there's a wet road reflection underneath the vehicle. But what do you do when the road is bone dry? In this guide, I'll walk you through two ways to fake a puddle reflection in Photoshop -- one traditional, one powered by AI -- and then I'll leave you with a question worth thinking about.

Method One: The Manual Approach

Step 1: Select the Car

Start by grabbing the Object Selection tool from the toolbar. In the options bar at the top of the screen, make sure the mode is set to Cloud for the best possible result, then click Select Subject. Photoshop will do a surprisingly good job of selecting the car in just a moment or two.

Step 2: Copy the Car onto Its Own Layer

With your selection active, press Command + J (Mac) or Control + J (Windows) to copy the car up onto a new layer. If you toggle every other layer off, you should see just the isolated car sitting cleanly on a transparent background.

Step 3: Flip It Upside Down

Go to Edit > Transform > Flip Vertical. This flips the car layer to create the basis of your reflection. Now grab the Move tool, hold down Shift (to keep movement perfectly vertical) and drag the flipped car downwards until the tyres of both the original and the reflection are just touching.

If things look slightly off-angle, go to Edit > Free Transform, move your cursor just outside the bounding box until you see the rotation cursor, and give it a gentle nudge until it lines up properly.

Step 4: Add a Black Layer Mask

Rename this layer "Reflection" to keep things tidy. Then, holding down Option (Mac) or Alt (Windows), click the Layer Mask icon at the bottom of the Layers panel. This adds a black mask that hides the layer entirely -- which is exactly what you want for now.

Step 5: Draw the Puddle Shape

Select the Lasso tool and make sure you click directly on the layer mask thumbnail (you should see a white border appear around it, confirming it's active). Now draw a rough, freehand puddle shape beneath the car's tyres -- it doesn't need to be perfect; natural-looking and irregular is actually better here.

Step 6: Fill with White to Reveal the Reflection

Go to Edit > Fill, set the contents to White, and click OK. The reflection will now appear only within the puddle shape you drew.

Step 7: Soften the Edges

Zoom in and you'll notice the puddle edge looks very sharp and unnatural. To fix that, go to Filter > Blur > Gaussian Blur and apply just a small amount -- around 3 pixels is usually enough. This softens the boundary and helps the reflection blend into the ground convincingly.

Finally, you can reduce the opacity of the Reflection layer slightly to make the whole thing look a little more subtle and true to life.
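For anyone curious about the logic behind those steps, the same flip-mask-blur sequence can be sketched in a few lines of Python using Pillow. Photoshop is the real tool here; this is just a toy illustration of the technique, with placeholder images, coordinates, and opacity values standing in for your actual photo:

```python
# A scripted sketch of the manual reflection technique using Pillow.
# All sizes, colours, and coordinates below are placeholders.
from PIL import Image, ImageDraw, ImageFilter, ImageOps

W, H = 800, 600
base = Image.new("RGBA", (W, H), (40, 40, 40, 255))   # stand-in for the road
car = Image.new("RGBA", (W, H), (0, 0, 0, 0))         # isolated "car" layer
car_bottom = 250  # y-position of the tyres in this toy example
ImageDraw.Draw(car).rectangle((250, 150, 550, car_bottom),
                              fill=(200, 30, 30, 255))

# Step 3: flip vertically, then shift so the tyres of both copies meet.
reflection = ImageOps.flip(car)
shift_up = (H - car_bottom) - car_bottom  # flipped car's top back to the tyres
reflection = reflection.transform(
    reflection.size, Image.AFFINE, (1, 0, 0, 0, 1, shift_up)
)

# Steps 4-6: a black mask, with a rough white puddle shape painted in.
mask = Image.new("L", (W, H), 0)
ImageDraw.Draw(mask).ellipse((200, 230, 600, 320), fill=255)

# Step 7: soften the puddle edge with a small Gaussian blur (~3 px).
mask = mask.filter(ImageFilter.GaussianBlur(3))

# Knock the reflection's opacity back so it reads as subtle.
reflection.putalpha(reflection.getchannel("A").point(lambda a: int(a * 0.6)))

# Reveal the reflection only inside the puddle shape.
result = Image.composite(Image.alpha_composite(base, reflection), base, mask)
result.save("road_with_reflection.png")
```

The black-mask-then-fill-white logic maps directly onto `Image.composite`: wherever the mask is white you see the reflection, wherever it's black you see the untouched road, and the Gaussian blur feathers the boundary between the two.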

Method Two: Using Adobe Firefly's Generative Fill

If you want a quicker and arguably more realistic result, Photoshop's AI tools can do a remarkable job here.

Step 1: Load the Puddle Selection

Hold Command (Mac) or Control (Windows) and click directly on the layer mask from your first reflection layer. This loads the puddle shape back as an active selection, saving you from having to draw it again.

Step 2: Select the Background Layer

Click on the main image layer, so that Generative Fill works on the background rather than the reflection layer.

Step 3: Run Generative Fill

In the contextual taskbar, click Generative Fill and type a prompt along the lines of: a reflection of car in puddle of water. For the AI model, select Firefly (specifically the Firefly Built and Expand model released in January 2026). If you're on a Creative Cloud Pro account, this won't cost you any credits -- whereas models like Flux or Nano Banana can use anywhere between 20 and 30 credits per generation.

Click Generate.

Step 4: Choose Your Favourite Variation

Firefly will produce three variations for you to compare. Have a look through them and pick the one that looks most convincing. You'll likely notice that the AI does something quite clever: it reflects the sky in the puddle on the far side of the car, just as real water would. Achieving that manually in Photoshop would take considerably more time and effort.

Which Method Should You Use?

For a quick and dirty result, the manual method works well and gives you full control. But for something that genuinely looks like a photograph taken on a wet road, the AI approach is hard to argue with -- particularly because of how naturally it handles the environmental reflections in the water.

A Question Worth Thinking About

Here's something to consider. When photographing that car, there were really two options: bring bottles of water to pour around the car and create a real puddle on the dry road, or add the reflection later in post-production, either manually or with AI.

Both approaches result in a reflection that wasn't originally there. The only difference is when in the process you add it.

So what do you think -- is there a meaningful ethical difference between physically creating something on location and digitally adding it afterwards? When it comes to reflections specifically, does it matter?

Let me know your thoughts in the comments below.

How Many Generative Fill Credits do you have left? đŸ¤ˇâ€â™‚ī¸

When I publish a new video on my YouTube Channel, one of the comments I often see is ...

"We should be able to see how many credits we have left when we use the AI in Photoshop"

So, this week I've recorded a short video showing exactly how you can ✅

Hope it's useful
Cheers
Glyn