
Instantly Fix "Impossible" Glasses Reflections in Photoshop

Removing reflections from glasses has always been one of those jobs in Photoshop that's either felt impossible or just painfully tedious. In this tutorial, I'm showing you how the new Firefly Image Model 5 in the Photoshop Beta handles this specific problem in a way that I think you're going to find really useful.

I'm working with a portrait of Thomas Coulter, one of the veterans from my 39-45 Portraits Project, to walk you through exactly how it works.

The Challenge with Older Models

If you've tried using Generative Fill for this before, you'll know that older models like Firefly Image 1 could certainly remove a reflection, but they often introduced other problems at the same time. You'd sometimes end up with subtle changes to facial structure, eyebrows, or the shape of the glasses frames themselves. The reflection might be gone, but the portrait no longer looked quite right.

Why Firefly Image Model 5 is Worth Knowing About

Model 5 has been built with detail preservation as a priority. The idea is that it only changes what you've asked it to change, leaving everything else as close to the original as possible.

Worth knowing: this is a premium model, so it uses 10 generative credits rather than one. It also only produces a single variation, but given the quality of the result, that's rarely a problem.

How to Do It, Step by Step

  1. Open Photoshop Beta - You'll need the Beta version to access the latest Firefly models. [00:56]

  2. Make your selection - Use the Selection Brush Tool to paint over the reflections on the lenses. You don't need to be overly precise; going slightly over the frames is fine. [01:25]

  3. Open Generative Fill - Click Generative Fill in the Contextual Taskbar. If you can't see it, go to Window > Contextual Taskbar. [01:49]

  4. Choose the right model - This is the key step. In the Taskbar settings, look under Adobe Models and select Firefly Image Model 5 (Preview). [06:16]

  5. Enter your prompt - Something simple like "remove the reflection from the glasses" is all you need. [05:32]

  6. Generate - Hit Generate and give it around 10 to 12 seconds. [06:21]

The Results

What I find genuinely impressive here is that once the reflection is gone, everything else stays exactly as it was. The eyebrow hairs, the skin texture, the precise shape of the frames - all identical to the original file.

Now, Camera Raw and Lightroom do have reflection removal tools built in, and they're well worth trying, particularly on larger reflections. But for detailed areas like eyewear, where precision really matters, this approach in Photoshop gives you a level of control and accuracy that's hard to beat. If you've got portraits sitting in your archive that you've written off as too difficult, this is a good reason to dig them back out.

Fix IMPOSSIBLE Backgrounds Instantly (Lightroom + Photoshop)

Recently, Steven Gotz, a member of the Photography Community on SKOOL ( LINK ), sent over a brilliant RAW file of a condor. Stunning subject, great light, one problem: a massive fence running right through the background.

Rather than leave it on the shelf, I figured it was the perfect excuse to put the latest updates in Lightroom and Photoshop Beta through their paces. What would have taken ages with the Clone Stamp tool a couple of years ago can now be sorted in seconds. Here's exactly how I did it, using two different workflows.

Workflow 1: Photoshop Beta with Firefly Image 5

This is the quickest route right now, and the results are genuinely impressive.

The key is using the new Firefly Image 5 (Preview) model inside Photoshop Beta. It's been built specifically for editing while preserving detail, which matters a lot when you're dealing with complex textures like feathers and rocky backgrounds.

  1. From Lightroom to Photoshop Beta. Right-click the image in Lightroom and choose Edit In > Adobe Photoshop Beta.

  2. Select All. Once you're in Photoshop, go to Select > All. This gives the AI the full context of the frame before you do anything.

  3. Switch to Firefly Image 5. Click Generative Fill in the contextual taskbar. Here's the bit that matters: don't use the standard model. Switch it to Firefly Image 5 (Preview) from the dropdown.

  4. The prompt. This model needs a prompt to work, unlike some of the others. I kept it simple: "remove the fence from this picture."

  5. Refine the detail. The AI did a great job on the background, but because Firefly Image 5 currently outputs at 2K, the fine detail around the bird's eye and feathers was slightly softer than the original RAW. The fix is straightforward: use the Object Selection Tool to select the bird and the rock, then fill that area on the layer mask with black. That reveals the sharp original bird while keeping the AI-cleared background intact.

Workflow 2: Lightroom to Firefly Web

Not on the Photoshop Beta? No problem. You can get to the same place via Lightroom's sharing feature.

  1. Share to Firefly. In Lightroom, hit the Share button (top right) and select Firefly: Edit an image. This opens your browser and drops the photo straight into the Adobe Firefly web interface.

  2. Settings and generate. Select Firefly Image 5, bump the resolution to 2K, use the same prompt ("remove the fence from this picture"), and hit generate.

  3. Back to Photoshop. Download the cleaned image, go back to Lightroom, and open the original file in the regular version of Photoshop.

  4. Stack and align. Use File > Place Embedded to bring the Firefly-cleaned version in on top of your original. Rasterise the top layer, select both layers, then go to Edit > Auto-Align Layers to make sure everything lines up perfectly.

  5. The masking trick. Same principle as Workflow 1: use the Object Selection Tool to select the bird and the rock, then hold Option (Mac) or Alt (Windows) and click the mask icon. This hides the AI version of the bird and brings back the sharp, high-detail original underneath.

Why the masking step matters

This is the part I think is really important. It's not about letting AI take over the whole image. It's about using it to fix a specific problem, in this case the background, while keeping the actual subject exactly as it was captured in the RAW file. The integrity of the original is what you're protecting.

Have a look through your archives. Chances are there are shots you wrote off because of something in the background. It might be worth giving them another look.

My Upgraded Realistic Photoshop Lighting Effect + Dust

This is one of those techniques I absolutely love. Adding a lighting effect to a portrait can completely change the mood of an image, and it really doesn't take long once you know the steps. What I want to share here is an upgraded version of what I used to call the "world's simplest lighting effect," but this time with realistic floating dust and a bit of atmospheric depth thrown in.

The Secret to Realism: Highlights

Before you even open Photoshop, there's one thing you really need to look for in your original photo, and that's existing highlights. For a lighting effect to look convincing, your subject needs to already have highlights on the side where you're going to place the light source. If you're adding light coming down from the top left, for example, there need to be highlights there already. Without them, the effect just never looks right no matter how much you tweak it.

Step 1: Creating the Light Source

A common mistake I see is people grabbing a massive brush and clicking once. The trouble is that with a huge soft brush, the feathered edges often get clipped by the edge of the canvas, leaving a harsh, ugly line.

Here's a better approach. Create a new blank layer, then select a standard round soft brush from the toolbar with the hardness set to 0%. Set your foreground colour to white and click once in the middle of your image with a relatively small brush. Now go to Edit > Free Transform (Cmd/Ctrl + T), hold down Shift and Option on Mac or Alt on Windows, and drag a corner handle to scale that brush stroke up proportionally until it's nice and large. Then grab the Move tool and reposition the light into the corner so that only the soft, feathered edge spills into the frame.

Step 2: Adding the Atmospheric Dust

This is where you take the effect to the next level. Those tiny bits of dust and debris that become visible when caught in a beam of light make all the difference. I tend to use a texture that looks a bit like a photograph of rain at night, shot looking upwards and slightly out of focus.

To apply a dust overlay, place the image over your work and use Free Transform to scale it so it fills the whole image. If the layer is a Smart Object, right-click it and choose Rasterize Layer. Then go to Image > Adjustments > Desaturate so the dust doesn't introduce any unwanted colour. Change the Blend Mode to Screen, which knocks out the black background and leaves only the bright dust particles. Finally, add a Layer Mask to the dust layer. Grab a soft brush with a black foreground colour and paint away the dust where you don't want it. Keep it concentrated near the light source and off the main parts of your subject.

Step 3: Adding Movement

Static dust can look a bit "stuck on," so adding a touch of motion blur makes a huge difference. Go to Filter > Blur > Motion Blur and adjust the angle so the blur follows the direction of the light beam, usually from top left down to bottom right. Keep the distance quite small. You just want a subtle sense of movement, as if the particles are caught in a gentle drift.

How to Create Your Own Dust Textures

If you haven't got a dust overlay to hand, you can actually use AI to generate one. Using a tool like Adobe Firefly or Google Gemini, try a prompt along the lines of "dark atmospheric bokeh background with falling rain or snow particles." I find that asking for a 4x3 aspect ratio works well for most portraits.

I hope you find this upgraded technique useful for your own retouching. It's a quick way to add a lot of drama and production value to your images without needing any kind of complex setup.

Reality vs Photoshop - Is Faking It Cheating? 🤷‍♂️

Car photography always looks that little bit more dramatic when there's a wet road reflection underneath the vehicle. But what do you do when the road is bone dry? In this guide, I'll walk you through two ways to fake a puddle reflection in Photoshop -- one traditional, one powered by AI -- and then I'll leave you with a question worth thinking about.

Method One: The Manual Approach

Step 1: Select the Car

Start by grabbing the Object Selection tool from the toolbar. In the options bar at the top of the screen, make sure the mode is set to Cloud for the best possible result, then click Select Subject. Photoshop will do a surprisingly good job of selecting the car in just a moment or two.

Step 2: Copy the Car onto Its Own Layer

With your selection active, press Command + J (Mac) or Control + J (Windows) to copy the car up onto a new layer. If you toggle every other layer off, you should see just the isolated car sitting cleanly on a transparent background.

Step 3: Flip It Upside Down

Go to Edit > Transform > Flip Vertical. This flips the car layer to create the basis of your reflection. Now grab the Move tool, hold down Shift (to keep movement perfectly vertical) and drag the flipped car downwards until the tyres of both the original and the reflection are just touching.

If things look slightly off-angle, go to Edit > Free Transform, move your cursor just outside the bounding box until you see the rotation cursor, and give it a gentle nudge until it lines up properly.

Step 4: Add a Black Layer Mask

Rename this layer "Reflection" to keep things tidy. Then, holding down Option (Mac) or Alt (Windows), click the Layer Mask icon at the bottom of the Layers panel. This adds a black mask that hides the layer entirely -- which is exactly what you want for now.

Step 5: Draw the Puddle Shape

Select the Lasso tool and make sure you click directly on the layer mask thumbnail (you should see a white border appear around it, confirming it's active). Now draw a rough, freehand puddle shape beneath the car's tyres -- it doesn't need to be perfect; natural-looking and irregular is actually better here.

Step 6: Fill with White to Reveal the Reflection

Go to Edit > Fill, set the contents to White, and click OK. The reflection will now appear only within the puddle shape you drew.

Step 7: Soften the Edges

Zoom in and you'll notice the puddle edge looks very sharp and unnatural. To fix that, go to Filter > Blur > Gaussian Blur and apply just a small amount -- around 3 pixels is usually enough. This softens the boundary and helps the reflection blend into the ground convincingly.

Finally, you can reduce the opacity of the Reflection layer slightly to make the whole thing look a little more subtle and true to life.

Method Two: Using Adobe Firefly's Generative Fill

If you want a quicker and arguably more realistic result, Photoshop's AI tools can do a remarkable job here.

Step 1: Load the Puddle Selection

Hold Command (Mac) or Control (Windows) and click directly on the layer mask from your first reflection layer. This loads the puddle shape back as an active selection, saving you from having to draw it again.

Step 2: Select the Background Layer

Click on the main image layer, so that Generative Fill works on the background rather than the reflection layer.

Step 3: Run Generative Fill

In the contextual taskbar, click Generative Fill and type a prompt along the lines of: a reflection of the car in a puddle of water. For the AI model, select Firefly (specifically the Firefly Fill and Expand model released in January 2026). If you're on a Creative Cloud Pro account, this won't cost you any credits -- whereas models like Flux or Nano Banana can use anywhere between 20 and 30 credits per generation.

Click Generate.

Step 4: Choose Your Favourite Variation

Firefly will produce three variations for you to compare. Have a look through them and pick the one that looks most convincing. You'll likely notice that the AI does something quite clever: it reflects the sky in the puddle on the far side of the car, just as real water would. Achieving that manually in Photoshop would take considerably more time and effort.

Which Method Should You Use?

For a quick and dirty result, the manual method works well and gives you full control. But for something that genuinely looks like a photograph taken on a wet road, the AI approach is hard to argue with -- particularly because of how naturally it handles the environmental reflections in the water.

A Question Worth Thinking About

Here's something to consider. When photographing that car, there were really two options: bring bottles of water to pour around the car and create a real puddle on the dry road, or add the reflection later in post-production, either manually or with AI.

Both approaches result in a reflection that wasn't originally there. The only difference is when in the process you add it.

So what do you think -- is there a meaningful ethical difference between physically creating something on location and digitally adding it afterwards? When it comes to reflections specifically, does it matter?

Let me know your thoughts in the comments below.

✅ Photoshop JANUARY 2026 - Everything NEW 💥

Adobe dropped Photoshop 27.3.0 on the 27th of January, and for once it's not just AI hype and features nobody asked for. This update brings some genuinely useful stuff that photographers and editors have been requesting for years.

Camera Raw tools finally join the party

The headline features are two new Adjustment Layers (three, really, since Clarity and Dehaze share one): Clarity & Dehaze, and Grain.

If you've ever wanted to use Clarity or Dehaze without opening Camera Raw or converting to a Smart Object, your prayers have been answered. They now work exactly like Curves, Levels or any other adjustment layer. You can mask them, adjust opacity, change blend modes, and they stay fully editable in your PSD.

Clarity is brilliant for adding punch to textures and details in your midtones without blowing out highlights or crushing shadows. Dehaze cuts through atmospheric haze (or adds it if you reverse the slider), and having it as an adjustment layer means you can apply it selectively with a mask.

Grain gets the same treatment. Want to add film-style texture to knock the digital edge off a super-clean file? Chuck a Grain adjustment layer on top, dial it in, and you're done. It's particularly good for black and white work or vintage treatments.

The AI tools are growing up

On the generative side, things have improved quite a bit.

Generative Fill and Generative Expand now output at up to 2K resolution, which means extended canvases and filled areas look far less mushy and hold detail much better. Adobe has also added model selection, so you can pick the Firefly version that best suits what you're doing.

The real game-changer is Reference Image support in Generative Fill. You can now feed Photoshop a reference photo and it'll try to match the lighting, colour and structure when generating new content. This is massive for compositing work or keeping a series of images consistent.

The Remove tool has also been quietly upgraded. It does a much cleaner job removing objects and people, with fewer obvious smears and repetitive patterns. In most cases you'll get a usable result without needing to follow up with Clone Stamp or Healing Brush.

Why this one matters

This isn't a flashy update, but it's the kind that actually changes how you work.

Having Clarity, Dehaze and Grain as proper adjustment layers keeps everything inside Photoshop's layer stack where it belongs. No more jumping between Camera Raw, no more Smart Objects eating up file size, no more destructive edits.

The AI improvements make the generative tools feel less like tech demos and more like something you'd actually use in client work. Higher resolution output and better reference matching mean you can rely on them for real projects, not just Instagram experiments.

If you're on Creative Cloud, the update should already be available. The new adjustment layers live in the standard Adjustments panel alongside everything else. Well worth checking out, especially if you shoot landscapes, architecture or do any kind of composite work.

🙅🏼‍♂️ How to NEVER forget your Photoshop edits again ✅

I have lost count of the times I have finished an edit, loved the result, and then completely forgotten how I actually got there.

In this video, I am showing you a simple trick using the Photoshop History Log and AI to create a perfect, step-by-step record of every single move you make.

No more guessing which filter you used or what that specific slider value was; it’s like having a digital assistant write your editing recipes for you while you work.

What I cover:

✅ How to turn on the hidden History Log in Photoshop.
✅ Exporting your editing steps as a simple text file.
✅ Using a clever AI prompt to turn that messy log into a clear workflow.
✅ Why this is a game-changer for your consistency and learning.
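As an illustration of the "messy log into clear workflow" idea, here is a hypothetical sketch of tidying an exported log with plain code instead of AI. Photoshop's History Log (in "Detailed" mode, saved to a text file) records one action per entry with indented parameter lines beneath it; the exact format varies between versions, so treat the parsing rules and the sample log below as assumptions, not a specification.

```python
def log_to_recipe(log_text):
    """Collapse a history-log dump into numbered top-level steps,
    folding indented parameter lines into the step above them."""
    steps = []
    for line in log_text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        if line[0].isspace() and steps:
            # indented line = a parameter belonging to the previous step
            steps[-1] += f" ({line.strip()})"
        else:
            steps.append(line.strip())
    return [f"{i}. {step}" for i, step in enumerate(steps, 1)]

# Hypothetical sample in the assumed format:
sample = "Open\nGaussian Blur\n\tRadius: 3 pixels\nCurves\n"
for line in log_to_recipe(sample):
    print(line)
# 1. Open
# 2. Gaussian Blur (Radius: 3 pixels)
# 3. Curves
```

Even a rough pass like this turns the raw dump into something readable; the AI prompt in the video does the same job with more tolerance for messy input.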

🎨 Colour Space Conversion Explained 🪚 ProPhoto vs Adobe RGB vs sRGB

This post follows on from my previous article where I explained what colour spaces actually are. If you haven't read that yet, you can check it out here: Colour Spaces Simplified.

If you have read that one, you know that ProPhoto RGB is a massive container of colours, Adobe RGB sits in the middle as a wide-gamut space popular for printing and high-end displays, and sRGB is a smaller container that became the standard default for monitors, operating systems, and the web. But knowing what they are is only half the battle. The real magic, and the potential disaster, happens when we move an image from one container to another. This process is called Colour Space Conversion.

If you don't understand what happens during this conversion, you are gambling with the final look of your images, so let's look under the bonnet at the mechanics of moving colour.

The Core Problem: The Definition of "Red" Changes

To understand conversion, you need to grasp one slightly technical concept: Pixels are just numbers. Every pixel in your digital photo is defined by three numbers: Red, Green, and Blue (RGB). In an 8-bit image, these numbers run from 0 to 255.

If a pixel is pure, maximum green, its value is R:0, G:255, B:0. Here is the mind-bending part: a pixel valued at G:255 in ProPhoto RGB, Adobe RGB, and sRGB represents three different actual colours, even though the numbers are the same. (Green is the clearest example; red behaves differently, because sRGB and Adobe RGB actually share the same red primary.)

ProPhoto's R:0, G:255, B:0 is an extremely saturated, incredibly intense green, defined at (and in fact slightly beyond) the edge of what human vision can see, and on a real device it will be mapped to the most saturated green your display or printer can show. Adobe RGB's R:0, G:255, B:0 is still a very deep green, noticeably richer than sRGB's, and designed to sit well with high-quality inkjet and press gamuts. sRGB's R:0, G:255, B:0 is the bright green of a standard monitor: still vivid, but nowhere near as intense as the ProPhoto or Adobe RGB versions.
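The claim that identical pixel numbers mean different real colours can be checked with a few lines of code. This sketch uses the published RGB-to-XYZ conversion matrices (D65 white point) for sRGB and Adobe RGB and reduces a maximum-green pixel to its CIE xy chromaticity; green is the clearest channel to compare, since those two spaces genuinely differ there (their red primaries coincide). ProPhoto is left out because it is defined against a D50 white point, which would need an extra adaptation step.

```python
# Same pixel numbers, different actual colours: convert "maximum green"
# to CIE xy chromaticity in two RGB spaces.

MATRICES = {
    # Standard RGB -> XYZ matrices (D65); columns are the R, G, B
    # primaries expressed in XYZ.
    "sRGB": [
        [0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],
        [0.0193, 0.1192, 0.9505],
    ],
    "Adobe RGB": [
        [0.5767, 0.1856, 0.1882],
        [0.2974, 0.6273, 0.0753],
        [0.0270, 0.0707, 0.9911],
    ],
}

def max_green_chromaticity(space):
    """CIE xy chromaticity of the (0, 255, 0) pixel in the given space."""
    m = MATRICES[space]
    # Linear RGB for maximum green is (0, 1, 0), so its XYZ is simply
    # the middle column of the matrix.
    X, Y, Z = m[0][1], m[1][1], m[2][1]
    total = X + Y + Z
    return round(X / total, 3), round(Y / total, 3)

for space in MATRICES:
    print(space, max_green_chromaticity(space))
```

The two spaces report different chromaticities for the same pixel values: roughly (0.30, 0.60) for sRGB's green versus (0.21, 0.71) for Adobe RGB's, the latter being markedly more saturated.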

When you convert an image, you aren't just moving pixels around; you are fundamentally translating their meaning.

The Analogy: Moving from a Mansion to a Studio Flat

Think of ProPhoto RGB as a giant mansion. You have massive rooms and enormous furniture: a grand piano, huge chandeliers, and king-sized beds.

Think of Adobe RGB as a generous three-bedroom house: much more space than a studio, plenty of room for big pieces, but not quite the sprawling scale of the mansion. Think of sRGB as a small studio flat: functional and cosy, but with very strict space limits.

The Conversion Process is moving day. You have to fit everything from the mansion into the house, or all the way down into the studio flat.

Many items fit easily. Your normal clothes, books, and kitchen plates (these represent the standard skin tones, sky blues, and foliage greens in your photo) fit into all three spaces without issue; they exist comfortably in ProPhoto, Adobe RGB, and sRGB.

The "Out of Gamut" Problem

The problem arises when you try to move the grand piano (highly saturated colours, like a vibrant sunset orange or a neon flower petal). The piano is "out of gamut". It might just squeeze into the Adobe RGB house but still refuse to fit through the door of the sRGB studio flat, or it may already be too big even for Adobe RGB.

You now face a critical choice on how to handle that piano. This choice is what we call Rendering Intent.

The Solution: How We Fit the Piano

When you convert colours in Photoshop (via Edit > Convert to Profile), you are telling the software how to fit the furniture. You might be going from ProPhoto to Adobe RGB for print prep, or straight from ProPhoto/Adobe RGB down to sRGB for the web. You generally have two choices for photography:

Choice 1: Relative Colourimetric (The "Saw" Method)

This method prioritises accuracy for the colours that do fit.

What it does: It looks at the grand piano, realises it won't fit, and saws off the legs until it does.

In Photo Terms: This is called Clipping. The software takes any colour that is too saturated for the destination space (whether that's Adobe RGB or sRGB) and maps it to the closest colour at the edge of what that space can display, which can flatten subtle gradations in those brightest, most saturated areas.

The Good: All your normal colours (skin tones, etc.) that fall inside the destination space remain essentially identical to the original.

The Bad: You can lose detail in highly saturated highlights where colours are pushed beyond that space and get clipped.

Choice 2: Perceptual (The "Shrink Ray" Method)

This method prioritises the relationship between colours.

What it does: It uses a sci-fi shrink ray on all your furniture just enough so that the grand piano fits through the door.

In Photo Terms: To make room for the highly saturated colours, it slightly compresses the entire colour range of the image so that out-of-gamut colours are brought inside Adobe RGB or sRGB with smoother transitions.

The Good: You keep smoother detail and gradation in your bright sunsets and flowers; the piano fits whole, and the relationships between colours tend to look natural.

The Bad: Your entire image might look slightly less punchy or saturated than the original because everything got shrunk a little. How strong this effect is depends on the specific profiles involved.

In many simple RGB-to-RGB conversions (for example, between Adobe RGB and sRGB), perceptual and relative colourimetric may look very similar, but the intent choice becomes especially important when mapping from a wide space like ProPhoto or Adobe RGB into printer profiles with more complex colour ranges.
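To make the saw-versus-shrink-ray distinction concrete, here is a deliberately simplified sketch. It treats the destination gamut as the 0.0-1.0 range of a linear RGB channel; real ICC conversions work in perceptual colour models with profile-specific curves, so this only illustrates the clip-versus-compress idea, not actual profile maths.

```python
def relative_colorimetric(pixel):
    """'Saw' method: clip each out-of-gamut channel to the gamut edge.
    In-gamut pixels are untouched; out-of-gamut detail is flattened."""
    return [min(max(c, 0.0), 1.0) for c in pixel]

def perceptual(pixels):
    """'Shrink ray' method: scale the whole image down just enough that
    the most saturated pixel fits, preserving colour relationships."""
    peak = max(max(p) for p in pixels)
    scale = 1.0 / peak if peak > 1.0 else 1.0
    return [[round(c * scale, 3) for c in p] for p in pixels]

out_of_gamut = [1.25, 0.0, 0.1]   # a "grand piano" colour
skin_tone    = [0.8, 0.6, 0.5]    # fits comfortably already

print(relative_colorimetric(out_of_gamut))    # clipped to the edge
print(relative_colorimetric(skin_tone))       # left exactly as it was
print(perceptual([out_of_gamut, skin_tone]))  # everything scaled down a little
```

Notice the trade-off the article describes: the clipping version leaves the skin tone perfectly intact but discards everything above the gamut edge, while the scaling version keeps the piano whole at the cost of slightly dimming every pixel in the image.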

Why You Must Control This (Don't let the browser decide!)

This is the crucial takeaway.

If you upload an Adobe RGB or ProPhoto image directly to the web, you are relying on the browser, operating system, and device to handle that wide-colour file correctly, and that is risky. Many systems expect sRGB, some sites strip embedded profiles, and some viewers may ignore or mishandle wide-gamut profiles, especially if the file is untagged or metadata has been removed. The result can be incorrect colour or harsh clipping and posterisation in saturated areas, or simply dull, wrong-looking colour where Adobe RGB or ProPhoto numbers are interpreted as if they were sRGB.

By doing the conversion yourself in Photoshop or Lightroom before you export, you get to choose. You can convert from ProPhoto or Adobe RGB down to sRGB, select relative or perceptual rendering intent, and preview the result rather than leaving those decisions to whatever defaults the viewer's browser and device happen to use.

Is the image mostly portraits with normal colours? The "Saw" method (Relative Colourimetric) to sRGB or Adobe RGB might be perfect, because it keeps all in-range colours very accurate and most portrait colours already fall comfortably inside those spaces.

Is it a vibrant landscape with intense colours that really push ProPhoto or Adobe RGB? You probably need the "Shrink Ray" (Perceptual) to save the smooth detail in those saturated areas.

You are the Artist. You should decide how your furniture gets moved, not the moving company.

🎨 Colour Spaces Simplified: A Practical Guide

Choosing the right colour space can feel like a bit of a headache, especially when you just want to get on with your work and make things look great. It is one of those technical topics that often gets over-complicated with jargon, but it really comes down to how much colour your file can hold and where that file is eventually going to live.

Big picture: colour spaces

Think of a colour space like a box of crayons. Some boxes have the basic 8 colours, while others have 128, and each digital colour space is just a different "box" with its own range of possible colours (gamut) inside. For common RGB spaces like sRGB, Adobe RGB, Display P3, and Rec. 709, that gamut is usually shown as a triangle sitting inside the horseshoe-shaped map of all colours the human eye can see.

sRGB: the universal baseline

Created in the mid-1990s by HP and Microsoft, sRGB was designed as a standard colour space that typical monitors, printers, operating systems, and web browsers could all assume by default. If you are posting a photo to Instagram, a blog, or sending it to a standard consumer lab for prints, sRGB is the safest choice because it is the "lowest common denominator" most devices expect.

Use case: Web, social media, and general consumer printing where you cannot control colour management. sRGB gives predictable, consistent colour on the widest range of devices.

Limitation: sRGB is a relatively small "box of crayons," especially in saturated greens and cyans, so it cannot represent all the rich colours modern cameras can capture.

Adobe RGB (1998): print-oriented wider gamut

Adobe RGB (1998) was developed by Adobe to cover a wider gamut than sRGB, with more reach into greens and some cyans, and to better match the range achievable by high-quality CMYK printing processes. On a gamut diagram you can see Adobe RGB extending further towards the green corner than sRGB, which is particularly useful for subjects like foliage, seascapes, and some print workflows.

Use case: Professional printing and high-end photography workflows where files will go to colour-managed printers or presses that can exploit the wider gamut.

Limitation: If you upload an Adobe RGB image to a non-colour-managed website, the browser often treats it as sRGB, which makes it look dull and washed out because the extra gamut is compressed incorrectly.

ProPhoto RGB: extremely wide editing space

ProPhoto RGB (also known as ROMM RGB) is a very large-gamut colour space developed by Kodak, designed to include almost all real-world surface colours and even some mathematically defined "imaginary" colours that lie just outside the human-visible locus. Because its gamut is so wide, it comfortably contains colours that fall outside both sRGB and Adobe RGB, which can occur in highly saturated parts of modern digital captures.

When you shoot RAW, the camera records sensor data that is not yet in any RGB colour space; the RAW developer chooses a working space for editing. Applications like Lightroom use an internal working space (often described as MelissaRGB or a linear ProPhoto variant) that shares ProPhoto's primaries, giving you a ProPhoto-sized gamut while you make adjustments.

Use case: As a working or internal space for developing high-quality RAW files, where a very wide gamut helps avoid clipping intense colours during heavy editing.

Limitation: ProPhoto is so large that using it in 8-bit can cause banding in gradients, so it should be paired with 16-bit editing to maintain smooth transitions. It is also a poor choice as a delivery space for general viewing or the web, because most devices and browsers either do not handle it correctly or cannot display its gamut, leading to flat or strange colour; final exports for sharing are usually converted to sRGB or at most Adobe RGB/P3.

Using a ProPhoto-based space during editing gives you room to "hold" all the colour the RAW data can produce, but the RAW itself is not stored "in" ProPhoto.
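The banding point can be seen numerically: a subtle gradient only passes through a handful of the 256 available codes per channel, and in a huge space like ProPhoto each of those codes represents a bigger visual jump, so the steps show. A toy sketch (the 5% gradient range is an arbitrary illustration, not a measurement):

```python
def distinct_levels(lo, hi, bits, samples=1000):
    """Count how many distinct quantised codes a smooth gradient
    between lo and hi (fractions of full scale) actually lands on."""
    levels = (1 << bits) - 1
    codes = {round((lo + (hi - lo) * i / samples) * levels)
             for i in range(samples + 1)}
    return len(codes)

# A subtle sky gradient spanning 5% of the channel range:
print(distinct_levels(0.50, 0.55, bits=8))   # only ~13 codes -> visible steps
print(distinct_levels(0.50, 0.55, bits=16))  # ~1000 codes -> smooth
```

In 8-bit, that gentle gradient collapses onto about a dozen discrete values; in 16-bit, essentially every sample gets its own code, which is why pairing ProPhoto with 16-bit editing keeps transitions smooth.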

What about Display P3?

If you use an iPhone, a Mac, or a recent high-end monitor, you have probably seen Display P3 mentioned. It is a modern wide-gamut colour space, built from the cinema P3 primaries but adapted to the D65 white point and an sRGB-style tone curve used on typical computer displays.

To understand it, it helps to start with DCI-P3. That is the "box of crayons" designed for digital cinema projectors in movie theatres, with a gamma around 2.6 and a slightly green-tinted white balanced for xenon-lamp projection. Its gamut reaches significantly further than sRGB in reds and greens, which is one reason properly graded movies can look so saturated and "punchy" on the big screen.

Display P3 is essentially a more desktop-friendly variant of that cinema colour. It uses the same P3 primaries, but adopts the D65 white point shared by sRGB and Adobe RGB, and an sRGB-like transfer curve (roughly gamma 2.2), making it better suited to normal monitor and device viewing.
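For reference, that "sRGB-like transfer curve" is not a pure power law: near black it is a straight linear segment, and above a small threshold it follows a 2.4-exponent curve that overall approximates gamma 2.2. A minimal sketch of the standard sRGB decoding function (per IEC 61966-2-1):

```python
def srgb_to_linear(v):
    """Decode an sRGB-encoded channel value (0.0 to 1.0) to linear light.
    Piecewise: linear segment near black, then a 2.4-power segment;
    the combined curve approximates a simple gamma of 2.2."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(0.5))  # ~0.214, close to 0.5 ** 2.2 (~0.218)
```

Display P3 reuses this same transfer curve, which is a big part of why it behaves predictably alongside sRGB content on computer displays, unlike cinema DCI-P3 with its 2.6 gamma.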

How it compares to Adobe RGB

Adobe RGB and Display P3 are both wide-gamut spaces of similar overall volume, but with different shapes.

  • Adobe RGB reaches further into deep greens and blues, which is why it has long been favoured for print workflows where those hues matter and where printers and papers can take advantage of that gamut

  • Display P3 pushes more into richly saturated oranges and reds, while not extending quite as far as Adobe RGB in some green-blue regions

Use case: If you are creating content primarily for modern wide-gamut smartphones, tablets, and laptops that support Display P3 and are properly colour-managed, working in Display P3 lets you use colours that go beyond sRGB, so images can look more lifelike and vibrant on those devices. On older or strictly sRGB-only screens, though, those extra colours are either mapped back into sRGB or clipped, so the advantage largely disappears.

Which one should you use?

A simple, robust way to stay sane is to separate "editing space" from "delivery space." During RAW editing, using a very wide-gamut space like ProPhoto (or Lightroom's ProPhoto-based internal space) in 16-bit keeps as much colour information as possible while you make adjustments. When you are finished and ready to share or upload, you convert a copy of that master to sRGB (or to Adobe RGB/P3 if you are targeting a fully colour-managed, wide-gamut environment), so it looks correct on most people's devices.

This approach gives you a master file that preserves the widest feasible gamut for future prints or re-edits, plus final exports tailored to where the image will actually live (web, print, or video) without sacrificing consistency for your viewers.

Creating a print master in Adobe RGB

When it's time to take an image off the screen and put it onto paper, I often convert my files to Adobe RGB as a dedicated "print master." It might seem like an extra step, but there is a very practical reason for it: it gives the print system more of the colours that high-quality printers can actually reproduce, especially beyond plain sRGB.

Matching what the printer can really do

Many modern high-quality inkjet and lab printers can reproduce certain colours (particularly some vibrant cyans, deep blues, and rich greens) that extend outside the sRGB gamut. If a scene or RAW file contains those more saturated hues, converting everything into sRGB first can compress or clip them before the printer even gets a chance to do its job, so the print may not show all the nuance that was originally captured.

By keeping the editing in a wide space (like ProPhoto RGB or Lightroom's internal MelissaRGB space, which uses ProPhoto-based primaries) and then creating a print file in Adobe RGB, the file can still describe many of those "extra" printable colours that sRGB would squeeze out.

In real-world terms, this often shows up as:

  • More believable foliage

  • Subtler turquoise water

  • More faithful fabric tones when the printer and paper are capable of that gamut

Bridging the gap to CMYK

The ink in a printer behaves very differently from the light on your screen: monitors work in RGB (Red, Green, Blue), while printers work in CMYK (Cyan, Magenta, Yellow, Black) or multi-ink variants. A printer's CMYK gamut has a lumpy, irregular shape. There are regions, especially in certain blue-green areas, where it stretches outside sRGB, and other regions (like very bright, saturated oranges and yellows) where it is actually smaller than sRGB.
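As a rough illustration of how differently the two models work, here is the textbook "naive" RGB-to-CMYK conversion in Python. Real print workflows never use this formula directly (they go through ICC profiles that model actual ink and paper behaviour), but it shows how the black (K) channel is derived from the RGB values and why the two spaces do not map one-to-one:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK conversion (all values in 0..1).
    K absorbs the darkest common component; C, M, Y carry the rest."""
    k = 1.0 - max(r, g, b)
    if k == 1.0:  # pure black: avoid division by zero
        return (0.0, 0.0, 0.0, 1.0)
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return (c, m, y, k)

print(rgb_to_cmyk(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(0.0, 0.0, 0.0))  # black    -> (0.0, 0.0, 0.0, 1.0)
```

Notice that the maths assumes idealised inks; the "lumpy" gamut of a real printer comes from the physical inks and paper, which is exactly what ICC profiles exist to describe.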

Adobe RGB was designed to better encompass typical CMYK print gamuts, so it overlaps much more closely with the colours high-quality printing systems can produce. It does not literally "cover every possible CMYK colour," but it does include most of the printable colours that sit outside sRGB, which means you are less likely to be "leaving colour on the table" when you hand a file to a good, colour-managed print workflow.

How this fits into a print workflow

  • Edit in a very wide-gamut space (e.g., ProPhoto RGB or Lightroom's MelissaRGB internal space) to keep as much colour information from the RAW as possible while you do the heavy lifting

  • Create a print master in Adobe RGB once the edit is finished, so the file aligns better with what many high-end printers and papers can show than sRGB does

  • Match the lab's requirements, since some pro and fine-art labs prefer Adobe RGB (or accept ProPhoto), while many consumer or high-street labs still expect sRGB only

The bottom line

Ultimately, it is all about making sure the final physical print gets as close to your vision as the printer and paper combination allows. Using a very restricted colour space for a high-end print setup is a bit like buying a sports car and never taking it out of second gear: it will still move, but you will never see what it is truly capable of.

How to Create a Photo Book using Blurb by Judy Lindo

Recently in The Photography Creative Circle Community, member Judy Lindo hosted a fabulous LIVE session where she showed step by step how to create a photo book using Blurb and Lightroom Classic.

Judy, who has experience as an editor for a photography group called the "City Clickers," guided attendees through the process, covering essential steps like planning, photo selection, book layout, and sequencing images for optimal narrative and visual flow.

Discussions included the technical aspects of using Blurb's interface, including book settings, page layouts, text options, and background customization, and practical advice on cost-saving formats like the Zine, as well as managing the book's print-on-demand nature and marketing options.

✅ Check out the audio overview below …

To check out the full 1½-hour video recording of the session, jump into The Photography Creative Circle Community on SKOOL with the 7 Day Free Trial …