
🎨 Colour Space Conversion Explained 🪚 ProPhoto vs Adobe RGB vs sRGB

This post follows on from my previous article where I explained what colour spaces actually are. If you haven't read that yet, you can check it out here: Colour Spaces Simplified.

If you have read that one, you know that ProPhoto RGB is a massive container of colours, Adobe RGB sits in the middle as a wide-gamut space popular for printing and high-end displays, and sRGB is a smaller container that became the standard default for monitors, operating systems, and the web. But knowing what they are is only half the battle. The real magic, and the potential disaster, happens when we move an image from one container to another. This process is called Colour Space Conversion.

If you don't understand what happens during this conversion, you are gambling with the final look of your images, so let's look under the bonnet at the mechanics of moving colour.

The Core Problem: The Definition of "Red" Changes

To understand conversion, you need to grasp one slightly technical concept: Pixels are just numbers. Every pixel in your digital photo is defined by three numbers: Red, Green, and Blue (RGB). In an 8-bit image, these numbers run from 0 to 255.

If a pixel is pure, maximum red, its value is R:255, G:0, B:0. Here is the mind-bending part: A pixel valued at R:255 in ProPhoto RGB, Adobe RGB, and sRGB represents three different actual colours, even though the numbers are the same.

ProPhoto's R:255, G:0, B:0 is an extremely saturated, incredibly intense deep red, defined right at the long-wavelength edge of what human vision can see, and on a real device it will be mapped to the most saturated red your display or printer can show. sRGB's R:255, G:0, B:0 is the red of a standard fizzy drink can: still bright, but nowhere near as intense as the ProPhoto version. (One wrinkle: Adobe RGB actually shares sRGB's red primary, so its maximum red matches sRGB's. Where Adobe RGB pulls ahead is in greens and cyans: its G:255 describes a far more saturated green than anything sRGB can hold.)
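This "same numbers, different colours" idea can be checked with a few lines of arithmetic. Here is a minimal sketch comparing the green primaries of sRGB and Adobe RGB, the channel where the two spaces differ most. The numbers are the green-primary columns of the standard published linear-RGB-to-XYZ matrices (D65 white point); the chromaticity coordinates they produce tell you where each "maximum green" sits on the horseshoe diagram:

```python
# Chromaticity of "maximum green" (R:0, G:255, B:0) in sRGB vs Adobe RGB.
# For a pure primary, only that primary's matrix column is needed.

def chromaticity(X, Y, Z):
    """xy chromaticity coordinates of an XYZ colour."""
    s = X + Y + Z
    return round(X / s, 2), round(Y / s, 2)

# Green-primary columns of the linear RGB -> XYZ conversion matrices (D65)
SRGB_GREEN     = (0.3576, 0.7152, 0.1192)
ADOBERGB_GREEN = (0.1856, 0.6274, 0.0707)

print(chromaticity(*SRGB_GREEN))      # (0.3, 0.6)   - sRGB's green primary
print(chromaticity(*ADOBERGB_GREEN))  # (0.21, 0.71) - a much more saturated green
```

Same pixel value, two different points on the chromaticity map: that gap is exactly what a conversion has to translate.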

When you convert an image, you aren't just moving pixels around; you are fundamentally translating their meaning.

The Analogy: Moving from a Mansion to a Studio Flat

Think of ProPhoto RGB as a giant mansion. You have massive rooms and enormous furniture: a grand piano, huge chandeliers, and king-sized beds.

Think of Adobe RGB as a generous three-bedroom house: much more space than a studio, plenty of room for big pieces, but not quite the sprawling scale of the mansion. Think of sRGB as a small studio flat. It's functional and cosy, but it has very strict space limits.

The Conversion Process is moving day. You have to fit everything from the mansion into the house, or all the way down into the studio flat.

Many items fit easily. Your normal clothes, books, and kitchen plates (these represent the standard skin tones, sky blues, and foliage greens in your photo) fit into all three spaces without issue; they exist comfortably in ProPhoto, Adobe RGB, and sRGB.

The "Out of Gamut" Problem

The problem arises when you try to move the grand piano (highly saturated colours, like a vibrant sunset orange or a neon flower petal). The piano is "out of gamut". It might just squeeze into the Adobe RGB house but still refuse to fit through the door of the sRGB studio flat, or it may already be too big even for Adobe RGB.

You now face a critical choice on how to handle that piano. This choice is what we call Rendering Intent.

The Solution: How We Fit the Piano

When you convert colours in Photoshop (via Edit > Convert to Profile), you are telling the software how to fit the furniture. You might be going from ProPhoto to Adobe RGB for print prep, or straight from ProPhoto/Adobe RGB down to sRGB for the web. You generally have two choices for photography:

Choice 1: Relative Colourimetric (The "Saw" Method)

This method prioritises accuracy for the colours that do fit.

What it does: It looks at the grand piano, realises it won't fit, and saws off the legs until it does. In Photo Terms: This is called Clipping. The software takes any colour that is too saturated for the destination space (whether that's Adobe RGB or sRGB) and maps it to the closest colour at the edge of what that space can display, which can flatten subtle gradations in those brightest, most saturated areas.

The Good: All your normal colours (skin tones, etc.) that fall inside the destination space remain essentially identical to the original.

The Bad: You can lose detail in highly saturated highlights where colours are pushed beyond that space and get clipped.

Choice 2: Perceptual (The "Shrink Ray" Method)

This method prioritises the relationship between colours.

What it does: It uses a sci-fi shrink ray on all your furniture just enough so that the grand piano fits through the door. In Photo Terms: To make room for the highly saturated colours, it slightly compresses the entire colour range of the image so that out-of-gamut colours are brought inside Adobe RGB or sRGB with smoother transitions.

The Good: You keep smoother detail and gradation in your bright sunsets and flowers; the piano fits whole, and the relationships between colours tend to look natural.

The Bad: Your entire image might look slightly less punchy or saturated than the original because everything got shrunk a little. How strong this effect is depends on the specific profiles involved.

In many simple RGB-to-RGB conversions (for example, between Adobe RGB and sRGB), perceptual and relative colourimetric may look very similar, but the intent choice becomes especially important when mapping from a wide space like ProPhoto or Adobe RGB into printer profiles with more complex colour ranges.
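The difference between the two intents can be caricatured in a few lines of code. This is emphatically not how ICC profiles do the real gamut mapping (which happens in three dimensions with perceptual colour metrics); it is a one-dimensional toy where "in gamut" means a channel value between 0 and 1:

```python
# Toy illustration of the two rendering intents (not real ICC gamut mapping):
# values in [0, 1] are "in gamut"; anything above 1.0 is "out of gamut".

def relative_clip(values):
    """'Saw' method: out-of-range values are clamped; in-range values untouched."""
    return [min(v, 1.0) for v in values]

def perceptual_scale(values):
    """'Shrink ray' method: everything is scaled so the largest value just fits."""
    peak = max(values)
    if peak <= 1.0:
        return list(values)          # already fits, nothing to do
    return [round(v / peak, 3) for v in values]

colours = [0.5, 0.9, 1.2]            # the 1.2 is the "grand piano"
print(relative_clip(colours))        # [0.5, 0.9, 1.0]   - in-gamut colours kept exact
print(perceptual_scale(colours))     # [0.417, 0.75, 1.0] - everything shifted slightly
```

Notice the trade-off in the output: clipping keeps 0.5 and 0.9 perfectly accurate but flattens everything above 1.0 onto the same value, while scaling preserves the relationships at the cost of nudging every colour.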

Why You Must Control This (Don't let the browser decide!)

This is the crucial takeaway.

If you upload an Adobe RGB or ProPhoto image directly to the web, you are relying on the browser, operating system, and device to handle that wide-colour file correctly, and that is risky. Many systems expect sRGB, some sites strip embedded profiles, and some viewers may ignore or mishandle wide-gamut profiles, especially if the file is untagged or metadata has been removed. The result can be incorrect colour or harsh clipping and posterisation in saturated areas, or simply dull, wrong-looking colour where Adobe RGB or ProPhoto numbers are interpreted as if they were sRGB.

By doing the conversion yourself in Photoshop or Lightroom before you export, you get to choose. You can convert from ProPhoto or Adobe RGB down to sRGB, select relative or perceptual rendering intent, and preview the result rather than leaving those decisions to whatever defaults the viewer's browser and device happen to use.

Is the image mostly portraits with normal colours? The "Saw" method (Relative Colourimetric) to sRGB or Adobe RGB might be perfect, because it keeps all in-range colours very accurate and most portrait colours already fall comfortably inside those spaces. Is it a vibrant landscape with intense colours that really push ProPhoto or Adobe RGB? You probably need the "Shrink Ray" (Perceptual) to save the smooth detail in those saturated areas.

You are the Artist. You should decide how your furniture gets moved, not the moving company.

🎨 Colour Spaces Simplified: A Practical Guide

Choosing the right colour space can feel like a bit of a headache, especially when you just want to get on with your work and make things look great. It is one of those technical topics that often gets over-complicated with jargon, but it really comes down to how much colour your file can hold and where that file is eventually going to live.

Big picture: colour spaces

Think of a colour space like a box of crayons. Some boxes have the basic 8 colours, while others have 128, and each digital colour space is just a different "box" with its own range of possible colours (gamut) inside. For common RGB spaces like sRGB, Adobe RGB, Display P3, and Rec. 709, that gamut is usually shown as a triangle sitting inside the horseshoe-shaped map of all colours the human eye can see.

sRGB: the universal baseline

Created in the mid-1990s by HP and Microsoft, sRGB was designed as a standard colour space that typical monitors, printers, operating systems, and web browsers could all assume by default. If you are posting a photo to Instagram, a blog, or sending it to a standard consumer lab for prints, sRGB is the safest choice because it is the "lowest common denominator" most devices expect.

Use case: Web, social media, and general consumer printing where you cannot control colour management. sRGB gives predictable, consistent colour on the widest range of devices.

Limitation: sRGB is a relatively small "box of crayons," especially in saturated greens and cyans, so it cannot represent all the rich colours modern cameras can capture.

Adobe RGB (1998): print-oriented wider gamut

Adobe RGB (1998) was developed by Adobe to cover a wider gamut than sRGB, with more reach into greens and some cyans, and to better match the range achievable by high-quality CMYK printing processes. On a gamut diagram you can see Adobe RGB extending further towards the green corner than sRGB, which is particularly useful for subjects like foliage, seascapes, and some print workflows.

Use case: Professional printing and high-end photography workflows where files will go to colour-managed printers or presses that can exploit the wider gamut.

Limitation: If you upload an Adobe RGB image to a non-colour-managed website, the browser often treats it as sRGB, which makes it look dull and washed out because the extra gamut is compressed incorrectly.

ProPhoto RGB: extremely wide editing space

ProPhoto RGB (also known as ROMM RGB) is a very large-gamut colour space developed by Kodak, designed to include almost all real-world surface colours and even some mathematically defined "imaginary" colours that lie just outside the human-visible locus. Because its gamut is so wide, it comfortably contains colours that fall outside both sRGB and Adobe RGB, which can occur in highly saturated parts of modern digital captures.

When you shoot RAW, the camera records sensor data that is not yet in any RGB colour space; the RAW developer chooses a working space for editing. Applications like Lightroom use an internal working space (often described as MelissaRGB or a linear ProPhoto variant) that shares ProPhoto's primaries, giving you a ProPhoto-sized gamut while you make adjustments.

Use case: As a working or internal space for developing high-quality RAW files, where a very wide gamut helps avoid clipping intense colours during heavy editing.

Limitation: ProPhoto is so large that using it in 8-bit can cause banding in gradients, so it should be paired with 16-bit editing to maintain smooth transitions. It is also a poor choice as a delivery space for general viewing or the web, because most devices and browsers either do not handle it correctly or cannot display its gamut, leading to flat or strange colour; final exports for sharing are usually converted to sRGB or at most Adobe RGB/P3.
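The banding argument is really just arithmetic about step sizes. Bit depth fixes how many distinct levels each channel has, regardless of how wide a range of real-world colour those levels must span, so the same 256 steps stretched over ProPhoto's enormous gamut each cover a much bigger visual jump than they would in sRGB. A quick sketch (using the fact that Photoshop's "16-bit" mode actually stores 32769 levels, 0 to 32768):

```python
# Why wide gamut + 8-bit risks banding: bit depth caps the number of
# distinct levels per channel no matter how wide the gamut is.
levels_8bit = 2 ** 8         # 256 levels per channel
levels_16bit = 2 ** 15 + 1   # 32769 levels: Photoshop's "16-bit" mode (0-32768)

# Stretch the same 256 levels over a much wider gamut and each step covers
# far more visible colour difference, so smooth gradients start to "band".
print(levels_16bit // levels_8bit)  # 128 -> 16-bit steps are ~128x finer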

Using a ProPhoto-based space during editing gives you room to "hold" all the colour the RAW data can produce, but the RAW itself is not stored "in" ProPhoto.

What about Display P3?

If you use an iPhone, a Mac, or a recent high-end monitor, you have probably seen Display P3 mentioned. It is a modern wide-gamut colour space, built from the cinema P3 primaries but adapted to the D65 white point and an sRGB-style tone curve used on typical computer displays.

To understand it, it helps to start with DCI-P3. That is the "box of crayons" designed for digital cinema projectors in movie theatres, with a gamma around 2.6 and a slightly green-tinted white balanced for xenon-lamp projection. Its gamut reaches significantly further than sRGB in reds and greens, which is one reason properly graded movies can look so saturated and "punchy" on the big screen.

Display P3 is essentially a more desktop-friendly variant of that cinema colour. It uses the same P3 primaries, but adopts the D65 white point shared by sRGB and Adobe RGB, and an sRGB-like transfer curve (roughly gamma 2.2), making it better suited to normal monitor and device viewing.
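That "sRGB-like transfer curve" is worth a closer look, because it is not a plain power law: the standard sRGB encoding is piecewise, with a short linear segment near black and a 2.4-exponent power segment above it, which together approximate an overall gamma of about 2.2. A minimal sketch of the standard formula:

```python
def srgb_encode(linear: float) -> float:
    """Standard sRGB encoding: linear light (0-1) -> encoded value (0-1)."""
    if linear <= 0.0031308:
        return 12.92 * linear                    # linear segment near black
    return 1.055 * linear ** (1 / 2.4) - 0.055   # power segment elsewhere

print(round(srgb_encode(0.5), 3))  # 0.735 - mid-grey light encodes well above 0.5
```

Display P3 reuses this same curve with the P3 primaries, which is precisely what makes it desktop-friendly compared with cinema DCI-P3's flat gamma 2.6.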

How it compares to Adobe RGB

Adobe RGB and Display P3 are both wide-gamut spaces of similar overall volume, but with different shapes.

  • Adobe RGB reaches further into deep greens and blues, which is why it has long been favoured for print workflows where those hues matter and where printers and papers can take advantage of that gamut

  • Display P3 pushes more into richly saturated oranges and reds, while not extending quite as far as Adobe RGB in some green-blue regions

Use case: If you are creating content primarily for modern wide-gamut smartphones, tablets, and laptops that support Display P3 and are properly colour-managed, working in Display P3 lets you use colours that go beyond sRGB, so images can look more lifelike and vibrant on those devices. On older or strictly sRGB-only screens, though, those extra colours are either mapped back into sRGB or clipped, so the advantage largely disappears.

Which one should you use?

A simple, robust way to stay sane is to separate "editing space" from "delivery space." During RAW editing, using a very wide-gamut space like ProPhoto (or Lightroom's ProPhoto-based internal space) in 16-bit keeps as much colour information as possible while you make adjustments. When you are finished and ready to share or upload, you convert a copy of that master to sRGB (or to Adobe RGB/P3 if you are targeting a fully colour-managed, wide-gamut environment), so it looks correct on most people's devices.

This approach gives you a master file that preserves the widest feasible gamut for future prints or re-edits, plus final exports tailored to where the image will actually live (web, print, or video) without sacrificing consistency for your viewers.

Creating a print master in Adobe RGB

When it's time to take an image off the screen and put it onto paper, I often convert my files to Adobe RGB as a dedicated "print master." It might seem like an extra step, but there is a very practical reason for it: it gives the print system more of the colours that high-quality printers can actually reproduce, especially beyond plain sRGB.

Matching what the printer can really do

Many modern high-quality inkjet and lab printers can reproduce certain colours (particularly some vibrant cyans, deep blues, and rich greens) that extend outside the sRGB gamut. If a scene or RAW file contains those more saturated hues, converting everything into sRGB first can compress or clip them before the printer even gets a chance to do its job, so the print may not show all the nuance that was originally captured.

By keeping the editing in a wide space (like ProPhoto RGB or Lightroom's internal MelissaRGB space, which uses ProPhoto-based primaries) and then creating a print file in Adobe RGB, the file can still describe many of those "extra" printable colours that sRGB would squeeze in.

In real-world terms, this often shows up as:

  • More believable foliage

  • Subtler turquoise water

  • More faithful fabric tones when the printer and paper are capable of that gamut

Bridging the gap to CMYK

The ink in a printer behaves very differently from the light on your screen: monitors work in RGB (Red, Green, Blue), while printers work in CMYK (Cyan, Magenta, Yellow, Black) or multi-ink variants. A printer's CMYK gamut has a lumpy, irregular shape. There are regions, especially in certain blue-green areas, where it stretches outside sRGB, and other regions (like very bright, saturated oranges and yellows) where it is actually smaller than sRGB.

Adobe RGB was designed to better encompass typical CMYK print gamuts, so it overlaps much more closely with the colours high-quality printing systems can produce. It does not literally "cover every possible CMYK colour," but it does include most of the printable colours that sit outside sRGB, which means you are less likely to be "leaving colour on the table" when you hand a file to a good, colour-managed print workflow.

How this fits into a print workflow

  • Edit in a very wide-gamut space (e.g., ProPhoto RGB or Lightroom's MelissaRGB internal space) to keep as much colour information from the RAW as possible while you do the heavy lifting

  • Create a print master in Adobe RGB once the edit is finished, so the file aligns better with what many high-end printers and papers can show than sRGB does

  • Match the lab's requirements, since some pro and fine-art labs prefer Adobe RGB (or accept ProPhoto), while many consumer or high-street labs still expect sRGB only

The bottom line

Ultimately, it is all about making sure the final physical print gets as close to your vision as the printer and paper combination allows. Using a very restricted colour space for a high-end print setup is a bit like buying a sports car and never taking it out of second gear: it will still move, but you will never see what it is truly capable of.

How to Create a Photo Book using Blurb by Judy Lindo

Recently in The Photography Creative Circle Community, member Judy Lindo hosted a fabulous LIVE session where she showed step by step how to create a photo book using Blurb and Lightroom Classic.

Judy, who has experience as an editor for a photography group called the "City Clickers," guided attendees through the process, covering essential steps like planning, photo selection, book layout, and sequencing images for optimal narrative and visual flow.

The session also covered the technical side of Blurb's interface (book settings, page layouts, text options, and background customization), practical advice on cost-saving formats like the Zine, and tips on managing the book's print-on-demand nature and marketing options.

✅ Check out the audio overview below …

To check out the full 1½-hour session video recording, jump into The Photography Creative Circle Community on SKOOL with the 7 Day Free Trial …

CLICK / TAP for a 7 day free trial

Photoshop Compositing Hack with Harmonize

If you use Photoshop for compositing, you’ve probably tried out the Harmonize feature currently in Photoshop beta. It’s a great addition when blending objects into a scene, adjusting color and adding shadows to make everything look more natural. The problem is, Harmonize isn’t really designed for people - it tends to break down on human subjects.

But I’ve found a handy workaround that makes Harmonize incredibly useful when compositing people, particularly when it comes to the hardest part: creating realistic contact and cast shadows.

Why Shadows Are the Hardest Part

When you’re compositing, matching colors is one thing, but making sure the subject looks truly grounded in the scene is another. Shadows - both contact shadows right under the feet, and cast shadows stretching into the scene - are what really sell the effect. Without them, the subject looks like they’re floating.

Testing Harmonize on People

Harmonize works brilliantly on objects, but when applied to a person it usually ruins detail and texture. For example, in a composite with a Viking figure photographed in the studio, Harmonize messed up the fine detail in the image but still attempted to generate shadows. Not perfect, but promising.

The Workaround: Adding a Fake Light Source

Here’s where the trick comes in. By adding a fake light source into the background before running Harmonize, the results improve dramatically.

  • Duplicate your background layer.

  • With a soft white brush, paint a bright “light spot” in the sky area.

  • Run Harmonize again with your subject layer active.

This extra light influences how Harmonize interprets the scene and produces stronger, more believable contact and cast shadows.

Keeping Only the Shadows

Of course, we don’t want the strange coloring Harmonize often applies to people. To fix this:

  1. Rasterize the Harmonize layer to make it editable.

  2. Apply the layer mask so only the visible result remains.

  3. Add a black layer mask to hide everything.

  4. With a white brush, paint back just the shadows from the Harmonize layer.

Now you have realistic shadows under your subject, without losing the original detail and color of the person.

Bonus Tip: Dealing with Flyaway Hair

Compositing hair can be a nightmare. Instead of spending hours trying to cut out every strand, I’ve had success using Generative Fill.

  • Make a quick selection of the hair area.

  • In Generative Fill (Firefly Image 3 model), type something like “long brown wavy hair blowing in the wind”.

  • Photoshop generates natural-looking variations that save a ton of time.

Final Thoughts

Harmonize might not be built for people yet, but with this compositing hack it becomes a powerful tool for one of the trickiest parts of the job — shadows. Add in the Generative Fill trick for hair, and you’ve got a much faster way to create composites that look believable.

Give it a try and see how it works in your own projects.

Editing a Photo in Lightroom + Photoshop ... on an iPad

Not too long ago, I never would have considered editing my photos on an iPad. It always felt like something I should save for my desktop. But things have changed. Both Lightroom and Photoshop on the iPad have improved massively, and these days I often use them when traveling. More and more, this mobile workflow is becoming a real option for photographers.

In this walkthrough, I’ll show you how I edited an image completely on the iPad, starting in Lightroom, jumping over to Photoshop when needed, and then finishing off with a print.

Starting in Lightroom on the iPad

The photo I worked on was taken with my iPhone. The first job was the obvious one: straightening the image. In Lightroom, I headed to the Geometry panel and switched on the Upright option, which immediately fixed the horizon.

Next, I dealt with a distraction in the bottom left corner. Using the Remove Tool with Generative AI switched on, I brushed over the wall that had crept into the frame. Lightroom offered three variations, and the second one was perfect.

With those fixes made, I converted the photo to black and white using one of my own synced presets. A quick tweak of the Amount slider gave me just the right level of contrast.

Masking and Sky Adjustments

The sky needed attention, so I created a Select Sky mask. As usual, the AI selection bled slightly into the hills, so I used a Subtract mask to tidy things up. It wasn’t perfect, but it was good enough to move forward.

From there, I added some Dehaze and Clarity to bring detail back into the clouds. A bit of sharpening pushed the image further, but that also revealed halos around a distant lamppost. At that point, I knew it was time to send the photo into Photoshop.

Fixing Halos in Photoshop on the iPad

Jumping into Photoshop on the iPad takes a little getting used to, but once you know where things are, it feels very familiar.

To remove the halos, I used the Clone Stamp Tool on a blank layer set to Darken blend mode. This technique is brilliant because it only darkens areas brighter than the sample point. With a bit of careful cloning, the halos disappeared quickly.

I then added a subtle “glow” effect often used on landscapes. By duplicating the layer, applying a Gaussian Blur, and changing the blend mode to Soft Light at low opacity, the image gained a soft, atmospheric look.

Back to Lightroom and Printing

With the edits complete, I sent the image back to Lightroom. From there it synced seamlessly across to my desktop, but the important point is that all of the editing was done entirely on the iPad.

Before printing, I checked the histogram and made some final tweaks. Then it was straight to print on a textured matte fine art paper. Once the ink settled, the result looked fantastic — no halos in sight.

Final Thoughts

I’m not suggesting you should abandon your desktop for editing. Far from it. But the iPad has become a powerful option when you’re traveling, sitting in a café, or simply want to work away from your desk.

This workflow shows what’s possible: you can straighten, retouch, convert to black and white, make sky adjustments, refine details in Photoshop, and even prepare a final print — all from the iPad. And of course, everything syncs back to your desktop for finishing touches if needed.

Exciting times indeed.

AI Just Changed How We ENHANCE EYES in PHOTOSHOP 💥

Two Ways to Add Detail to Dark Eyes in Photoshop

If you’ve ever edited a portrait where the eyes are so dark there’s no detail to recover, you’ll know how tricky it can be. Brightening them often makes things look worse, leaving the subject with flat, lifeless eyes.

In the video above, I walk you through two powerful techniques that solve this problem:

  • A reliable method using Photoshop’s traditional tools

  • A newer approach that uses AI to generate realistic iris detail

Here’s a quick overview of what you’ll see in the tutorial.

The Traditional Photoshop Method

This approach has been in my toolkit for years. It doesn’t try to recover what isn’t there. Instead it creates the impression of natural iris texture.

By adding grain, applying a subtle radial blur, and carefully masking the effect, you can fake detail that looks convincing. A touch of colour adjustment finishes the look, leaving you with eyes that feel alive instead of flat.

It’s a manual process but it gives you full control, and the result is surprisingly realistic.

The AI-Powered Method

Photoshop’s Generative Fill takes things in a different direction. With a simple selection around the iris and a prompt like “brown iris identification pattern”, Photoshop can generate natural-looking iris textures, the kind of fine patterns you’d expect to see in a close-up eye photo.

Once the AI has created the base texture, you can enhance it further using Camera Raw:

  • brighten the iris

  • increase contrast, clarity, and texture

  • even add a little extra saturation

Add a subtle catchlight and the transformation is incredible. The eyes go from lifeless to full of depth and realism in seconds.

Why These Techniques Matter

Eyes are the focal point of most portraits. If they’re dark and featureless, the whole image suffers.

These two techniques, one traditional and one modern, give you reliable options to fix the problem. Whether you want the hands-on control of Photoshop’s tools or the speed and realism of AI, you’ll be able to bring that essential spark back into the eyes.

How I Calibrate My BenQ SW272U Display for Photography and Everyday Use

One of the questions I get asked most often is how to correctly calibrate a display for photo editing and printing. Getting a reliable screen-to-print match can save you a huge amount of time, paper, and frustration.

In this article, I’ll walk you through the calibration process I use on my BenQ SW272U display. I’ll share the exact settings I rely on for editing and printing, as well as a second calibration I use for everyday tasks like browsing the internet, emails, and watching videos.

The good news is that while I use BenQ’s Palette Master Ultimate software, the same principles apply no matter what brand or software you use.

Why Two Calibrations?

Your requirements are very different when you are editing images compared to when you are simply watching videos or scrolling through emails.

  • Photo and Print Calibration – designed for accuracy and consistency. A lower brightness, neutral white point, and subtle black levels that preserve shadow detail.

  • Everyday Use Calibration – designed for a punchier, brighter look. Strong contrast and deep blacks make general computing and video viewing more enjoyable.

With a hardware calibrated display, it is easy to switch between these profiles at the push of a button.

Tools I Use

  • BenQ SW272U Display (hardware calibration capable)

  • Calibrite Display Pro HL (connected via USB-C or USB adapter)

  • Palette Master Ultimate Software (BenQ’s calibration tool)

Calibration for Photo Editing and Printing

Step 1 – Connect and Configure

I plug my calibration device into the USB port on the monitor. On BenQ displays, make sure the USB setting is at 60 Hz in the on-screen menu, otherwise the device may not be recognised.

Step 2 – Start Palette Master Ultimate

Open the software, select your display, and choose the calibration device. Then go into Color Calibration and click Start.

Step 3 – Create a Custom Target

The default presets are not suitable for serious photography. They tend to be too bright, too cool, and overly contrasty. Instead, I set up my own target:

  • Luminance: 60 cd/m² (much lower than the default 120 cd/m², but it gives me the most accurate screen-to-print match in my workspace).

  • White Point: 6000K (to match the 6000K LED lighting in my studio).

  • Gamut: Adobe RGB.

  • Gamma: 2.2 (with Enhanced Gamma Calibration enabled for better black and white printing).

  • Black Point: 0.5 cd/m² (slightly lifted from pure black so shadow detail is visible).

I save this as a custom preset called Photo & Print and assign it to Calibration 1 on the monitor.

Step 4 – Run the Calibration

Place the sensor on the screen, tilt the display back slightly, and let the software run. The process takes about 7 minutes.

Step 5 – Check Results

The software generates a report showing how closely the calibration matched the targets. For example, my most recent run achieved:

  • Luminance: 58 cd/m² (target 60)

  • White Point: 6040K (target 6000K)

  • Black Point: 0.51 cd/m² (target 0.5)

These are excellent results. The key metric is Delta E, which measures accuracy. A value below 4 is considered good, below 2 is excellent. My calibration achieved an average of 0.53 with a maximum of 1.28.

This means my display is performing very accurately, giving me confidence in my editing and printing.
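In its simplest form (the original CIE76 definition), Delta E is just the straight-line distance between two colours in CIELAB space, so an average of 0.53 means the measured colours sat almost exactly where the targets asked. Calibration software often reports the newer CIEDE2000 variant, which weights the maths perceptually, but the idea is the same. A minimal sketch of the CIE76 version:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 Delta E: Euclidean distance between two CIELAB colours."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Two near-identical greys: L*a*b* (50, 0, 0) vs (50.4, 0.2, -0.2)
print(round(delta_e_76((50, 0, 0), (50.4, 0.2, -0.2)), 2))  # 0.49 - invisible in practice
```

A Delta E under 1 is generally below the threshold most people can see side by side, which is why an average of 0.53 counts as an excellent calibration result.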

Calibration for Everyday Use

When I am not editing or printing, I want a brighter, more contrasty display for daily computer use. Instead of creating a custom target, I simply use the built-in Photography preset in Palette Master Ultimate, but assign it to Calibration 2.

This gives me:

  • Bright luminance for comfortable viewing

  • White Point at D65 (6500K), which is the standard for TVs, tablets, and smartphones

  • Absolute black point for deep contrast

The calibration process is the same: place the sensor, let the software measure, and save the profile.

Final Thoughts

By creating two calibrations and assigning them to different preset buttons, I can switch between Photo & Print and Everyday Use in seconds.

For editing and printing, I get a display that shows me accurate colors, controlled brightness, and detail in the darkest areas. For browsing, video, and general use, I enjoy a bright, punchy image that looks fantastic.

If you own a hardware calibrated display like the BenQ SW272U, I highly recommend setting up both profiles. It makes your editing workflow more accurate and your day-to-day computing more enjoyable.

HOW I Edit THIS Portrait in 2025 – Full Lightroom Workflow (No Photoshop!)

In this video I show how I now retouch a stylised portrait entirely in Lightroom by making BIG use of Lightroom Masks, something that until recently was only possible in Photoshop …

*Newsletter Subscribers can download the same file I use in this tutorial to follow along step by step.