Come on Adobe 🙏🏻 We NEED THIS FEATURE ⚠️

I've put together this short video because I need to ask a favour from anyone who uses Photoshop Camera Raw or Lightroom. There's a fundamental feature that's been missing for years, and it seriously impacts how we edit our images and the results we achieve.

The Missing Piece in AI Masking

The issue centres on masking, specifically the AI-generated masks available in the masking panel. Being able to select a sky or subject with one click is genuinely incredible, but there's a massive gap in functionality. We have no way to soften, blur, or feather those AI masks after they've been created.

Instead, we're left with incredibly sharp, defined outlines that sometimes look like poorly executed cutouts. This makes blending our adjustments naturally into the rest of the image much harder than it needs to be.

Years HAVE PASSED

Adobe introduced the masking panel back in October 2021. It changed the way we work and represented a huge step forward. Yet here we are, years later, still without a simple slider to soften mask edges.

If you want to blend an adjustment now, you're often stuck trying to subtract with a large soft brush, using the intersect command with a gradient, or employing other crude workarounds to hide the transition. It feels like excessive work for what should be a standard function.

The Competition Gets It Right

What makes this even more frustrating is seeing other software solve this problem elegantly. The new Boris FX Optics 2026 release includes AI masking controls where a single slider softens and blurs the mask outline, and it works incredibly well. Luminar has been offering this functionality for quite a while too.

These tools understand that a mask is only as good as its edges. When the competition provides ways to feather and refine AI selections, the absence of this feature in Adobe's ecosystem feels glaringly obvious.

Adobe's Strengths and Opportunities

Don't get me wrong. I appreciate that Adobe constantly pushes boundaries. We've witnessed tremendous growth over recent years, with developments from third-party AI platforms like Google's Gemini, innovations from Black Forest Labs with Flux, and tools from Topaz Labs, alongside a steady stream of other emerging models. It's an exciting time to be a creator.

But I wish Adobe would take a moment to polish what we already have. Adding flashy new features is great, but refining the core workflows we use every single day would be a massive leap forward for all of us.

How You Can Help

Rather than simply complaining about this issue, I've created a feature request post in the Adobe forums. It's been merged with an existing thread on the same topic, which actually helps consolidate our voices into one place.

Here's what I need you to do: click the link below to visit the post and give it an upvote by clicking or tapping the counter number in the upper left. If we can get enough visibility on this, Adobe might finally recognise how much the community wants and needs this feature.

( LINK )

I believe refining existing tools is just as important as inventing new ones. Thank you for taking the time to vote. It really does make a difference when we speak up together.

The Smartphone AI Photography Controversy: What's Really Going On?

The smartphone photography world is having a bit of an identity crisis right now, and it's forcing us all to ask an uncomfortable question: when does making a photo look better cross the line into just making stuff up?

Samsung's Moon Photo Fiasco

The whole thing kicked off properly in March 2023 when a Reddit user called ibreakphotos ran a brilliant experiment. They took a high-resolution moon photo, blurred it until you couldn't see any detail at all, stuck it on a monitor, and photographed it from across the room using a Samsung Galaxy phone's Space Zoom. What happened next was pretty shocking: Samsung's camera added crisp crater details that simply weren't there in the blurry image.

This wasn't your typical computational photography where the phone combines multiple frames to pull out hidden detail. Samsung was using a neural network trained on hundreds of moon images to recognise the moon and basically paste texture where none existed. The company more or less admitted this in their technical explanation, saying they apply "a deep learning-based AI detail enhancement engine" to "maximise the details of the moon" because their 100x zoom images "have a lot of noise" and aren't good enough on their own.

The controversy came back round in August 2025 when Samsung's One UI 8 beta revealed they were working to reduce confusion "between the act of taking a picture of the real moon and an image of the moon". In other words, they admitted their AI creates moon photos rather than capturing them.

Other Companies Are At It Too

Samsung isn't the only one playing this game. Huawei faced similar accusations with its P30 Pro back in 2019, using AI to enhance moon photography beyond what the camera actually saw. The pattern is pretty clear: smartphone manufacturers are using AI to make up for physical limitations that no amount of clever software can genuinely overcome.

Google's Approach to Reality

Google went in a slightly different direction with the Pixel 8 and 8 Pro, introducing "Best Take", a feature that swaps facial expressions between different photos in a sequence. If someone blinks or frowns in a group shot, the phone finds a better expression from other photos and drops it into your chosen image.

They also launched "Magic Editor", which lets you erase, move, and resize elements in photos (people, buildings, whatever) with AI filling in the gaps using algorithms trained on millions of images. These tools work on any photo in your Google Photos library, not just ones you've just taken.

Tech commentators called these features "icky", "creepy", and potentially threatening to "people's (already fragile) trust of online content". Google's Isaac Reynolds defended the approach by saying that "people don't want to capture reality, they want to capture beautiful images" and calling the results "representations of a moment" rather than documentary records.

Photographers Are Fighting Back

The controversy has created what some observers call a "perfection paradox". As AI became capable of churning out flawless imagery at industrial scale in 2025, perfection itself lost its appeal. Social feeds filled up with technically immaculate visuals, but the images actually getting attention were the ones showing signs of real human touch.

Professional photographers responded by deliberately embracing film grain, motion blur, quirky colours, accidental flare, and cameras with intentional limitations. The message is clear: authenticity and imperfection have become the things that set you apart in an AI-saturated landscape.

One photographer noted that when clients were offered choices between AI-crafted footage and work shot by humans with clear creative perspectives, they "still gravitated to the latter". Despite AI's technical achievements, there's still a "gap between technological capability and cultural readiness".

The Trust Problem

The fundamental issue is that smartphone manufacturers market these AI enhancements as camera capabilities without clearly telling users when AI is manufacturing details rather than capturing them. Samsung's moon photos showcase this perfectly. Users think they've captured incredible detail through superior hardware and processing, when actually the phone has just overlaid trained data.

Professor Rafal Mantiuk from the University of Cambridge explained that smartphone AI isn't designed to make photographs look like real life: "People don't want to capture reality. They want to capture beautiful images". However, the physical limitations of smartphones mean they rely on machine learning to "fill in" information that doesn't exist in the photo, whether that's for zoom, low-light situations, or adding elements that were never there.

What's Happening Next

There's growing pressure on the industry for what's being called "the year of AI transparency" in 2026. People are demanding that manufacturers like Samsung, Apple, and Google disclose when and how AI is manipulating photos.

Google has started responding with detection tools, rolling out AI detection capabilities through Gemini that can spot artificially generated photos using hidden SynthID watermarks and C2PA metadata. These watermarks stay detectable by machines whilst remaining invisible to human eyes, surviving compression, cropping, and colour adjustments. The system analyses images on-device without sending data to external servers.

Samsung, meanwhile, continues embracing AI integration. They recently published an infographic declaring that future cameras "will only get smarter with AI" and describing their camera as "part of the intuitive interface that turns what users see into understanding and action". This language notably sidesteps the authenticity questions that plagued their moon photography feature.

The Cultural Pushback

Perhaps most tellingly, younger users are increasingly seeking cameras that produce "real" and "raw" photos rather than AI-enhanced imagery, driving a resurgence of early-2000s compact digital cameras. This represents a rebellion against smartphone AI manipulation and a genuine desire for photographic authenticity.

The controversy forces a broader reckoning about what photography means in the AI era. As one analysis noted, 2025's deeper story wasn't simply that AI improved, it was "the confrontation it forced: what counts as real, what counts as ours, and what creativity looks like when machines can mimic almost anything".

The Bottom Line

The core issue is straightforward: smartphone manufacturers are using AI to create photographic details that cameras never actually captured, then marketing these capabilities as camera performance rather than AI fabrication.

Companies haven't clearly disclosed when AI is manufacturing versus enhancing, which is eroding trust in smartphone photography. Real photographers are differentiating themselves by embracing authenticity and imperfection as AI floods the market with technically perfect but soulless imagery.

And 2026 is shaping up as a pivotal year for AI transparency demands and authenticity verification tools.

This controversy represents more than just technical debates. It's fundamentally about trust, authenticity, and what we expect from our photographic tools in an increasingly AI-mediated world.

Picture This - A Musical Gift 🎸

Last Friday I was left completely speechless!

I logged in to a live video chat to join members of The Photography Creative Circle for our weekly coffee hour, and immediately there seemed to be more members present than usual … way more.

Shortly after logging in I found out why, as member and dear friend Jean-François Léger began reading out something he’d prepared …

Glyn, In the spirit of the holiday season, we have a surprise for you today.

About six months ago, you shared a vision with us by creating this Photographic Creative Circle. At first, we all joined to learn from you, to master our cameras and refine our post-processing skills. But very quickly, something much deeper began to take shape.

It has become a place where we share our lives, celebrate our successes, and support one another through difficult times. Photography, in the end, became the beautiful pretext for us to become true friends.

You laid the foundation for this community, now this community wanted to create something for you that gives full meaning to the word 'community.'

Glyn, this is our way of saying a big thank you for the commitment, the generosity and the tremendous work you’ve done for all of us.

So Picture this!

And this is what I was presented with …

Written, recorded and edited by Jean-François, with contributions from other members of the community, including two in particular who have suffered traumatic losses in their families in recent weeks … this blew me away!

Such an incredible gift that I will treasure forever … and be playing over and over again ❤️

What Are Those Mystery * and # Symbols in Photoshop??? 🤔

If you spend any amount of time in Adobe Photoshop, you become very familiar with the document tab at the top of your workspace. It tells you the filename, the current zoom level, and the colour mode and bit depth, for example (RGB/8).

But sometimes, little cryptic symbols appear next to that information. Have you ever looked up and wondered, "Why is there a random hashtag next to my image name?" or "What does that little star mean?"

Nothing is broken. These symbols are just Photoshop's way of giving you a quick status update on your file and its colour management, without you needing to dig through menus.

What These Symbols Tell You

The symbols represent:

  • The save state of your document

  • Whether it has a colour profile attached

  • Whether the document's profile differs from your working space

Here is a quick guide to decoding those little tab hieroglyphics.

1. The Asterisk After the Filename ("Save Me!" Star)

What it looks like: … (RGB/8) *

What it means: An asterisk hanging right off the end of your actual filename means you have unsaved changes.

When it appears: Photoshop is hypersensitive here. The star will appear if you:

  • Move a layer one pixel

  • Brush a single dot onto a mask

  • Simply toggle a layer's visibility

  • Do pretty much anything

It's a gentle reminder that the version on screen is different from the version saved on your hard drive. If the computer crashed right now, you would lose that work.

The fix: Press Cmd+S (Mac) or Ctrl+S (Windows). The moment you successfully save the file, that little star will disappear because Photoshop now considers the document "clean" again.

2. The Asterisk ("Profile Difference" Star)

What it looks like: … (RGB/8*)

What it means: This is a different symbol in a different spot. If the star is tucked inside the parentheses next to the bit depth (the 8 or 16), it's no longer talking about unsaved work but about colour management.

In current Photoshop versions, an asterisk here generally means the file's colour profile situation does not match your working RGB setup. For example, you're working in sRGB as your default, but the image you opened is tagged with Adobe RGB (1998). In other words, the document is "speaking" a slightly different colour language than your default workspace.

Should you worry?

  • Usually, no. As long as you keep the embedded profile and your Colour Settings are sensible, Photoshop can still display the colours accurately even if the document profile and working space are different.

  • It's worth paying attention, though, if you're planning to combine several images into one document. You'll want a consistent profile for predictable colour when you paste, convert or export.

3. The Hash Symbol # ("Untagged" Image)

What it looks like: … (RGB/8#)

What it means: If you see the hash/pound/hashtag symbol inside the parentheses, it means the image is Untagged RGB. There's no embedded colour profile at all, so Photoshop has no explicit instructions telling it how those RGB numbers are supposed to be interpreted.

Why this happens: This is very common with:

  • Screenshots

  • Many web images

  • Older files where metadata was stripped out

When Photoshop opens an untagged image, it has to assume a profile based on your Colour Settings (typically your RGB working space, often sRGB by default), which may or may not match how the file was originally created.
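If you're curious and want to check a file outside Photoshop, here's a minimal sketch using Python and the Pillow library (my own illustration, not an Adobe tool, and the filenames are hypothetical). It simply reports whether an image carries an embedded profile, which is the difference between a tagged document and the untagged # situation.

```python
# Minimal sketch with Pillow (pip install pillow): report whether an image
# is tagged with an embedded ICC profile or untagged (Photoshop's # case).
import io

from PIL import Image, ImageCms

def describe_profile(path):
    with Image.open(path) as im:
        icc_bytes = im.info.get("icc_profile")  # None or empty if untagged

    if not icc_bytes:
        print(f"{path}: untagged (no embedded profile)")
        return

    profile = ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
    print(f"{path}: tagged with '{ImageCms.getProfileDescription(profile).strip()}'")

describe_profile("screenshot.png")    # hypothetical untagged screenshot
describe_profile("edited-photo.jpg")  # hypothetical tagged export
```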

Should you worry?

  • If colour accuracy is critical (printing, branding, matching other assets), yes, you should pay attention to that #. Different assumptions about the profile can easily lead to differences in appearance between systems.

  • You can fix this by going to Edit > Assign Profile and choosing the correct profile. For many web-style images, assigning sRGB is a sensible starting point, but be aware that assigning the wrong profile will change how the image looks, so use it when you have a good idea of the original intent.

Summary Cheat Sheet

(RGB/8) *

  • This document has unsaved changes

  • Save the file and the star will disappear

(RGB/8*)

  • There's a colour-profile difference or related colour-management status

  • Typically means the document's profile is not the same as your current working RGB space

(RGB/8#)

  • The image is Untagged RGB, with no embedded colour profile

  • Photoshop has to assume a profile based on your settings

Catching the New Year's Day Sunrise 2026 ☀️

Got up early and popped down to the local beach to photograph the sunrise, and Mother Nature did not disappoint 😍

Happy New Year 🎉

Fuji X-T5
Fuji 18mm f/1.4 @ f/11
2.5 sec, f/11, ISO 125

NiSi 3 Stop JetMag Pro ND Filter

Benro Rhino Carbon Fibre Tripod

Images below captured on my iPhone 17 Pro Max using the Leica Camera App and the Greg WLM B&W setting…

🎨 Colour Space Conversion Explained 🪚 ProPhoto vs Adobe RGB vs sRGB

This post follows on from my previous article where I explained what colour spaces actually are. If you haven't read that yet, you can check it out here: Colour Spaces Simplified.

If you have read that one, you know that ProPhoto RGB is a massive container of colours, Adobe RGB sits in the middle as a wide-gamut space popular for printing and high-end displays, and sRGB is a smaller container that became the standard default for monitors, operating systems, and the web. But knowing what they are is only half the battle. The real magic, and the potential disaster, happens when we move an image from one container to another. This process is called Colour Space Conversion.

If you don't understand what happens during this conversion, you are gambling with the final look of your images, so let's look under the bonnet at the mechanics of moving colour.

The Core Problem: The Definition of "Red" Changes

To understand conversion, you need to grasp one slightly technical concept: Pixels are just numbers. Every pixel in your digital photo is defined by three numbers: Red, Green, and Blue (RGB). In an 8-bit image, these numbers run from 0 to 255.

If a pixel is pure, maximum red, its value is R:255, G:0, B:0. Here is the mind-bending part: the same numbers can describe different actual colours, depending on which colour space the file is in.

ProPhoto's R:255, G:0, B:0 is an extremely saturated, incredibly intense deep red, defined right at the long-wavelength edge of what human vision can see; on a real device it gets mapped to the most saturated red your display or printer can show. sRGB's R:255, G:0, B:0 is the red of a standard fizzy drink can: still bright, but nowhere near as intense as the ProPhoto version. Adobe RGB is the odd one out here: it actually shares sRGB's red and blue primaries, so its pure red is essentially the same colour as sRGB's. The extra room that makes Adobe RGB so useful for inkjet and press work sits mainly in the saturated greens and cyans, where its maximum values describe noticeably more intense colours than sRGB can.

When you convert an image, you aren't just moving pixels around; you are fundamentally translating their meaning.
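If you'd like to see that translation happening in actual numbers, here's a tiny sketch using Python and Pillow's ImageCms module (my own illustration, not part of Photoshop). It takes maximum green, which is where sRGB and Adobe RGB genuinely differ, and asks what numbers that same colour needs once it's described in Adobe RGB. The "AdobeRGB1998.icc" path is an assumption: point it at whichever wide-gamut profile you have on disk.

```python
# A minimal sketch: the same visible colour needs different RGB numbers
# depending on which colour space is describing it.
from PIL import Image, ImageCms

srgb = ImageCms.createProfile("sRGB")                    # built-in sRGB
adobe_rgb = ImageCms.getOpenProfile("AdobeRGB1998.icc")  # hypothetical path

# One pixel of maximum sRGB green...
green = Image.new("RGB", (1, 1), (0, 255, 0))

# ...re-expressed as Adobe RGB numbers (same colour, different values).
to_adobe = ImageCms.buildTransform(srgb, adobe_rgb, "RGB", "RGB")
converted = ImageCms.applyTransform(green, to_adobe)

print("sRGB (0, 255, 0) in Adobe RGB numbers:", converted.getpixel((0, 0)))
```

The numbers change even though the colour hasn't, and that's exactly the translation every conversion performs.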

The Analogy: Moving from a Mansion to a Studio Flat

Think of ProPhoto RGB as a giant mansion. You have massive rooms and enormous furniture: a grand piano, huge chandeliers, and king-sized beds.

Think of Adobe RGB as a generous three-bedroom house: much more space than a studio, plenty of room for big pieces, but not quite the sprawling scale of the mansion. Think of sRGB as a small studio flat. It's functional and cosy, but it has very strict space limits. The Conversion Process is moving day. You have to fit everything from the mansion into the house, or all the way down into the studio flat.

Many items fit easily. Your normal clothes, books, and kitchen plates (these represent the standard skin tones, sky blues, and foliage greens in your photo) fit into all three spaces without issue; they exist comfortably in ProPhoto, Adobe RGB, and sRGB.

The "Out of Gamut" Problem

The problem arises when you try to move the grand piano (highly saturated colours, like a vibrant sunset orange or a neon flower petal). The piano is "out of gamut". It might just squeeze into the Adobe RGB house but still refuse to fit through the door of the sRGB studio flat, or it may already be too big even for Adobe RGB.

You now face a critical choice on how to handle that piano. This choice is what we call Rendering Intent.

The Solution: How We Fit the Piano

When you convert colours in Photoshop (via Edit > Convert to Profile), you are telling the software how to fit the furniture. You might be going from ProPhoto to Adobe RGB for print prep, or straight from ProPhoto/Adobe RGB down to sRGB for the web. You generally have two choices for photography:

Choice 1: Relative Colourimetric (The "Saw" Method)

This method prioritises accuracy for the colours that do fit.

What it does: It looks at the grand piano, realises it won't fit, and saws off the legs until it does. In Photo Terms: This is called Clipping. The software takes any colour that is too saturated for the destination space (whether that's Adobe RGB or sRGB) and maps it to the closest colour at the edge of what that space can display, which can flatten subtle gradations in those brightest, most saturated areas.

The Good: All your normal colours (skin tones, etc.) that fall inside the destination space remain essentially identical to the original.

The Bad: You can lose detail in highly saturated highlights where colours are pushed beyond that space and get clipped.

Choice 2: Perceptual (The "Shrink Ray" Method)

This method prioritises the relationship between colours.

What it does: It uses a sci-fi shrink ray on all your furniture just enough so that the grand piano fits through the door. In Photo Terms: To make room for the highly saturated colours, it slightly compresses the entire colour range of the image so that out-of-gamut colours are brought inside Adobe RGB or sRGB with smoother transitions.

The Good: You keep smoother detail and gradation in your bright sunsets and flowers; the piano fits whole, and the relationships between colours tend to look natural.

The Bad: Your entire image might look slightly less punchy or saturated than the original because everything got shrunk a little. How strong this effect is depends on the specific profiles involved.

In many simple RGB-to-RGB conversions (for example, between Adobe RGB and sRGB), perceptual and relative colourimetric may look very similar, but the intent choice becomes especially important when mapping from a wide space like ProPhoto or Adobe RGB into printer profiles with more complex colour ranges.
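If you like to experiment outside Photoshop, here's a rough sketch of that same choice using Pillow's ImageCms module in a recent version of the library; it's only my illustration of the intent setting, not Adobe's conversion engine, and both the master file and the Adobe RGB profile path are assumptions.

```python
# Rough sketch of the "saw" vs "shrink ray" choice. In Photoshop the
# equivalent is Edit > Convert to Profile with the Intent drop-down.
from PIL import Image, ImageCms

srgb = ImageCms.createProfile("sRGB")
adobe_rgb = ImageCms.getOpenProfile("AdobeRGB1998.icc")  # hypothetical path

master = Image.open("sunset-master.tif").convert("RGB")  # assumed Adobe RGB master

# Relative colourimetric: in-gamut colours stay put, out-of-gamut colours clip.
relative = ImageCms.profileToProfile(
    master, adobe_rgb, srgb,
    renderingIntent=ImageCms.Intent.RELATIVE_COLORIMETRIC,
)

# Perceptual: the whole range is gently compressed so saturated detail survives.
perceptual = ImageCms.profileToProfile(
    master, adobe_rgb, srgb,
    renderingIntent=ImageCms.Intent.PERCEPTUAL,
)

relative.save("sunset-relative.jpg", quality=95)
perceptual.save("sunset-perceptual.jpg", quality=95)
```

Flicking between the two exports on a saturated sunset is the quickest way to see the saw and the shrink ray side by side.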

Why You Must Control This (Don't let the browser decide!)

This is the crucial takeaway.

If you upload an Adobe RGB or ProPhoto image directly to the web, you are relying on the browser, operating system, and device to handle that wide-colour file correctly, and that is risky. Many systems expect sRGB, some sites strip embedded profiles, and some viewers may ignore or mishandle wide-gamut profiles, especially if the file is untagged or metadata has been removed. The result can be incorrect colour or harsh clipping and posterisation in saturated areas, or simply dull, wrong-looking colour where Adobe RGB or ProPhoto numbers are interpreted as if they were sRGB.

By doing the conversion yourself in Photoshop or Lightroom before you export, you get to choose. You can convert from ProPhoto or Adobe RGB down to sRGB, select relative or perceptual rendering intent, and preview the result rather than leaving those decisions to whatever defaults the viewer's browser and device happen to use.

Is the image mostly portraits with normal colours? The "Saw" method (Relative Colourimetric) to sRGB or Adobe RGB might be perfect, because it keeps all in-range colours very accurate and most portrait colours already fall comfortably inside those spaces. Is it a vibrant landscape with intense colours that really push ProPhoto or Adobe RGB? You probably need the "Shrink Ray" (Perceptual) to save the smooth detail in those saturated areas.

You are the Artist. You should decide how your furniture gets moved, not the moving company.

🎨 Colour Spaces Simplified: A Practical Guide

Choosing the right colour space can feel like a bit of a headache, especially when you just want to get on with your work and make things look great. It is one of those technical topics that often gets over-complicated with jargon, but it really comes down to how much colour your file can hold and where that file is eventually going to live.

Big picture: colour spaces

Think of a colour space like a box of crayons. Some boxes have the basic 8 colours, while others have 128, and each digital colour space is just a different "box" with its own range of possible colours (gamut) inside. For common RGB spaces like sRGB, Adobe RGB, Display P3, and Rec. 709, that gamut is usually shown as a triangle sitting inside the horseshoe-shaped map of all colours the human eye can see.

sRGB: the universal baseline

Created in the mid-1990s by HP and Microsoft, sRGB was designed as a standard colour space that typical monitors, printers, operating systems, and web browsers could all assume by default. If you are posting a photo to Instagram, a blog, or sending it to a standard consumer lab for prints, sRGB is the safest choice because it is the "lowest common denominator" most devices expect.

Use case: Web, social media, and general consumer printing where you cannot control colour management. sRGB gives predictable, consistent colour on the widest range of devices.

Limitation: sRGB is a relatively small "box of crayons," especially in saturated greens and cyans, so it cannot represent all the rich colours modern cameras can capture.

Adobe RGB (1998): print-oriented wider gamut

Adobe RGB (1998) was developed by Adobe to cover a wider gamut than sRGB, with more reach into greens and some cyans, and to better match the range achievable by high-quality CMYK printing processes. On a gamut diagram you can see Adobe RGB extending further towards the green corner than sRGB, which is particularly useful for subjects like foliage, seascapes, and some print workflows.

Use case: Professional printing and high-end photography workflows where files will go to colour-managed printers or presses that can exploit the wider gamut.

Limitation: If you upload an Adobe RGB image to a non-colour-managed website, the browser often treats it as sRGB, which makes it look dull and washed out because the extra gamut is compressed incorrectly.

ProPhoto RGB: extremely wide editing space

ProPhoto RGB (also known as ROMM RGB) is a very large-gamut colour space developed by Kodak, designed to include almost all real-world surface colours and even some mathematically defined "imaginary" colours that lie just outside the human-visible locus. Because its gamut is so wide, it comfortably contains colours that fall outside both sRGB and Adobe RGB, which can occur in highly saturated parts of modern digital captures.

When you shoot RAW, the camera records sensor data that is not yet in any RGB colour space; the RAW developer chooses a working space for editing. Applications like Lightroom use an internal working space (often described as MelissaRGB or a linear ProPhoto variant) that shares ProPhoto's primaries, giving you a ProPhoto-sized gamut while you make adjustments.

Use case: As a working or internal space for developing high-quality RAW files, where a very wide gamut helps avoid clipping intense colours during heavy editing.

Limitation: ProPhoto is so large that using it in 8-bit can cause banding in gradients, so it should be paired with 16-bit editing to maintain smooth transitions. It is also a poor choice as a delivery space for general viewing or the web, because most devices and browsers either do not handle it correctly or cannot display its gamut, leading to flat or strange colour; final exports for sharing are usually converted to sRGB or at most Adobe RGB/P3.

Using a ProPhoto-based space during editing gives you room to "hold" all the colour the RAW data can produce, but the RAW itself is not stored "in" ProPhoto.

What about Display P3?

If you use an iPhone, a Mac, or a recent high-end monitor, you have probably seen Display P3 mentioned. It is a modern wide-gamut colour space, built from the cinema P3 primaries but adapted to the D65 white point and an sRGB-style tone curve used on typical computer displays.

To understand it, it helps to start with DCI-P3. That is the "box of crayons" designed for digital cinema projectors in movie theatres, with a gamma around 2.6 and a slightly green-tinted white balanced for xenon-lamp projection. Its gamut reaches significantly further than sRGB in reds and greens, which is one reason properly graded movies can look so saturated and "punchy" on the big screen.

Display P3 is essentially a more desktop-friendly variant of that cinema colour. It uses the same P3 primaries, but adopts the D65 white point shared by sRGB and Adobe RGB, and an sRGB-like transfer curve (roughly gamma 2.2), making it better suited to normal monitor and device viewing.

How it compares to Adobe RGB

Adobe RGB and Display P3 are both wide-gamut spaces of similar overall volume, but with different shapes.

  • Adobe RGB reaches further into deep greens and blues, which is why it has long been favoured for print workflows where those hues matter and where printers and papers can take advantage of that gamut

  • Display P3 pushes more into richly saturated oranges and reds, while not extending quite as far as Adobe RGB in some green-blue regions

Use case: If you are creating content primarily for modern wide-gamut smartphones, tablets, and laptops that support Display P3 and are properly colour-managed, working in Display P3 lets you use colours that go beyond sRGB, so images can look more lifelike and vibrant on those devices. On older or strictly sRGB-only screens, though, those extra colours are either mapped back into sRGB or clipped, so the advantage largely disappears.

Which one should you use?

A simple, robust way to stay sane is to separate "editing space" from "delivery space." During RAW editing, using a very wide-gamut space like ProPhoto (or Lightroom's ProPhoto-based internal space) in 16-bit keeps as much colour information as possible while you make adjustments. When you are finished and ready to share or upload, you convert a copy of that master to sRGB (or to Adobe RGB/P3 if you are targeting a fully colour-managed, wide-gamut environment), so it looks correct on most people's devices.

This approach gives you a master file that preserves the widest feasible gamut for future prints or re-edits, plus final exports tailored to where the image will actually live (web, print, or video) without sacrificing consistency for your viewers.
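As a concrete illustration of that master-versus-delivery split, here's a small sketch (Python and Pillow again, with hypothetical filenames and profile path) that converts a wide-gamut master to sRGB and embeds the sRGB profile, so colour-managed browsers know exactly how to interpret the numbers.

```python
# Minimal sketch of the "delivery copy" step: wide-gamut master in,
# sRGB web copy out, with the sRGB profile embedded on export.
from PIL import Image, ImageCms

srgb = ImageCms.createProfile("sRGB")
adobe_rgb = ImageCms.getOpenProfile("AdobeRGB1998.icc")    # hypothetical path

master = Image.open("master-adobergb.tif").convert("RGB")  # hypothetical master
web_copy = ImageCms.profileToProfile(
    master, adobe_rgb, srgb,
    renderingIntent=ImageCms.Intent.PERCEPTUAL,
)

# Embed the sRGB profile rather than shipping the file untagged.
srgb_bytes = ImageCms.ImageCmsProfile(srgb).tobytes()
web_copy.save("web-copy.jpg", quality=90, icc_profile=srgb_bytes)
```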

Creating a print master in Adobe RGB

When it's time to take an image off the screen and put it onto paper, I often convert my files to Adobe RGB as a dedicated "print master." It might seem like an extra step, but there is a very practical reason for it: it gives the print system more of the colours that high-quality printers can actually reproduce, especially beyond plain sRGB.

Matching what the printer can really do

Many modern high-quality inkjet and lab printers can reproduce certain colours (particularly some vibrant cyans, deep blues, and rich greens) that extend outside the sRGB gamut. If a scene or RAW file contains those more saturated hues, converting everything into sRGB first can compress or clip them before the printer even gets a chance to do its job, so the print may not show all the nuance that was originally captured.

By keeping the editing in a wide space (like ProPhoto RGB or Lightroom's internal MelissaRGB space, which uses ProPhoto-based primaries) and then creating a print file in Adobe RGB, the file can still describe many of those "extra" printable colours that an early conversion to sRGB would have clipped or compressed away.

In real-world terms, this often shows up as:

  • More believable foliage

  • Subtler turquoise water

  • More faithful fabric tones when the printer and paper are capable of that gamut

Bridging the gap to CMYK

The ink in a printer behaves very differently from the light on your screen: monitors work in RGB (Red, Green, Blue), while printers work in CMYK (Cyan, Magenta, Yellow, Black) or multi-ink variants. A printer's CMYK gamut has a lumpy, irregular shape. There are regions, especially in certain blue-green areas, where it stretches outside sRGB, and other regions (like very bright, saturated oranges and yellows) where it is actually smaller than sRGB.

Adobe RGB was designed to better encompass typical CMYK print gamuts, so it overlaps much more closely with the colours high-quality printing systems can produce. It does not literally "cover every possible CMYK colour," but it does include most of the printable colours that sit outside sRGB, which means you are less likely to be "leaving colour on the table" when you hand a file to a good, colour-managed print workflow.

How this fits into a print workflow

  • Edit in a very wide-gamut space (e.g., ProPhoto RGB or Lightroom's MelissaRGB internal space) to keep as much colour information from the RAW as possible while you do the heavy lifting

  • Create a print master in Adobe RGB once the edit is finished, so the file aligns better with what many high-end printers and papers can show than sRGB does

  • Match the lab's requirements, since some pro and fine-art labs prefer Adobe RGB (or accept ProPhoto), while many consumer or high-street labs still expect sRGB only

The bottom line

Ultimately, it is all about making sure the final physical print gets as close to your vision as the printer and paper combination allows. Using a very restricted colour space for a high-end print setup is a bit like buying a sports car and never taking it out of second gear: it will still move, but you will never see what it is truly capable of.

Why "Digital Infinity" is Killing Your Creativity (and How to Fix It)

We often see videos on YouTube claiming that one "magic trick" will change your life, but they usually fall a little bit flat. However, I recently ran an experiment in our creative community that I don't just believe will transform your photography: I know it will.

We live in an age of "digital infinity." Our phones can hold thousands of images, and it costs us absolutely nothing to press the shutter button. But this unlimited choice has a hidden downside: it can make us lazy.

To combat this, I set a challenge for our photographers that was brutally simple, and the results were completely unexpected.

The 10-Exposure Challenge

The rules were designed to strip away the safety nets we've become so reliant on:

  1. Only 10 exposures. That's it.

  2. No fixing it in post. What you shoot is what you get.

  3. No do-overs. If you click it, it counts, even if it's an accidental selfie.

The "Maddening" First Step

For many, the first reaction wasn't creative bliss; it was pure frustration. We had a studio photographer, Sarah, who is used to total control over lighting and props. Suddenly, out in the real world with only 10 frames, that control vanished. She described the experience as "maddening."

Another photographer, Francois, usually shoots a hundred frames just to get one perfect food shot. Having to tell the entire story of a meal in just 10 frames was a massive mental shift.

The Turning Point: Slowing Way Down

Once the frustration settled, something powerful happened. The photographers started to see this limitation as a lens that focused their attention.

They were forced to stop, look, and truly see what was in front of them. One member, Brian, took the challenge on his usual 90-minute walk. It ended up taking him three hours to take just 10 photos. That is the pace of deliberate creation.

What We Learnt

This challenge acted like a time machine, throwing us back to the discipline of the film era where every shot cost money. Here are the big takeaways:

  • Visualise first: We rediscovered the importance of walking around and using our eyes to find the angle before ever lifting the camera.

  • Embrace imperfection: Francois realised that his industry's obsession with "perfection" wasn't authentic. By embracing little imperfections, his photos felt more real and more appetising.

  • Constraint is liberating: Without the pressure of endless choices and editing, the simple act of taking a picture became joyful again.

The Final Verdict

Would they do it again? It was a resounding yes across the board. One member was so inspired he actually went back to shooting on real film.

The value wasn't really in the final 10 images; it was about rediscovering a mindful, deliberate way of working.

So, I have a question for you. In a world of unlimited options, what's one constraint you could impose on yourself to unlock a new level of creativity?

Give this challenge a go. I guarantee you'll see a difference and feel like an artist again.

UK Drone Rules are Changing

It looks like some big updates are coming to the UK drone scene from 1 January 2026, especially around how drones are classed, identified, and registered. Here is a plain-English summary that reflects the latest CAA guidance.

1. New UK class marks

From 1 January 2026, most new drones sold in the UK for normal hobby and commercial flying will carry a UK class mark from UK0 to UK6. This mark shows what safety standards the drone meets and which set of rules apply.

  • UK0: Very light drones under 250g, including many small “sub‑250” models.

  • UK1–UK3: Heavier drones intended for typical Open Category flying, with increasing levels of safety features as the class number goes up.

  • UK4: Mostly used for model aircraft and some specialist use.

  • UK5 & UK6: Higher‑risk drones designed for more advanced or specialist operations, usually in the Specific Category.

EU C‑class drones:
If you already own an EU C‑marked drone, it will continue to be recognised in the UK until 31 December 2027, so you can keep flying it under the transitional rules until then.

2. Remote ID – your “digital number plate”

Remote ID (RID) is like a digital number plate for your drone: it broadcasts identification and flight information while you are in the air. This helps the CAA, police and other authorities see who is flying where, and pick out illegal or unsafe flights.

  • From 1 January 2026

    • Any UK‑class drone in UK1, UK2, UK3, UK5 or UK6 must have Remote ID fitted and switched on when it is flying.

  • From 1 January 2028 (the “big” deadline)

    • Remote ID will also be required for:

      • UK0 drones weighing 100g or more with a camera.

      • UK4 drones (often model aircraft) unless specifically exempted.

      • Privately built drones 100g or more with a camera.

      • “Legacy” drones (no UK class mark) 100g or more with a camera.

What RID does (and does not) share:

  • It broadcasts things like your drone’s location, height and an identification code (serial/Operator ID), plus some details about the flight.

  • It does not broadcast your name or home address to the general public; it is designed for safety and enforcement, not doxxing pilots.

3. Registration

The UK is tightening registration so that more small camera drones are covered. The key change is that the threshold drops from 250g to 100g for many requirements.

From the new CAA table:

  • Flyer ID – for the person who flies

    • Required if your drone or model aircraft weighs 100g to less than 250g (including UK0), and for anything 250g or heavier.

  • Operator ID – for the person responsible for the drone

    • Required if your drone:

      • Weighs 100g to less than 250g and has a camera; or

      • Weighs 250g or more, even without a camera.

    • If your drone is 100–250g without a camera, an Operator ID is optional (though it is still recommended).

In everyday terms:

  • If your drone has a camera and weighs 100g or more, you should expect to need both an Operator ID and a Flyer ID.

  • Sub‑100g aircraft remain outside the legal registration requirement, but the CAA still recommends taking the Flyer ID test for knowledge and safety.
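If it helps to see those thresholds written out as logic, here's an informal sketch in Python of how I read the table above; it's just an aid to understanding, not official CAA guidance, so always check the real rules for your aircraft.

```python
# Informal sketch (not legal advice) of the new registration thresholds:
# weight in grams plus whether the aircraft carries a camera.
def uk_registration_needed(weight_g: float, has_camera: bool) -> dict:
    flyer_id = weight_g >= 100                   # 100g and up needs a Flyer ID
    operator_id = (
        weight_g >= 250                          # 250g+ always needs an Operator ID
        or (weight_g >= 100 and has_camera)      # 100-250g only if it has a camera
    )
    return {"flyer_id": flyer_id, "operator_id": operator_id}

# A 135g camera drone: expect to need both IDs under the new thresholds.
print(uk_registration_needed(135, has_camera=True))
# A 220g drone without a camera: Flyer ID yes, Operator ID optional.
print(uk_registration_needed(220, has_camera=False))
```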

4. Night flying

If you fly at night, your aircraft must now have at least one green flashing light turned on. This makes it easier for other people and aircraft to see where it is and in which direction it is moving.

A2 CofC and how close you can fly

The A2 Certificate of Competency (A2 CofC) still matters for flying certain drones closer to people. Under the new regime:

  • With an A2 CofC, you can fly UK2‑class drones:

    • As close as 30m horizontally from uninvolved people in normal operation.

    • Down to 5m in a dedicated “low‑speed mode” if your drone supports it and you comply with all conditions.

  • For legacy drones under 2 kg, you should still keep at least 50m away from uninvolved people when using A2‑style privileges under the transitional rules.

Always check the latest CAA drone code for the category you are flying in, as extra restrictions may apply depending on location and type of operation.

5. What you need to do

If you are already flying legally today, you do not need to panic, but you should plan ahead over the next couple of years.

  • Now–end of 2025

    • Make sure you have a valid Flyer ID and Operator ID if your drone falls into the current registration thresholds.

  • From 1 January 2026

    • When buying a new drone, check that it has the correct UK class mark and built‑in Remote ID if it is UK1, UK2, UK3, UK5 or UK6.

    • Use a green flashing light when flying at night.

  • By 1 January 2028

    • If you own a legacy drone or UK0/UK4 aircraft 100g or more with a camera, ensure you are ready to comply with Remote ID, either through built‑in hardware or an approved add‑on.

If you keep an eye on these dates and make sure your registration, class marks and Remote ID are in order, your current setup should remain usable under the new rules for years to come.

Choosing the Right AI Model in Photoshop: A Credit-Smart Guide

If you've opened Photoshop recently, you've likely noticed that Generative Fill has received a significant upgrade. The platform now offers multiple AI models to choose from, each with distinct capabilities. However, there's an important consideration: these models vary considerably in their generative credit costs.

Understanding the Credit Structure

Adobe's proprietary Firefly model requires only 1 credit per generation, making it the most economical option. The newer partner models from Google (Gemini) and Black Forest Labs (FLUX), however, are classified as premium features and consume credits at a substantially higher rate. Depending on the model selected, you can expect to use between 10 and 40 credits per generation.

For users looking to maximize their monthly credit allocation, selecting the appropriate model for each task becomes an essential consideration.

Firefly: Your Go-To Workhorse (1 Credit)

Firefly serves as the default option and remains the most practical choice for everyday tasks. At just 1 credit per generation, it offers excellent efficiency for routine editing work. Whether you need to remove unwanted objects, extend backgrounds, or clean up imperfections, Firefly handles these tasks effectively.

Additionally, it benefits from full Creative Cloud integration, Adobe's commercial-use guarantees, and Content Credentials support. For standard production workflows, it's difficult to find a more cost-effective solution.

The Premium Players

The partner models represent a significant increase in cost, but they also deliver enhanced capabilities. Adobe operates these models on external infrastructure, which accounts for their higher credit requirements. These models excel at handling complex prompts, challenging lighting scenarios, and situations requiring exceptional realism or fine detail.

The credit costs break down as follows:

  • Gemini 2.5 (Nano Banana): 10 credits

  • FLUX.1 Kontext [pro]: 10 credits

  • FLUX.2 Pro: 20 credits

  • Gemini 3 (Nano Banana Pro): 40 credits

All of these models draw from the same credit pool as Firefly, but they deplete it considerably faster.
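To put those costs into perspective, here's a quick bit of arithmetic on a hypothetical monthly allowance of 1,000 credits (the exact allowance depends on your plan, so treat the figure purely as an example):

```python
# How far a hypothetical 1,000-credit monthly pool stretches per model.
monthly_credits = 1_000

cost_per_generation = {
    "Firefly": 1,
    "Gemini 2.5 (Nano Banana)": 10,
    "FLUX.1 Kontext [pro]": 10,
    "FLUX.2 Pro": 20,
    "Gemini 3 (Nano Banana Pro)": 40,
}

for model, cost in cost_per_generation.items():
    print(f"{model}: {monthly_credits // cost} generations per month")
```

In other words, a single Gemini 3 generation costs the same as forty Firefly generations, which is why picking the default carefully matters.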

When to Use What

Gemini 2.5 (Nano Banana) occupies a middle position in the model hierarchy. It performs well when Firefly struggles with precise prompt interpretation, particularly for complex, multi-part instructions. This model also excels at maintaining consistent subject appearance across multiple variations.

FLUX.1 Kontext [pro] specialises in contextual integration. It analyses existing scenes to match perspective, lighting, and colour accurately. When adding new elements to complex photographs, this model provides the most seamless integration, making additions appear native to the original image.

FLUX.2 Pro elevates realism significantly. It generates outputs at higher resolution (approximately 2K-class) and demonstrates particular strength with textures. Areas that typically present challenges, such as skin, hair, and hands, appear notably more natural. For portrait and lifestyle photography requiring professional polish, the 20-credit investment may be justified.

Gemini 3 (Nano Banana Pro) represents the premium tier at 40 credits per generation. This "4K-class" option addresses one of Firefly's primary limitations: text rendering. When projects require legible signage, product labels, or user interface elements, Nano Banana Pro delivers the necessary clarity.

A Practical Approach to Model Selection

  1. Default to Firefly (1 credit) for standard edits, cleanup tasks, and basic extensions

  2. Upgrade to Gemini 2.5 (10 credits) when improved prompt interpretation or likeness consistency is required

  3. Select FLUX.1 Kontext (10 credits) when lighting and perspective matching are priorities

  4. Deploy FLUX.2 Pro (20 credits) when realism and texture quality are essential

  5. Reserve Gemini 3 (40 credits) for situations requiring exceptional text clarity and fine detail

The guiding principle is straightforward: begin with the most economical option and upgrade only when project requirements justify the additional cost.