Come on Adobe 🙏🏻 We NEED THIS FEATURE ⚠️

I've put together this short video because I need to ask a favour from anyone who uses Photoshop Camera Raw or Lightroom. There's a fundamental feature that's been missing for years, and it seriously impacts how we edit our images and the results we achieve.

The Missing Piece in AI Masking

The issue centres on masking, specifically the AI-generated masks available in the masking panel. Being able to select a sky or subject with one click is genuinely incredible, but there's a massive gap in functionality. We have no way to soften, blur, or feather those AI masks after they've been created.

Instead, we're left with incredibly sharp, defined outlines that sometimes look like poorly executed cutouts. This makes blending our adjustments naturally into the rest of the image much harder than it needs to be.
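
A feathering control would not be exotic technology either. Conceptually, feathering is nothing more than blurring the greyscale mask itself, which is why a single slider would do the job. Here's a minimal Pillow sketch of the idea, purely illustrative and obviously not Adobe's implementation (the filename is hypothetical):

```python
# Illustrative sketch only: "feathering" a mask is just blurring
# the greyscale mask image so adjustments fade out at the edges.
# Assumes Pillow is installed; "mask.png" is a hypothetical
# black-and-white subject mask exported from any masking tool.
from PIL import Image, ImageFilter

mask = Image.open("mask.png").convert("L")  # greyscale, 0-255

# A single "feather amount" slider would map to a blur radius like this.
feathered = mask.filter(ImageFilter.GaussianBlur(radius=12))
feathered.save("mask_feathered.png")
```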

Years HAVE PASSED

Adobe introduced the masking panel back in October 2021. It changed the way we work and represented a huge step forward. Yet here we are, years later, still without a simple slider to soften mask edges.

If you want to blend an adjustment now, you're often stuck trying to subtract with a large soft brush, using the intersect command with a gradient, or employing other crude workarounds to hide the transition. It feels like excessive work for what should be a standard function.

The Competition Gets It Right

What makes this even more frustrating is seeing other software solve this problem elegantly. The new Boris FX Optics 2026 release includes AI masking controls where a single slider softens and blurs the mask outline, and it works incredibly well. Luminar has been offering this functionality for quite a while too.

These tools understand that a mask is only as good as its edges. When the competition provides ways to feather and refine AI selections, the absence of this feature in Adobe's ecosystem feels glaringly obvious.

Adobe's Strengths and Opportunities

Don't get me wrong. I appreciate that Adobe constantly pushes boundaries. We've witnessed tremendous growth over recent years, with developments from third-party AI platforms like Google's Gemini, Black Forest Labs with Flux, and Topaz Labs, alongside a steady stream of emerging models. It's an exciting time to be a creator.

But I wish Adobe would take a moment to polish what we already have. Adding flashy new features is great, but refining the core workflows we use every single day would be a massive leap forward for all of us.

How You Can Help

Rather than simply complaining about this issue, I've created a feature request post in the Adobe forums. It's been merged with an existing thread on the same topic, which actually helps consolidate our voices into one place.

Here's what I need you to do: click the link below to visit the post and give it an upvote by clicking or tapping the counter number in the upper left. If we can get enough visibility on this, Adobe might finally recognise how much the community wants and needs this feature.

( LINK )

I believe refining existing tools is just as important as inventing new ones. Thank you for taking the time to vote. It really does make a difference when we speak up together.

What Are Those Mystery * and # Symbols in Photoshop??? 🤔

If you spend any amount of time in Adobe Photoshop, you become very familiar with the document tab at the top of your workspace. It tells you the filename and the current zoom level.

But sometimes, little cryptic symbols appear next to that information. Have you ever looked up and wondered, "Why is there a random hashtag next to my image name?" or "What does that little star mean?"

Nothing is broken. These symbols are just Photoshop's way of giving you a quick status update on your file and its colour management, without you needing to dig through menus.

What These Symbols Tell You

The symbols represent:

  • The save state of your document

  • Whether it has a colour profile attached

  • Whether the document's profile differs from your working space

Here is a quick guide to decoding those little tab hieroglyphics.

1. The Asterisk After the Filename ("Save Me!" Star)

What it looks like: … (RGB/8) *

What it means: An asterisk hanging right off the end of your actual filename means you have unsaved changes.

When it appears: Photoshop is hypersensitive here. The star will appear if you:

  • Move a layer one pixel

  • Brush a single dot onto a mask

  • Simply toggle a layer's visibility

  • Do pretty much anything

It's a gentle reminder that the version on screen is different from the version saved on your hard drive. If the computer crashed right now, you would lose that work.

The fix: Press Cmd+S (Mac) or Ctrl+S (Windows). The moment you successfully save the file, that little star will disappear because Photoshop now considers the document "clean" again.

2. The Asterisk Inside the Parentheses ("Profile Difference" Star)

What it looks like: … (RGB/8*)

What it means: This is a different symbol in a different spot. If the star is tucked inside the parentheses next to the bit depth (the 8 or 16), it's no longer talking about unsaved work but about colour management.

In current Photoshop versions, an asterisk here generally means the document's embedded colour profile does not match your working RGB space. For example, your default working space is sRGB, but the image you opened is tagged with Adobe RGB (1998). In other words, the document is "speaking" a slightly different colour language than your default workspace.

Should you worry?

  • Usually, no. As long as you keep the embedded profile and your Colour Settings are sensible, Photoshop can still display the colours accurately even if the document profile and working space are different.

  • It's worth paying attention, though, if you're planning to combine several images into one document. You'll want a consistent profile for predictable colour when you paste, convert or export.
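
If you script that kind of housekeeping outside Photoshop, the same rule applies: convert everything to a single profile before you combine. Here's a rough Pillow sketch (the filename is hypothetical; ImageCms performs the actual conversion):

```python
# Sketch: convert an Adobe RGB-tagged image to sRGB before
# combining it with sRGB assets. Hypothetical filename; assumes
# Pillow is installed with LittleCMS support (the default).
import io
from PIL import Image, ImageCms

img = Image.open("adobe_rgb_photo.jpg")
icc = img.info.get("icc_profile")  # embedded profile bytes, if any

if icc:
    src = ImageCms.ImageCmsProfile(io.BytesIO(icc))
    dst = ImageCms.createProfile("sRGB")
    # Re-map the pixel values so the image looks the same in sRGB.
    img = ImageCms.profileToProfile(img, src, dst)
    img.save("photo_srgb.jpg")
```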

3. The Hash Symbol # ("Untagged" Image)

What it looks like: … (RGB/8#)

What it means: If you see the hash/pound/hashtag symbol inside the parentheses, it means the image is Untagged RGB. There's no embedded colour profile at all, so Photoshop has no explicit instructions telling it how those RGB numbers are supposed to be interpreted.

Why this happens: This is very common with:

  • Screenshots

  • Many web images

  • Older files where metadata was stripped out

When Photoshop opens an untagged image, it has to assume a profile based on your Colour Settings (typically your RGB working space, often sRGB by default), which may or may not match how the file was originally created.
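
You can confirm this outside Photoshop too. An embedded profile is just metadata in the file, so a quick Pillow check is the scripted equivalent of spotting the # in the tab (the filename is hypothetical):

```python
# Sketch: detect an untagged image, i.e. one with no embedded
# ICC profile. Hypothetical filename; assumes Pillow is installed.
from PIL import Image

img = Image.open("screenshot.png")

if img.info.get("icc_profile") is None:
    print("Untagged: no embedded colour profile; apps must assume one.")
else:
    print("Tagged: an embedded profile tells apps how to read the numbers.")
```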

Should you worry?

  • If colour accuracy is critical (printing, branding, matching other assets), yes, you should pay attention to that #. Different assumptions about the profile can easily lead to differences in appearance between systems.

  • You can fix this by going to Edit > Assign Profile and choosing the correct profile. For many web-style images, assigning sRGB is a sensible starting point, but be aware that assigning the wrong profile will change how the image looks, so use it when you have a good idea of the original intent.
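
If you have a whole folder of untagged web images, the scriptable analogue of Edit > Assign Profile is embedding a profile on save. A hedged Pillow sketch (filenames hypothetical) that, like Assign Profile, relabels the numbers without changing them:

```python
# Sketch: embed an sRGB profile in an untagged image on save.
# This labels the data without altering pixel values, much like
# Edit > Assign Profile. Hypothetical filenames; assumes Pillow.
from PIL import Image, ImageCms

img = Image.open("untagged_web_image.jpg")

srgb = ImageCms.ImageCmsProfile(ImageCms.createProfile("sRGB"))
img.save("tagged_web_image.jpg", icc_profile=srgb.tobytes())
```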

Summary Cheat Sheet

(RGB/8) *

  • This document has unsaved changes

  • Save the file and the star will disappear

(RGB/8*)

  • There's a colour-profile difference or related colour-management status

  • Typically means the document's profile is not the same as your current working RGB space

(RGB/8#)

  • The image is Untagged RGB, with no embedded colour profile

  • Photoshop has to assume a profile based on your settings

Choosing the Right AI Model in Photoshop: A Credit-Smart Guide

If you've opened Photoshop recently, you've likely noticed that Generative Fill has received a significant upgrade. The platform now offers multiple AI models to choose from, each with distinct capabilities. However, there's an important consideration: these models vary considerably in their generative credit costs.

Understanding the Credit Structure

Adobe's proprietary Firefly model requires only 1 credit per generation, making it the most economical option. The newer partner models from Google (Gemini) and Black Forest Labs (FLUX), however, are classified as premium features and consume credits at a substantially higher rate. Depending on the model selected, you can expect to use between 10 and 40 credits per generation.

For users looking to maximize their monthly credit allocation, selecting the appropriate model for each task becomes an essential consideration.

Firefly: Your Go-To Workhorse (1 Credit)

Firefly serves as the default option and remains the most practical choice for everyday tasks. At just 1 credit per generation, it offers excellent efficiency for routine editing work. Whether you need to remove unwanted objects, extend backgrounds, or clean up imperfections, Firefly handles these tasks effectively.

Additionally, it benefits from full Creative Cloud integration, Adobe's commercial-use guarantees, and Content Credentials support. For standard production workflows, it's difficult to find a more cost-effective solution.

The Premium Players

The partner models represent a significant increase in cost, but they also deliver enhanced capabilities. Adobe operates these models on external infrastructure, which accounts for their higher credit requirements. These models excel at handling complex prompts, challenging lighting scenarios, and situations requiring exceptional realism or fine detail.

The credit costs break down as follows:

  • Gemini 2.5 (Nano Banana): 10 credits

  • FLUX.1 Kontext [pro]: 10 credits

  • FLUX.2 Pro: 20 credits

  • Gemini 3 (Nano Banana Pro): 40 credits

All of these models draw from the same credit pool as Firefly, but they deplete it considerably faster.
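
To make that concrete, here is the arithmetic for a hypothetical 1,000-credit pool. Actual monthly allocations vary by Creative Cloud plan, so treat the pool size purely as an example:

```python
# How far a hypothetical 1,000-credit pool stretches per model.
# The pool size is an assumption; the per-generation costs are
# the figures listed above.
COST_PER_GENERATION = {
    "Firefly": 1,
    "Gemini 2.5 (Nano Banana)": 10,
    "FLUX.1 Kontext [pro]": 10,
    "FLUX.2 Pro": 20,
    "Gemini 3 (Nano Banana Pro)": 40,
}

POOL = 1_000  # hypothetical monthly allocation

for model, cost in COST_PER_GENERATION.items():
    print(f"{model}: {POOL // cost} generations")
# Firefly: 1000 | Gemini 2.5: 100 | FLUX.1: 100 | FLUX.2: 50 | Gemini 3: 25
```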

When to Use What

Gemini 2.5 (Nano Banana) occupies a middle position in the model hierarchy. It performs well when Firefly struggles with precise prompt interpretation, particularly for complex, multi-part instructions. This model also excels at maintaining consistent subject appearance across multiple variations.

FLUX.1 Kontext [pro] specialises in contextual integration. It analyses existing scenes to match perspective, lighting, and colour accurately. When adding new elements to complex photographs, this model provides the most seamless integration, making additions appear native to the original image.

FLUX.2 Pro elevates realism significantly. It generates outputs at higher resolution (approximately 2K-class) and demonstrates particular strength with textures. Areas that typically present challenges, such as skin, hair, and hands, appear notably more natural. For portrait and lifestyle photography requiring professional polish, the 20-credit investment may be justified.

Gemini 3 (Nano Banana Pro) represents the premium tier at 40 credits per generation. This "4K-class" option addresses one of Firefly's primary limitations: text rendering. When projects require legible signage, product labels, or user interface elements, Nano Banana Pro delivers the necessary clarity.

A Practical Approach to Model Selection

  1. Default to Firefly (1 credit) for standard edits, cleanup tasks, and basic extensions

  2. Upgrade to Gemini 2.5 (10 credits) when improved prompt interpretation or likeness consistency is required

  3. Select FLUX.1 Kontext (10 credits) when lighting and perspective matching are priorities

  4. Deploy FLUX.2 Pro (20 credits) when realism and texture quality are essential

  5. Reserve Gemini 3 (40 credits) for situations requiring exceptional text clarity and fine detail

The guiding principle is straightforward: begin with the most economical option and upgrade only when project requirements justify the additional cost.
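
If it helps to see that escalation spelled out, here is one way to encode it. The flags and their ordering are my own reading of the guide above, not an official Adobe decision tree:

```python
# Sketch: the escalation heuristic above as a function. The flag
# names and priority order are assumptions, not an Adobe API.
def pick_model(needs_text_clarity: bool = False,
               needs_realism: bool = False,
               needs_scene_matching: bool = False,
               needs_prompt_precision: bool = False) -> str:
    if needs_text_clarity:
        return "Gemini 3 (Nano Banana Pro)"  # 40 credits
    if needs_realism:
        return "FLUX.2 Pro"                  # 20 credits
    if needs_scene_matching:
        return "FLUX.1 Kontext [pro]"        # 10 credits
    if needs_prompt_precision:
        return "Gemini 2.5 (Nano Banana)"    # 10 credits
    return "Firefly"                         # 1 credit

print(pick_model(needs_realism=True))  # -> FLUX.2 Pro
```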

ADOBE just changed ChatGPT FOREVER 💥 But Why???

Adobe has just rolled out one of the most significant updates we've seen in a while by integrating Photoshop, Express, and Acrobat directly into ChatGPT. And here's the kicker: these features are currently free to use, no Creative Cloud subscription required.

Why This Matters

This is a fascinating strategic play. ChatGPT has roughly 800 million active users, many of whom recognize the Photoshop brand but find the actual software intimidating or prohibitively expensive. By embedding these tools inside a chat interface where people already feel comfortable, Adobe is dismantling that barrier to entry. They're essentially converting casual users into potential creators through familiarity and ease of use.

What the Integration Actually Does

The capabilities are surprisingly robust for a chat-based tool. You can upload an image and ask Photoshop to handle basic retouching or apply artistic styles. The masking feature is particularly impressive, intelligently selecting subjects without manual input. Adobe Express generates social media posts or birthday cards from simple text prompts, while the Acrobat integration handles PDF merging and organization without leaving the conversation.

The Bigger Picture

Make no mistake: this isn't replacing the full desktop software. It's a streamlined, accessible version optimized for speed and convenience. Users who need granular control or heavy processing power will still require the complete applications.

This is a textbook freemium strategy. Adobe is giving users a taste of their engine, creating a natural upgrade path. Once someone hits the limitations of the chat interface, they're just one click away from the full experience. It's a smart way to widen the funnel and meet users exactly where they are.


Photoshop Compositing Hack with Harmonize

If you use Photoshop for compositing, you’ve probably tried out the Harmonize feature currently in the Photoshop beta. It’s a great addition when blending objects into a scene, adjusting color and adding shadows to make everything look more natural. The problem is, Harmonize isn’t really designed for people: it tends to break down on human subjects.

But I’ve found a handy workaround that makes Harmonize incredibly useful when compositing people, particularly when it comes to the hardest part: creating realistic contact and cast shadows.

Why Shadows Are the Hardest Part

When you’re compositing, matching colors is one thing, but making sure the subject looks truly grounded in the scene is another. Shadows, both the contact shadows right under the feet and the cast shadows stretching into the scene, are what really sell the effect. Without them, the subject looks like they’re floating.

Testing Harmonize on People

Harmonize works brilliantly on objects, but when applied to a person it usually ruins detail and texture. For example, in a composite with a Viking figure photographed in the studio, Harmonize messed up the fine detail in the image but still attempted to generate shadows. Not perfect, but promising.

The Workaround: Adding a Fake Light Source

Here’s where the trick comes in. By adding a fake light source into the background before running Harmonize, the results improve dramatically.

  • Duplicate your background layer.

  • With a soft white brush, paint a bright “light spot” in the sky area.

  • Run Harmonize again with your subject layer active.

This extra light influences how Harmonize interprets the scene and produces stronger, more believable contact and cast shadows.

Keeping Only the Shadows

Of course, we don’t want the strange coloring Harmonize often applies to people. To fix this:

  1. Rasterize the Harmonize layer to make it editable.

  2. Apply the layer mask so only the visible result remains.

  3. Add a black layer mask to hide everything.

  4. With a white brush, paint back just the shadows from the Harmonize layer.

Now you have realistic shadows under your subject, without losing the original detail and color of the person.
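
Under the hood, that masking step is a straightforward composite: Harmonize's pixels where the mask is white, the original pixels everywhere else. A quick Pillow sketch of the same idea (all three filenames are hypothetical, and the images are assumed to be the same size):

```python
# Sketch: keep only the Harmonize shadows using a mask, the same
# idea as the black mask + white brush steps above. Hypothetical
# filenames; all three images must share the same dimensions.
from PIL import Image

original = Image.open("composite_original.png").convert("RGB")
harmonized = Image.open("harmonize_result.png").convert("RGB")
shadow_mask = Image.open("shadow_mask.png").convert("L")  # white = shadows

# White areas of the mask take Harmonize's pixels; black areas
# keep the person's original detail and color.
result = Image.composite(harmonized, original, shadow_mask)
result.save("composite_with_shadows.png")
```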

Bonus Tip: Dealing with Flyaway Hair

Compositing hair can be a nightmare. Instead of spending hours trying to cut out every strand, I’ve had success using Generative Fill.

  • Make a quick selection of the hair area.

  • In Generative Fill (Firefly Image 3 model), type something like “long brown wavy hair blowing in the wind”.

  • Photoshop generates natural-looking variations that save a ton of time.

Final Thoughts

Harmonize might not be built for people yet, but with this compositing hack it becomes a powerful tool for one of the trickiest parts of the job — shadows. Add in the Generative Fill trick for hair, and you’ve got a much faster way to create composites that look believable.

Give it a try and see how it works in your own projects.

Editing a Photo in Lightroom + Photoshop ... on an iPad

Not too long ago, I never would have considered editing my photos on an iPad. It always felt like something I should save for my desktop. But things have changed. Both Lightroom and Photoshop on the iPad have improved massively, and these days I often use them when traveling. More and more, this mobile workflow is becoming a real option for photographers.

In this walkthrough, I’ll show you how I edited an image completely on the iPad, starting in Lightroom, jumping over to Photoshop when needed, and then finishing off with a print.

Starting in Lightroom on the iPad

The photo I worked on was taken with my iPhone. The first job was the obvious one: straightening the image. In Lightroom, I headed to the Geometry panel and switched on the Upright option, which immediately fixed the horizon.

Next, I dealt with a distraction in the bottom left corner. Using the Remove Tool with Generative AI switched on, I brushed over the wall that had crept into the frame. Lightroom offered three variations, and the second one was perfect.

With those fixes made, I converted the photo to black and white using one of my own synced presets. A quick tweak of the Amount slider gave me just the right level of contrast.

Masking and Sky Adjustments

The sky needed attention, so I created a Select Sky mask. As usual, the AI selection bled slightly into the hills, so I used a Subtract mask to tidy things up. It wasn’t perfect, but it was good enough to move forward.

From there, I added some Dehaze and Clarity to bring detail back into the clouds. A bit of sharpening pushed the image further, but that also revealed halos around a distant lamppost. At that point, I knew it was time to send the photo into Photoshop.

Fixing Halos in Photoshop on the iPad

Jumping into Photoshop on the iPad takes a little getting used to, but once you know where things are, it feels very familiar.

To remove the halos, I used the Clone Stamp Tool on a blank layer set to Darken blend mode. This technique is brilliant because it only darkens areas brighter than the sample point. With a bit of careful cloning, the halos disappeared quickly.
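
If you're wondering why that works: the Darken blend mode is simply a per-pixel minimum of the two layers, as this tiny Pillow sketch illustrates (filenames are hypothetical):

```python
# Sketch: Darken blend mode is a per-pixel minimum. Cloned pixels
# show only where they are darker than the base image, which is
# why bright halos get covered while darker detail is untouched.
from PIL import Image, ImageChops

base = Image.open("landscape.png").convert("RGB")
cloned = Image.open("clone_layer.png").convert("RGB")  # same size

result = ImageChops.darker(base, cloned)
```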

I then added a subtle “glow” effect often used on landscapes. By duplicating the layer, applying a Gaussian Blur, and changing the blend mode to Soft Light at low opacity, the image gained a soft, atmospheric look.
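
That recipe translates directly to code if you ever want it in a batch workflow. Here's a Pillow sketch of the same duplicate, blur, Soft Light steps; the radius and opacity are just starting points:

```python
# Sketch: the duplicate -> Gaussian Blur -> Soft Light glow.
# Radius and opacity are starting points, not fixed values.
from PIL import Image, ImageChops, ImageFilter

img = Image.open("landscape.png").convert("RGB")

blurred = img.filter(ImageFilter.GaussianBlur(radius=25))
glow = ImageChops.soft_light(img, blurred)      # Soft Light blend
result = Image.blend(img, glow, alpha=0.3)      # ~30% layer opacity
result.save("landscape_glow.png")
```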

Back to Lightroom and Printing

With the edits complete, I sent the image back to Lightroom. From there it synced seamlessly across to my desktop, but the important point is that all of the editing was done entirely on the iPad.

Before printing, I checked the histogram and made some final tweaks. Then it was straight to print on a textured matte fine art paper. Once the ink settled, the result looked fantastic — no halos in sight.

Final Thoughts

I’m not suggesting you should abandon your desktop for editing. Far from it. But the iPad has become a powerful option when you’re traveling, sitting in a café, or simply want to work away from your desk.

This workflow shows what’s possible: you can straighten, retouch, convert to black and white, make sky adjustments, refine details in Photoshop, and even prepare a final print — all from the iPad. And of course, everything syncs back to your desktop for finishing touches if needed.

Exciting times indeed.

AI Just Changed How We ENHANCE EYES in PHOTOSHOP 💥

Two Ways to Add Detail to Dark Eyes in Photoshop

If you’ve ever edited a portrait where the eyes are so dark there’s no detail to recover, you’ll know how tricky it can be. Brightening them often makes things look worse, leaving the subject with flat, lifeless eyes.

In the video above, I walk you through two powerful techniques that solve this problem:

  • A reliable method using Photoshop’s traditional tools

  • A newer approach that uses AI to generate realistic iris detail

Here’s a quick overview of what you’ll see in the tutorial.

The Traditional Photoshop Method

This approach has been in my toolkit for years. It doesn’t try to recover what isn’t there. Instead it creates the impression of natural iris texture.

By adding grain, applying a subtle radial blur, and carefully masking the effect, you can fake detail that looks convincing. A touch of colour adjustment finishes the look, leaving you with eyes that feel alive instead of flat.
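
For the technically curious, a spin-style radial blur can be approximated by averaging small rotations around the pupil, which is what streaks the grain into fibre-like texture. This numpy/Pillow sketch is a crude stand-in for Photoshop's Radial Blur filter (the crop filename and all the amounts are hypothetical):

```python
# Sketch: fake iris texture = grain + a crude spin blur made by
# averaging small rotations around the pupil. A rough stand-in
# for Photoshop's Radial Blur (Spin); filename and amounts are
# hypothetical. Assumes numpy and Pillow are installed.
import numpy as np
from PIL import Image

iris = Image.open("iris_crop.png").convert("L")  # crop centred on pupil

# 1. Grain: random noise stands in for the missing iris detail.
arr = np.asarray(iris, dtype=np.float32)
noisy = np.clip(arr + np.random.normal(0, 25, arr.shape), 0, 255)
grain = Image.fromarray(noisy.astype(np.uint8))

# 2. Spin blur: average the grain over small rotations so it
# streaks into iris-like fibres around the centre.
cx, cy = grain.width // 2, grain.height // 2
stack = [np.asarray(grain.rotate(a, center=(cx, cy)), dtype=np.float32)
         for a in np.linspace(-4, 4, 9)]
spin = Image.fromarray(np.mean(stack, axis=0).astype(np.uint8))
spin.save("iris_texture.png")
```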

It’s a manual process but it gives you full control, and the result is surprisingly realistic.

The AI-Powered Method

Photoshop’s Generative Fill takes things in a different direction. With a simple selection around the iris and a prompt like “brown iris identification pattern”, Photoshop can generate natural-looking iris textures, the kind of fine patterns you’d expect to see in a close-up eye photo.

Once the AI has created the base texture, you can enhance it further using Camera Raw:

  • brighten the iris

  • increase contrast, clarity, and texture

  • even add a little extra saturation

Add a subtle catchlight and the transformation is incredible. The eyes go from lifeless to full of depth and realism in seconds.

Why These Techniques Matter

Eyes are the focal point of most portraits. If they’re dark and featureless, the whole image suffers.

These two techniques, one traditional and one modern, give you reliable options to fix the problem. Whether you want the hands-on control of Photoshop’s tools or the speed and realism of AI, you’ll be able to bring that essential spark back into the eyes.

🚨 Adobe’s New Cloud Selection Technology: Hype or Reality?

One of the areas Adobe has been relentlessly improving in both Photoshop and Lightroom is selections. Over the years, the tools have become smarter, faster, and more automated. Today, we can make incredibly intricate selections with just a single click. At least, that is what Adobe says.

But if you are anything like me, you will know that demo images shown on stage or in marketing videos always look perfect. They are the kind of photos you would expect to work well in a demo: clean backgrounds, well defined edges, controlled lighting.

That is not real life.
So the question is: what happens when we use these tools on our own photos?

Recently, I tested Adobe’s new Cloud Detailed Results option for Select Subject using nothing more than some quick shots I had taken on my iPhone. The results were genuinely impressive.

Device vs. Cloud: What Is the Difference?

When you click Select Subject in Photoshop, you now have a choice:

  • Device – the selection is processed locally on your computer.

  • Cloud Detailed Results – the file is sent to Adobe’s servers, where the AI analyzes the image and sends back a more refined selection.

The device option is fast but often rough around the edges. The cloud option takes a little longer, but the results are noticeably more accurate.

Putting It to the Test

To really see the difference, I used a handful of everyday photos. Nothing staged, just casual iPhone shots. Here are a few examples:

Motorbike Portrait

With the device option, edges around wheels, helmets, and clothing looked rough and patchy. Switching to the cloud option instantly cleaned things up. Spokes, frames, and even tiny gaps were handled beautifully.

Tree with Branches

This was the kind of subject that used to take several different techniques combined. The cloud option managed to capture the branches and trunk in one go. Yes, there were a few areas that could be tidied up with a brush, but the heavy lifting was done.

Bicycle Spokes

Ordinarily, this is a nightmare for selections. Yet the cloud option picked out individual spokes, valves, and gaps between them. Minimal cleanup needed.

Setting Your Default

If you want Photoshop to always use the cloud option, head to Preferences > Image Processing. Under Select Subject and Remove Background, choose Cloud Detailed Results. From then on, every time you use those tools, Photoshop will default to the cloud method unless you manually switch.

Final Thoughts

I will admit I was skeptical. On demo images these things always look good, but I did not expect my casual iPhone shots to stand up so well. The results from Cloud Detailed Results were consistently sharper, cleaner, and more accurate than anything the device option gave me.

And just to clear up a common question: this does not use your generative AI credits. It is simply sending your image to Adobe’s servers for analysis and returning a selection.

Selections have always been one of the most tedious parts of editing. This new technology does not just save time, it also frees up creative energy. Instead of fighting with edges and masks, you can focus on the fun part: being creative.

Exciting times ahead, and if this is what Adobe is offering now, I cannot wait to see how much better it gets.

The Remove Prompt in Photoshop

The NEW Remove Button in Photoshop, which I mentioned in an earlier post where I shared a video, was added to prevent what are referred to as "Hallucinations": cases where, instead of removing something, Photoshop adds in a random object.

This works incredibly well BUT it doesn't give you 3 variations to choose from. So (and this is new), to use the EXACT SAME technology and still get variations, make a selection and then type "Remove" in the Contextual Task Bar.

This WILL remove whatever you have selected but now gives you 3 variations to choose from.

Note: Even though this is a removal, because it gives you 3 variations, credits are deducted.