Photoshop

Fix IMPOSSIBLE Backgrounds Instantly ( Lightroom + Photoshop )

Recently, Steven Gotz, a member of the Photography Community on SKOOL ( LINK ), sent over a brilliant RAW file of a condor. Stunning subject, great light, one problem: a massive fence running right through the background.

Rather than leave it on the shelf, I figured it was the perfect excuse to put the latest updates in Lightroom and Photoshop Beta through their paces. What would have taken ages with the Clone Stamp tool a couple of years ago can now be sorted in seconds. Here's exactly how I did it, using two different workflows.

Workflow 1: Photoshop Beta with Firefly Image 5

This is the quickest route right now, and the results are genuinely impressive.

The key is using the new Firefly Image 5 (Preview) model inside Photoshop Beta. It's been built specifically for editing while preserving detail, which matters a lot when you're dealing with complex textures like feathers and rocky backgrounds.

  1. From Lightroom to Photoshop Beta. Right-click the image in Lightroom and choose Edit In > Adobe Photoshop Beta.

  2. Select All. Once you're in Photoshop, go to Select > All. This gives the AI the full context of the frame before you do anything.

  3. Switch to Firefly Image 5. Click Generative Fill in the contextual taskbar. Here's the bit that matters: don't use the standard model. Switch it to Firefly Image 5 (Preview) from the dropdown.

  4. The prompt. This model needs a prompt to work, unlike some of the others. I kept it simple: "remove the fence from this picture."

  5. Refine the detail. The AI did a great job on the background, but because Firefly Image 5 currently outputs at 2K, the fine detail around the bird's eye and feathers was slightly softer than the original RAW. The fix is straightforward: use the Object Selection Tool to select the bird and the rock, then fill that area on the layer mask with black. That reveals the sharp original bird while keeping the AI-cleared background intact.

Workflow 2: Lightroom to Firefly Web

Not on the Photoshop Beta? No problem. You can get to the same place via Lightroom's sharing feature.

  1. Share to Firefly. In Lightroom, hit the Share button (top right) and select Firefly: Edit an image. This opens your browser and drops the photo straight into the Adobe Firefly web interface.

  2. Settings and generate. Select Firefly Image 5, bump the resolution to 2K, use the same prompt ("remove the fence from this picture"), and hit generate.

  3. Back to Photoshop. Download the cleaned image, go back to Lightroom, and open the original file in the regular version of Photoshop.

  4. Stack and align. Use File > Place Embedded to bring the Firefly-cleaned version in on top of your original. Rasterise the top layer, select both layers, then go to Edit > Auto-Align Layers to make sure everything lines up perfectly.

  5. The masking trick. Same principle as Workflow 1: use the Object Selection Tool to select the bird and the rock, then hold Option (Mac) or Alt (Windows) and click the mask icon. This hides the AI version of the bird and brings back the sharp, high-detail original underneath.

Why the masking step matters

This is the part I think is really important. It's not about letting AI take over the whole image. It's about using it to fix a specific problem, in this case the background, while keeping the actual subject exactly as it was captured in the RAW file. The integrity of the original is what you're protecting.
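The black-mask trick in both workflows boils down to simple per-pixel blending: white on the mask shows the top (AI-cleaned) layer, black reveals the original underneath. Here's a minimal Python sketch of that idea, using made-up grayscale values rather than real image data:

```python
# Minimal sketch of how a layer mask composites two versions of an image.
# Pixel and mask values are floats in 0..1; mask 1.0 (white) shows the top
# layer, 0.0 (black) reveals the bottom layer underneath.

def composite(top_layer, bottom_layer, mask):
    """Blend two equal-length pixel lists through a mask."""
    return [m * t + (1 - m) * b
            for t, b, m in zip(top_layer, bottom_layer, mask)]

ai_background = [0.2, 0.2, 0.2, 0.2]   # AI-cleared, fence-free background
original      = [0.2, 0.9, 0.9, 0.2]   # sharp bird occupies the middle pixels
bird_mask     = [1.0, 0.0, 0.0, 1.0]   # painted black (0) over the bird

result = composite(ai_background, original, bird_mask)
print(result)  # the middle pixels come straight from the sharp original
```

That's all filling the mask with black over the bird does: for those pixels, the blend weight shifts entirely to the original RAW detail.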

Have a look through your archives. Chances are there are shots you wrote off because of something in the background. It might be worth giving them another look.

NEW πŸ’₯ Photoshop's One-Click Auto Distraction Removal

Adobe has just dropped a seriously powerful update to the Remove Tool in the Photoshop Public Beta (version 27.6.0), and it's a total game-changer for cleaning up your photos. It can now automatically scan your entire image, identify distractions across 26 different categories, and let you remove them with a single click.

Here is a quick look at how it works and how you can start using it to save yourself hours of manual cloning and healing.

What is the New "General Distractions" Feature?

Previously, the Remove Tool had specific buttons for "Wires and Cables" or "People." This new update introduces General Distractions. It uses generative AI to find things like trash cans, signs, vehicles, and even stray animals that might be cluttering up your shot.

How to Use It: A 3-Step Tutorial

Before you start, make sure you have GPU hardware acceleration turned on in your Photoshop settings (Preferences > Performance) to ensure the tool runs smoothly.

1. Select the Remove Tool

Head over to your toolbar and select the Remove Tool. In the options bar at the top, make sure Sample All Layers is ticked and, most importantly, check the Create New Layer box. This acts as a fail-safe, putting all your removals on a separate layer so you can easily bring things back if you change your mind later.

2. Find Your Distractions

In the options bar, click on the Find Distractions dropdown and choose General Distractions, then click Find. Photoshop will take a few moments to scan the image. When it's finished, it will highlight potential distractions with colour-coded overlays.

The cool part? The list of categories it shows you is dynamic. It won't show you all 26 categories; it only lists the ones it actually found in your specific photo -- like "Vehicles," "Animals," or "Urban Elements."

3. Refine and Remove

You have total control over what stays and what goes:

  • Toggle Categories: You can untick specific categories in the dropdown if Photoshop picked up something you actually want to keep (like a cool cloud it mistook for a "light diffusing element").

  • Manual Overwrite: Use the plus (+) or minus (-) brush icons in the options bar to manually add areas to be removed or protect areas you want to save.

  • The Big Reveal: Once you're happy with the selection, click the Tick icon. Photoshop will work its magic, and the distractions will vanish, seamlessly filling in the background.
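Conceptually, the category toggles act as a filter over whatever the scan found: only detections in still-ticked categories go on to the remove step. This is a hypothetical Python sketch of that idea -- the category names, regions, and data shapes here are invented for illustration, not Adobe's actual internals:

```python
# Hypothetical sketch of the category-toggle logic. Detections are
# (category, region) pairs; only ticked categories get removed.

def regions_to_remove(detections, enabled_categories):
    """Keep only detections whose category is still ticked."""
    return [region for category, region in detections
            if category in enabled_categories]

detections = [
    ("Vehicles", (120, 40, 300, 180)),        # (x, y, width, height)
    ("Animals", (50, 400, 80, 60)),
    ("Urban Elements", (600, 350, 90, 200)),
]

# The user unticks "Animals" to keep the pigeons in shot.
enabled = {"Vehicles", "Urban Elements"}
print(regions_to_remove(detections, enabled))
```

Unticking a category simply drops its regions from the removal pass, which is why you can rescue a false positive without redoing the scan.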

Why This Matters

I've been testing this on complex street scenes and busy beach shots, and the results are mind-blowing. It handles everything from removing pigeons at someone's feet to rebuilding stone walls where a trash can used to be. It's not just a time-saver; it's doing work that used to require advanced cloning skills in just a few seconds.

Since this is currently in the Public Beta, if you run into anything unexpected, be sure to use the "Feedback" icon in the top right of Photoshop to let Adobe know. The more feedback we give them now, the better the final version will be.

The Photoshop Zoom Setting You NEED to Change βœ…

Whether you are just starting out with Photoshop or you have been using it for years, there is one specific setting that can occasionally make it feel like the software is behaving rather strangely. I wanted to share a quick tip about the Zoom tool that might just save you a bit of frustration.

The Mystery of the Shifting Zoom

Have you ever tried to zoom in on a specific detail, only for that area to suddenly jump to the middle of your screen? Usually, when you click with the Zoom tool, you expect the image to get larger exactly where your cursor is sitting. However, there is a setting that changes this behaviour entirely.

If your image keeps repositioning itself every time you click to magnify, it is likely because of a single tick box in your preferences.

How to Fix It

Depending on whether you are using a Mac or Windows, the menu location is slightly different, but the setting itself is the same:

  • On Mac: Go to the Photoshop menu, then Settings, and select Tools.

  • On Windows: Go to the Edit menu, then Preferences, and select Tools.

Look for the option labelled Zoom Clicked Point to Centre.

If this is ticked, Photoshop will take the exact point you clicked and move it to the very centre of your workspace as it zooms in. If you find this distracting, simply uncheck the box. Once you do that, your zoom will behave in the traditional way, staying put exactly where you click.

Why Would You Use It?

You might wonder why this setting even exists if it feels so counter-intuitive at first. It actually comes in quite handy when you are working on very large, high-resolution images or wide landscapes.

If you are trying to inspect a small mark or a bit of sensor dust right in the far corner of a photo, a standard zoom might actually push that detail off the edge of the screen as the image expands. By having "Zoom Clicked Point to Centre" turned on, Photoshop pulls that corner detail right into your main field of view, making it much easier to work on without having to scroll around.

It really comes down to personal preference. Some people love the control of keeping the image static, while others prefer the software to "hand" them the detail they are looking for by placing it in the middle.

Reality vs Photoshop - Is Faking It Cheating? 🀷‍♂️

Car photography always looks that little bit more dramatic when there's a wet road reflection underneath the vehicle. But what do you do when the road is bone dry? In this guide, I'll walk you through two ways to fake a puddle reflection in Photoshop -- one traditional, one powered by AI -- and then I'll leave you with a question worth thinking about.

Method One: The Manual Approach

Step 1: Select the Car

Start by grabbing the Object Selection tool from the toolbar. In the options bar at the top of the screen, make sure the mode is set to Cloud for the best possible result, then click Select Subject. Photoshop will do a surprisingly good job of selecting the car in just a moment or two.

Step 2: Copy the Car onto Its Own Layer

With your selection active, press Command + J (Mac) or Control + J (Windows) to copy the car up onto a new layer. If you toggle every other layer off, you should see just the isolated car sitting cleanly on a transparent background.

Step 3: Flip It Upside Down

Go to Edit > Transform > Flip Vertical. This flips the car layer to create the basis of your reflection. Now grab the Move tool, hold down Shift (to keep movement perfectly vertical) and drag the flipped car downwards until the tyres of both the original and the reflection are just touching.

If things look slightly off-angle, go to Edit > Free Transform, move your cursor just outside the bounding box until you see the rotation cursor, and give it a gentle nudge until it lines up properly.

Step 4: Add a Black Layer Mask

Rename this layer "Reflection" to keep things tidy. Then, holding down Option (Mac) or Alt (Windows), click the Layer Mask icon at the bottom of the Layers panel. This adds a black mask that hides the layer entirely -- which is exactly what you want for now.

Step 5: Draw the Puddle Shape

Select the Lasso tool and make sure you click directly on the layer mask thumbnail (you should see a white border appear around it, confirming it's active). Now draw a rough, freehand puddle shape beneath the car's tyres -- it doesn't need to be perfect; natural-looking and irregular is actually better here.

Step 6: Fill with White to Reveal the Reflection

Go to Edit > Fill, set the contents to White, and click OK. The reflection will now appear only within the puddle shape you drew.

Step 7: Soften the Edges

Zoom in and you'll notice the puddle edge looks very sharp and unnatural. To fix that, go to Filter > Blur > Gaussian Blur and apply just a small amount -- around 3 pixels is usually enough. This softens the boundary and helps the reflection blend into the ground convincingly.

Finally, you can reduce the opacity of the Reflection layer slightly to make the whole thing look a little more subtle and true to life.

Method Two: Using Adobe Firefly's Generative Fill

If you want a quicker and arguably more realistic result, Photoshop's AI tools can do a remarkable job here.

Step 1: Load the Puddle Selection

Hold Command (Mac) or Control (Windows) and click directly on the layer mask from your first reflection layer. This loads the puddle shape back as an active selection, saving you from having to draw it again.

Step 2: Select the Background Layer

Click on the main image layer, so that Generative Fill works on the background rather than the reflection layer.

Step 3: Run Generative Fill

In the contextual taskbar, click Generative Fill and type a prompt along the lines of: a reflection of car in puddle of water. For the AI model, select Firefly (specifically the Firefly Built and Expand model released in January 2026). If you're on a Creative Cloud Pro account, this won't cost you any credits -- whereas models like Flux or Nano Banana can use anywhere between 20 and 30 credits per generation.

Click Generate.

Step 4: Choose Your Favourite Variation

Firefly will produce three variations for you to compare. Have a look through them and pick the one that looks most convincing. You'll likely notice that the AI does something quite clever: it reflects the sky in the puddle on the far side of the car, just as real water would. Achieving that manually in Photoshop would take considerably more time and effort.

Which Method Should You Use?

For a quick and dirty result, the manual method works well and gives you full control. But for something that genuinely looks like a photograph taken on a wet road, the AI approach is hard to argue with -- particularly because of how naturally it handles the environmental reflections in the water.

A Question Worth Thinking About

Here's something to consider. When photographing that car, there were really two options: bring bottles of water to pour around the car and create a real puddle on the dry road, or add the reflection later in post-production, either manually or with AI.

Both approaches result in a reflection that wasn't originally there. The only difference is when in the process you add it.

So what do you think -- is there a meaningful ethical difference between physically creating something on location and digitally adding it afterwards? When it comes to reflections specifically, does it matter?

Let me know your thoughts in the comments below.

βœ… Photoshop JANUARY 2026 - Everything NEW πŸ’₯

Adobe dropped Photoshop 27.3.0 on the 27th January, and for once it's not just AI hype and features nobody asked for. This update brings some genuinely useful stuff that photographers and editors have been requesting for years.

Camera Raw tools finally join the party

The headline features are two new Adjustment Layers covering three controls: Clarity & Dehaze, and Grain.

If you've ever wanted to use Clarity or Dehaze without opening Camera Raw or converting to a Smart Object, your prayers have been answered. They now work exactly like Curves, Levels or any other adjustment layer. You can mask them, adjust opacity, change blend modes, and they stay fully editable in your PSD.

Clarity is brilliant for adding punch to textures and details in your midtones without blowing out highlights or crushing shadows. Dehaze cuts through atmospheric haze (or adds it if you reverse the slider), and having it as an adjustment layer means you can apply it selectively with a mask.

Grain gets the same treatment. Want to add film-style texture to knock the digital edge off a super-clean file? Chuck a Grain adjustment layer on top, dial it in, and you're done. It's particularly good for black and white work or vintage treatments.

The AI tools are growing up

On the generative side, things have improved quite a bit.

Generative Fill and Generative Expand now output at up to 2K resolution, which means extended canvases and filled areas look far less mushy and hold detail much better. Adobe has also added model selection, so you can pick the Firefly version that best suits what you're doing.

The real game-changer is Reference Image support in Generative Fill. You can now feed Photoshop a reference photo and it'll try to match the lighting, colour and structure when generating new content. This is massive for compositing work or keeping a series of images consistent.

The Remove tool has also been quietly upgraded. It does a much cleaner job removing objects and people, with fewer obvious smears and repetitive patterns. In most cases you'll get a usable result without needing to follow up with Clone Stamp or Healing Brush.

Why this one matters

This isn't a flashy update, but it's the kind that actually changes how you work.

Having Clarity, Dehaze and Grain as proper adjustment layers keeps everything inside Photoshop's layer stack where it belongs. No more jumping between Camera Raw, no more Smart Objects eating up file size, no more destructive edits.

The AI improvements make the generative tools feel less like tech demos and more like something you'd actually use in client work. Higher resolution output and better reference matching mean you can rely on them for real projects, not just Instagram experiments.

If you're on Creative Cloud, the update should already be available. The new adjustment layers live in the standard Adjustments panel alongside everything else. Well worth checking out, especially if you shoot landscapes, architecture or do any kind of composite work.

Photoshop Acting Weird? Fix These 3 Simple Settings βœ…

If Photoshop seems to be behaving oddly when you move images around and zoom in, in this video I show how to easily fix it and take back control …

πŸ™…πŸΌβ€β™‚οΈ How to NEVER forget your Photoshop edits again βœ…

I have lost count of the times I have finished an edit, loved the result, and then completely forgotten how I actually got there.

In this video, I am showing you a simple trick using the Photoshop History Log and AI to create a perfect, step-by-step record of every single move you make.

No more guessing which filter you used or what that specific slider value was; it's like having a digital assistant write your editing recipes for you while you work.

What I cover:

βœ… How to turn on the hidden History Log in Photoshop.
βœ… Exporting your editing steps as a simple text file.
βœ… Using a clever AI prompt to turn that messy log into a clear workflow.
βœ… Why this is a game-changer for your consistency and learning.
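If you'd rather skip the AI step, the exported log is plain text and easy to process yourself. Here's a rough Python sketch of turning it into a numbered recipe -- the log format assumed here (one action per line, indented parameter lines underneath, which is roughly how Photoshop's detailed History Log reads) may need adjusting to match your own export:

```python
# Group top-level History Log actions with their indented parameter lines.
# Assumes the "Detailed" log format: actions flush-left, params indented.

def parse_history_log(text):
    """Turn a History Log text dump into a list of {action, params} steps."""
    steps = []
    for line in text.splitlines():
        if not line.strip():
            continue                               # skip blank lines
        if line.startswith(("\t", " ")):           # parameter line
            steps[-1]["params"].append(line.strip())
        else:                                      # new top-level action
            steps.append({"action": line.strip(), "params": []})
    return steps

log = "Open\nGaussian Blur\n\tRadius: 3 pixels\nCurves\n"
for i, step in enumerate(parse_history_log(log), 1):
    print(f"{i}. {step['action']} {step['params']}")
```

Even this basic version gives you a numbered, repeatable list of every move you made, which is exactly what you'd feed to an AI for a tidier write-up.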

Come on Adobe πŸ™πŸ» We NEED THIS FEATURE ⚠️

I've put together this short video because I need to ask a favour from anyone who uses Photoshop Camera Raw or Lightroom. There's a fundamental feature that's been missing for years, and it seriously impacts how we edit our images and the results we achieve.

The Missing Piece in AI Masking

The issue centres on masking, specifically the AI-generated masks available in the masking panel. Being able to select a sky or subject with one click is genuinely incredible, but there's a massive gap in functionality. We have no way to soften, blur, or feather those AI masks after they've been created.

Instead, we're left with incredibly sharp, defined outlines that sometimes look like poorly executed cutouts. This makes blending our adjustments naturally into the rest of the image much harder than it needs to be.

Years HAVE PASSED

Adobe introduced the masking panel back in October 2021. It changed the way we work and represented a huge step forward. Yet here we are, years later, still without a simple slider to soften mask edges.

If you want to blend an adjustment now, you're often stuck trying to subtract with a large soft brush, using the intersect command with a gradient, or employing other crude workarounds to hide the transition. It feels like excessive work for what should be a standard function.
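For the curious, what's being asked for is conceptually simple: blur the mask so its hard 0/1 edge becomes a gradual ramp. Here's a pure-Python sketch on a made-up 1-D mask -- a real implementation would run a 2-D Gaussian blur over the mask image, but the principle is the same:

```python
# Miniature "feather slider": box-blur a mask so the sharp edge becomes a ramp.

def feather(mask, radius):
    """Box-blur a list of 0..1 mask values with the given radius."""
    out = []
    for i in range(len(mask)):
        lo, hi = max(0, i - radius), min(len(mask), i + radius + 1)
        window = mask[lo:hi]                 # neighbourhood around pixel i
        out.append(sum(window) / len(window))
    return out

hard_edge = [0, 0, 0, 1, 1, 1]               # sharp AI-mask boundary
print(feather(hard_edge, 1))                 # edge now ramps through 1/3, 2/3
```

One slider controlling that radius is all it would take to turn a cutout-looking AI mask into a natural transition.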

The Competition Gets It Right

What makes this even more frustrating is seeing other software solve this problem elegantly. The new Boris FX Optics 2026 release includes AI masking controls where a single slider softens and blurs the mask outline, and it works incredibly well. Luminar has been offering this functionality for quite a while too.

These tools understand that a mask is only as good as its edges. When the competition provides ways to feather and refine AI selections, the absence of this feature in Adobe's ecosystem feels glaringly obvious.

Adobe's Strengths and Opportunities

Don't get me wrong. I appreciate that Adobe constantly pushes boundaries. We've witnessed tremendous growth over recent years, with developments from third-party AI platforms like Google's Gemini, emerging models, and innovations from Black Forest Labs with Flux and Topaz Labs. It's an exciting time to be a creator.

But I wish Adobe would take a moment to polish what we already have. Adding flashy new features is great, but refining the core workflows we use every single day would be a massive leap forward for all of us.

How You Can Help

Rather than simply complaining about this issue, I've created a feature request post in the Adobe forums. It's been merged with an existing thread on the same topic, which actually helps consolidate our voices into one place.

Here's what I need you to do: click the link below to visit the post and give it an upvote by clicking or tapping the counter number in the upper left. If we can get enough visibility on this, Adobe might finally recognise how much the community wants and needs this feature.

( LINK )

I believe refining existing tools is just as important as inventing new ones. Thank you for taking the time to vote. It really does make a difference when we speak up together.

What Are Those Mystery * and # Symbols in Photoshop??? πŸ€”

If you spend any amount of time in Adobe Photoshop, you become very familiar with the document tab at the top of your workspace. It tells you the filename and the current zoom level.

But sometimes, little cryptic symbols appear next to that information. Have you ever looked up and wondered, "Why is there a random hashtag next to my image name?" or "What does that little star mean?"

Nothing is broken. These symbols are just Photoshop's way of giving you a quick status update on your file and its colour management, without you needing to dig through menus.

What These Symbols Tell You

The symbols represent:

  • The save state of your document

  • Whether it has a colour profile attached

  • Whether the document's profile differs from your working space

Here is a quick guide to decoding those little tab hieroglyphics.

1. The Asterisk After the Filename ("Save Me!" Star)

What it looks like: … (RGB/8) *

What it means: An asterisk hanging right off the end of your actual filename means you have unsaved changes.

When it appears: Photoshop is hypersensitive here. The star will appear if you:

  • Move a layer one pixel

  • Brush a single dot onto a mask

  • Simply toggle a layer's visibility

  • Do pretty much anything

It's a gentle reminder that the version on screen is different from the version saved on your hard drive. If the computer crashed right now, you would lose that work.

The fix: Press Cmd+S (Mac) or Ctrl+S (Windows). The moment you successfully save the file, that little star will disappear because Photoshop now considers the document "clean" again.

2. The Asterisk ("Profile Difference" Star)

What it looks like: … (RGB/8*)

What it means: This is a different symbol in a different spot. If the star is tucked inside the parentheses next to the bit depth (the 8 or 16), it's no longer talking about unsaved work but about colour management.

In current Photoshop versions, an asterisk here generally means the file's colour profile situation does not match your working RGB setup. For example, you're working in sRGB as your default, but the image you opened is tagged with Adobe RGB (1998). In other words, the document is "speaking" a slightly different colour language than your default workspace.

Should you worry?

  • Usually, no. As long as you keep the embedded profile and your Colour Settings are sensible, Photoshop can still display the colours accurately even if the document profile and working space are different.

  • It's worth paying attention, though, if you're planning to combine several images into one document. You'll want a consistent profile for predictable colour when you paste, convert or export.

3. The Hash Symbol # ("Untagged" Image)

What it looks like: … (RGB/8#)

What it means: If you see the hash/pound/hashtag symbol inside the parentheses, it means the image is Untagged RGB. There's no embedded colour profile at all, so Photoshop has no explicit instructions telling it how those RGB numbers are supposed to be interpreted.

Why this happens: This is very common with:

  • Screenshots

  • Many web images

  • Older files where metadata was stripped out

When Photoshop opens an untagged image, it has to assume a profile based on your Colour Settings (typically your RGB working space, often sRGB by default), which may or may not match how the file was originally created.

Should you worry?

  • If colour accuracy is critical (printing, branding, matching other assets), yes, you should pay attention to that #. Different assumptions about the profile can easily lead to differences in appearance between systems.

  • You can fix this by going to Edit > Assign Profile and choosing the correct profile. For many web-style images, assigning sRGB is a sensible starting point, but be aware that assigning the wrong profile will change how the image looks, so use it when you have a good idea of the original intent.
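"Untagged" is also something you can verify outside Photoshop. As an illustration, this standard-library Python sketch walks a PNG's chunks looking for an embedded ICC profile (the iCCP chunk); it's a bytes-level demo, not a full PNG parser:

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def has_icc_profile(data: bytes) -> bool:
    """Return True if a PNG byte string contains an iCCP (ICC profile) chunk."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    pos = len(PNG_SIGNATURE)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"iCCP":
            return True
        pos += 8 + length + 4      # 8-byte header + payload + 4-byte CRC
    return False

# Tiny demo: hand-build chunk streams (CRCs zeroed -- fine for walking chunks).
def chunk(ctype: bytes, payload: bytes) -> bytes:
    return struct.pack(">I", len(payload)) + ctype + payload + b"\x00" * 4

tagged = PNG_SIGNATURE + chunk(b"IHDR", b"\x00" * 13) + chunk(b"iCCP", b"p") + chunk(b"IEND", b"")
untagged = PNG_SIGNATURE + chunk(b"IHDR", b"\x00" * 13) + chunk(b"IEND", b"")
print(has_icc_profile(tagged), has_icc_profile(untagged))   # True False
```

A JPEG equivalent would look for an ICC segment in the APP2 markers instead; the point is simply that a missing profile is a real property of the file, not something Photoshop is inventing.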

Summary Cheat Sheet

(RGB/8) *

  • This document has unsaved changes

  • Save the file and the star will disappear

(RGB/8*)

  • There's a colour-profile difference or related colour-management status

  • Typically means the document's profile is not the same as your current working RGB space

(RGB/8#)

  • The image is Untagged RGB, with no embedded colour profile

  • Photoshop has to assume a profile based on your settings
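The cheat sheet above boils down to a tiny lookup. As a memory aid, here's a small Python function (an illustration I've written around the suffixes, not anything Photoshop exposes) that classifies the end of a document-tab title:

```python
# Classify the suffix of a Photoshop document tab title, per the cheat sheet:
#   "(RGB/8) *" -> unsaved changes (star outside the parentheses)
#   "(RGB/8*)"  -> profile differs from working space (star inside)
#   "(RGB/8#)"  -> untagged, no embedded profile (hash inside)

def tab_status(suffix: str) -> str:
    statuses = []
    trimmed = suffix.rstrip()
    if trimmed.endswith("*") and not trimmed.endswith("*)"):
        statuses.append("unsaved changes")
    if "*)" in suffix:
        statuses.append("profile differs from working space")
    if "#)" in suffix:
        statuses.append("untagged (no embedded profile)")
    return ", ".join(statuses) or "clean"

print(tab_status("(RGB/8) *"))   # unsaved changes
print(tab_status("(RGB/8*)"))    # profile differs from working space
print(tab_status("(RGB/8#)"))    # untagged (no embedded profile)
```

Note that a tab like "(RGB/8#) *" reports both statuses at once, since unsaved work and a missing profile are independent conditions.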

ADOBE just changed ChatGPT FOREVER πŸ’₯ But Why???

Adobe has just rolled out one of the most significant updates we've seen in a while by integrating Photoshop, Express, and Acrobat directly into ChatGPT. And here's the kicker: these features are currently free to use, no Creative Cloud subscription required.

Why This Matters

This is a fascinating strategic play. ChatGPT has roughly 800 million active users, many of whom recognize the Photoshop brand but find the actual software intimidating or prohibitively expensive. By embedding these tools inside a chat interface where people already feel comfortable, Adobe is dismantling that barrier to entry. They're essentially converting casual users into potential creators through familiarity and ease of use.

What the Integration Actually Does

The capabilities are surprisingly robust for a chat-based tool. You can upload an image and ask Photoshop to handle basic retouching or apply artistic styles. The masking feature is particularly impressive, intelligently selecting subjects without manual input. Adobe Express generates social media posts or birthday cards from simple text prompts, while the Acrobat integration handles PDF merging and organization without leaving the conversation.

The Bigger Picture

Make no mistake: this isn't replacing the full desktop software. It's a streamlined, accessible version optimized for speed and convenience. Users who need granular control or heavy processing power will still require the complete applications.

This is a textbook freemium strategy. Adobe is giving users a taste of their engine, creating a natural upgrade path. Once someone hits the limitations of the chat interface, they're just one click away from the full experience. It's a smart way to widen the funnel and meet users exactly where they are.