A Community Guide to Bird Photography

Bird photography is one of those genres that quietly takes over. It's challenging, unpredictable, sometimes maddening – and completely addictive when it all comes together. You need patience, good fieldcraft, solid camera technique and the ability to make quick decisions, all at the same time.

This post pulls together some of the core ideas from a full guide I've put together for members of my Photography Community on Skool.

Think of this as a taster of what's waiting in the classroom (LINK).

It's Not About the Gear (Not Really)

One of the strongest themes that comes up again and again is simple: bird photography is less about kit and more about understanding birds. Long lenses help, of course, but timing, fieldcraft, and awareness are what actually make the photograph.

Work with the gear you already have and learn to use it well. Focus on reading behaviour, light, and opportunities rather than chasing the "perfect" setup. And be realistic about what you can comfortably carry for a full outing – staying fresh and present matters more than hauling the biggest lens available.

Your Behaviour Matters More Than You Think

How you move and behave around birds will make or break your images. Rush in, move suddenly, or push too close, and the bird will tell you it's uncomfortable long before it flies. Stay calm, move slowly, and respect its space, and everything changes.

Learn to recognise a bird's comfort zone and stay on the right side of it. Sit and watch first – you'll start to see patterns in perches, feeding routines, and pre-flight behaviour. Patience isn't passive; it's an active technique that gives you better light, cleaner backgrounds, and more meaningful moments.

Start Close to Home

You don't need an exotic destination to make strong bird photographs. Your garden, local park, or the green space at the end of the road are perfect training grounds.

Regular access to familiar birds beats occasional trips to impressive locations. Repetition sharpens your reactions, improves camera handling, and helps you truly learn both the species and the location. Familiar spots let you predict where birds will appear, how the light falls, and when something is likely to happen – and that groundwork pays off when you do travel further afield.

The One Setting to Protect First

If there's a single technical priority in bird photography, it's shutter speed. Birds rarely stay still, long lenses magnify every movement, and softness from motion blur can't be rescued later.

A simple working approach: aim for around 1/1600 sec as a starting point, higher for small, fast birds or birds in flight. Let ISO rise to protect that shutter speed rather than sacrificing sharpness, and use your widest useful aperture to support both speed and subject separation from the background.
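As a rough illustration, that priority order can be sketched in code. The thresholds below are assumptions made for the sake of the example, not fixed rules – adjust them for your own subjects and lenses.

```python
# A minimal sketch of the "protect shutter speed first" rule. The 1/1600
# starting point comes from the guide; the faster thresholds are assumptions.

def min_shutter_speed(subject: str) -> float:
    """Return a suggested minimum shutter speed (in seconds) for a bird subject."""
    suggestions = {
        "perched": 1 / 1600,      # starting point from the guide
        "small_fast": 1 / 3200,   # assumed: small, twitchy species need more
        "in_flight": 1 / 4000,    # assumed: birds in flight need more still
    }
    return suggestions[subject]

def iso_to_protect_shutter(base_iso: int, stops_of_light_short: int) -> int:
    """Let ISO rise rather than slowing the shutter: each stop doubles ISO."""
    return base_iso * (2 ** stops_of_light_short)

print(f"Perched: 1/{round(1 / min_shutter_speed('perched'))} sec")
print(iso_to_protect_shutter(400, 2))  # two stops short of light -> ISO 1600
```

The point of the sketch is the ordering: shutter speed is decided first by the subject, and ISO is whatever it needs to be to support it.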

Autofocus, Flight, and the Hard Stuff

Birds in flight can feel like a different discipline altogether – and in many ways, they are. Fast shutter speeds, accurate tracking, and clean framing all need to come together in fractions of a second.

Use continuous AF and subject tracking, and take the time to learn how your specific system actually behaves. Start with larger, slower, more predictable birds to build confidence before tackling the fast, erratic ones. And use burst mode thoughtfully – fire it when something is actually happening, rather than spraying at everything that moves.

Light, Background, and Story

Good bird photographs aren't just about the bird – they're about light, background, and what's actually happening in the frame.

Early and late light give softer contrast, warmer tones, and better feather detail, often when birds are most active too. Your shooting position has a huge impact on how connected the final image feels – getting down to eye level with the bird changes everything. And clean, sympathetic backgrounds combined with considered use of habitat can turn a simple record shot into a photograph with real story and mood.

Where to Begin: A Simple Starting Plan

If you're getting serious about bird photography, here's a straightforward framework to work from:

  • Start in your garden or local park and visit often.

  • Spend time watching before you shoot – look for perches, patterns, and pre-flight behaviour.

  • Work with the gear you already own, using as much focal length as is comfortable.

  • Keep shutter speed high, let ISO do its job, and begin at a wide aperture.

  • Use continuous AF and subject tracking if your camera supports it.

  • Pay attention to the bird's comfort, the quality of light, your background, and your shooting angle.

  • Wait for behaviour and gesture – not just a "bird on a stick" confirmation shot.

Want the Full Guide?

This post just scratches the surface of what's inside A Community Guide to Bird Photography, which lives in the classroom inside my Photography Community on Skool.

In there you'll find the complete written guide laid out step by step, diagrams and example images with breakdowns from real shoots, and practical starting setups, checklists, and exercises you can take straight into the field.

If you'd like to dive deeper and join a group of photographers actively working on this together, come and join us.

Why Photography in 2026 Feels Less Perfect and More Human

Something really encouraging is happening in photography right now, and if you have been feeling quietly frustrated with the pressure to make everything look flawless, I think you are going to love it.

Industry reports from Aftershoot, Stills, and other creative sources are all pointing in the same direction: audiences are turning away from over-polished, over-processed imagery and responding much more warmly to photographs that feel honest, immediate, and genuinely human. After years of chasing technical perfection, it seems the tide is finally turning, and I find that genuinely exciting.

Why the shift is happening now

It makes a lot of sense when you think about it. We are living in a visual landscape absolutely saturated with AI-generated content, heavily filtered social media imagery, and endlessly refined visuals.

In that context, a photograph that carries real emotion, a little texture, or a fleeting spontaneous moment stands out precisely because it feels true. Audiences can sense the difference, even if they cannot always articulate it.

What "more human" actually looks like

What the trend reports describe as "more human" photography comes down to a handful of connected ideas: natural expressions rather than rigid poses, visible texture rather than smoothed-out skin, and editing choices that preserve the character of the original moment rather than ironing everything into a generic finish.

In practice that might mean embracing grain, direct flash, looser framing, or a more documentary approach to light and movement. None of that requires you to abandon your craft; it just asks you to use it in service of feeling rather than control.

Your personality is the point

The most encouraging thing I take from all of this is that technical quality still matters; it just is not the whole story anymore. The photographers who are really connecting with people right now are the ones combining solid skills with a genuine point of view and a willingness to let a little life into the frame.

That is something worth celebrating, because it means your personality and your way of seeing the world are actually an asset, not something to sand away in post.

The bigger picture

There is also a broader context worth keeping in mind. Adobe's recent updates to Photoshop and Express are a good reminder that automated, AI-assisted production is only going to become more common across the creative industry. That is not something to fear; it is actually an opportunity. The more synthetic visual content floods our screens, the more a real photograph, one made with intention and feeling, can cut through simply by being authentic.

The question worth asking yourself

So the question worth sitting with as you work in 2026 is not whether your images are perfect. It is whether they are meaningful. Can the photograph still do its job if it keeps a bit of roughness, a bit of risk, or a bit of life? According to everything being written about where the industry is heading, that roughness might be exactly what makes it memorable.

That feels like a genuinely good moment for photography, and for the people who make it.

AI and Photography: It's Not the End of the World

There is a conversation happening in photography right now, and chances are you’ve heard it. Whether it is in comment sections, Facebook groups, or at events, the same question keeps coming up: is AI going to kill photography?

I get it. When generative AI started producing photorealistic images from a text prompt, the alarm bells rang, and not entirely without reason. If a computer can conjure a dramatic seascape or a perfectly lit portrait from a few typed words, where does that leave those of us who actually pick up a camera?

So here’s what I believe: I don’t think it’s the threat people fear it is. Perhaps for some areas of professional, paid photography, but certainly not for enthusiasts; not for anyone who shoots because they genuinely love it.

We’ve Been Here Before

Photography has always had to adapt. Digital replaced film, and people said it would ruin photography. Smartphones put a camera in everyone's pocket, and people said that would ruin it too. It didn’t. If anything, more people are shooting now than ever before. According to CIPA shipment data and 2025 market reports, the hobbyist camera market made up well over two thirds of all digital camera sales, and the number of photography workshops and online courses has grown by more than 30 percent in recent years. People are not falling out of love with photography. They are falling deeper into it.

AI is the latest chapter in that same story. It’s a new tool arriving in an industry that has always evolved alongside new tools.

What AI Actually Is, and Is Not

Here’s the thing that often gets lost in the noise. AI, in the context of most photographers' day-to-day lives, is not generating fake images to replace yours; it’s quietly working inside the software you are already using.

Take Lightroom and Photoshop. Both are packed with AI-powered features now. Masking that would have taken me the best part of an hour a few years ago takes seconds. Removing a distracting element from the background of a portrait, reducing noise in a high ISO shot, selecting a subject with precision. These are the kinds of tasks that used to eat into your editing time without giving anything creative back. They were just tedious.

That is where I have found AI genuinely useful. Not as something that replaces my decisions, but as something that handles the mechanical stuff so I can focus on what I actually enjoy, developing the image, getting the look I had in my head when I pressed the shutter, making it feel the way I want it to feel. The creative part is still mine. AI just means I am not spending forty minutes (and more) doing fiddly selections to get there.

Yes, There Will Be Casualties

It would be dishonest to say AI has no impact on photography as a profession, because it does. Certain areas are already feeling it. Product photography is one. Generic stock imagery is another, and headshot photography is shifting too, with a growing number of AI applications now capable of producing professional-looking results at a fraction of the cost of hiring a photographer.

Will some people choose those options? Of course. But then, every industry has customers who will always gravitate towards the cheapest available option. Photography is not immune to that, and it never has been. There have always been clients who want results without paying for expertise. AI simply gives that segment of the market a new way to do what they were always going to do.

The clients worth having, though, tend to think differently. They understand the difference between a generated image and a photograph made by someone who knows what they are doing. They value the professionalism, the experience of working with a skilled photographer, and ultimately an image that could not have come from a prompt box. That market is not shrinking. If anything, as AI imagery becomes more widespread, it is becoming more discerning.

The Authenticity Factor

There is something interesting happening on the other side of the AI conversation. As AI-generated imagery has flooded the internet, audiences have started to crave the opposite. The photography trends emerging in 2026 are centred around authenticity: real moments, real imperfections, real emotion. The slightly overcooked, hyper-polished aesthetic is losing its appeal. People want to see images that feel genuinely human.

That is actually great news for photographers, because the one thing AI cannot do, no matter how sophisticated it becomes, is be there. It cannot stand on a cold beach at six in the morning, read the light, time the wave, feel the composition before it happens. It cannot build a relationship with a portrait subject and find the moment where they forget the camera is there. It can produce images that look impressive on a screen, but impressive and meaningful are not the same thing.

Photography Is Not Just About the End Product

This is the point that gets missed entirely in the AI debate. When people talk about whether AI can replace photography, they tend to focus on the output. Can it produce a decent image? In some cases, yes. But photography has never really been just about the image at the end of it.

It is about being out in the world with a camera. It is the discipline of learning your craft, understanding light, making decisions in the moment. It is the feeling of nailing a shot you had been visualising for weeks. It is the connection you build with a subject during a portrait session. It is standing somewhere beautiful and choosing how to see it.

No AI can replicate that experience. And for the vast majority of photographers, enthusiast or professional, that experience is the whole point.

So Where Does That Leave You?

If you shoot because you enjoy it, AI changes very little about that. It might make some parts of the process quicker and easier, and used well, that is no bad thing. But it is not going to make the experience of making a photograph redundant.

Yes, the industry will keep changing. Some corners of it will shrink. But photography itself, the act of it, the craft of it, the joy of it, is not going anywhere. Do not be scared of AI; just don’t hand it the wheel either.

Photography is not dying. If anything, the conversation AI has started might just remind people why real photographs matter.

STOP using 10 Apps to Plan your Photography! (Do this instead)

The Problem with "Too Much" Information

We have an incredible amount of data at our fingertips these days. If you are planning a landscape or seascape trip, there are hundreds, maybe thousands of apps available. Honestly, that is part of the issue for me. There is just too much choice, and every day there seems to be a new app hitting the store. I never quite know which one to use for what.

While I still use a dedicated app to check the position of the sun, I have moved everything else over to AI.

How I Use AI as a Location Scout

It doesn't really matter which platform you prefer. I use Google Gemini, but you can do the exact same thing with ChatGPT, Claude, or Perplexity. The goal is to move away from checking ten different websites and instead have one single place that "scouts" the location for you.

I have set up something called a "Gem" in Gemini (or a Custom GPT if you use ChatGPT). I call it my Seascape Photography Planner. All I have to do is tell it where I am going and when, and it does the rest.

For example, if I tell it I am heading to Godrevy Lighthouse this coming Saturday, within seconds it populates the screen with:

  • Weather conditions: Temperature, precipitation, and wind speeds.

  • Lighting: Sunrise, sunset, and golden hour times.

  • The Ocean: Tide times and tide heights.

  • Logistics: Where to park, how to pay (cash or app), and where to find food or fuel nearby.

  • Safety: The nearest hospital and contact details for the police.

  • Drone Info: Nearest airfield and air traffic control contacts, just in case a drone goes rogue.

Setting It Up Yourself

The process is incredibly simple. You start by asking the AI to find this information for a specific trip. I often use a dictation app called Whisper to just speak my request into the text box.

Once the AI gives you a great result, you ask it one simple question: "Can you now create a system prompt from this so that the next time you can give me all of this information, but all I need to tell you is where I'm going and when?"

The AI will then write a "formula" for itself. It might say something like, "You are an expert photography location scout. Your goal is to provide a comprehensive, data-driven briefing."

You simply copy that text, go into your settings to create a new "Gem" or "Custom GPT," and paste those instructions in. Give it a name, save it, and you are done.
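To make the idea concrete, here is an illustrative version of the kind of system prompt the AI might write for you, wrapped in a small Python helper. The exact wording, the constant name, and the `build_request` function are all made up for this example – your AI will generate its own version when you ask it.

```python
# An illustrative "Gem" / Custom GPT system prompt of the kind described above.
# The wording is an assumption; your AI will draft its own when you ask it to.

SEASCAPE_PLANNER_PROMPT = """\
You are an expert photography location scout. Your goal is to provide a
comprehensive, data-driven briefing for a seascape shoot.

The user will tell you only WHERE they are going and WHEN. For every request,
return these sections:
1. Weather conditions: temperature, precipitation, and wind speeds.
2. Lighting: sunrise, sunset, and golden hour times.
3. The ocean: tide times and tide heights.
4. Logistics: parking, payment options, nearby food and fuel.
5. Safety: nearest hospital and police contact details.
6. Drone info: nearest airfield and air traffic control contacts.
"""

def build_request(location: str, date: str) -> str:
    """All the user supplies is where and when; the saved prompt does the rest."""
    return f"Location: {location}\nDate: {date}"

print(build_request("Godrevy Lighthouse", "this coming Saturday"))
```

Once the prompt text is saved into a Gem or Custom GPT, every future trip really is just those two lines of input.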

The Real-World Benefit

The best part about this is that it syncs to your phone. On the morning of a shoot, I can quickly check the latest updates while I'm having my coffee. I have even added "road conditions" to my prompt lately so I know if there are any last-minute diversions or roadblocks before I set off.

It is a massive time-saver. Instead of bouncing between weather apps, tide tables, and Google Maps, I get a tailored briefing in one go. It has definitely increased my success rate, but more than that, it has made the whole experience of being out in the field much more relaxed.

Content Credentials: The Future of Proving Your Photos Are Real ✅

In a world where AI can generate a photorealistic image in seconds, how do you prove that your photograph is actually real? That it was captured by a real camera, in a real place, by a real photographer?

That is exactly the problem Content Credentials are designed to solve, and in 2026 this technology is finally moving from niche experiment to something every working photographer needs to understand.

What Are Content Credentials?

Think of Content Credentials as a kind of nutrition label for your photographs. Just as a food label tells you what is inside the packet, Content Credentials can tell viewers key facts about an image: who created it, which camera or software was used, what kind of edits were made, and, crucially, whether AI tools were involved at any stage.

Under the hood, Content Credentials are powered by an open technical standard called C2PA, which stands for Coalition for Content Provenance and Authenticity. C2PA is a cross-industry specification backed by companies and organisations including Adobe, Microsoft, Google, Sony, Nikon, Canon, Leica, Fujifilm, the BBC, the Associated Press and many others.

The key point is that Content Credentials do not judge whether a photo is "good" or "bad". They provide a tamper-evident record of provenance, meaning a factual history of where an image came from and how it was made, so that editors, clients and audiences can make their own decisions about whether to trust what they are seeing.

How Do Content Credentials Actually Work?

At a technical level, C2PA uses cryptographic hashes and digital signatures, the same kind of technology that protects online banking, to bind provenance information to media files. In practice, the chain looks like this:

  1. Capture. On supported cameras, a C2PA manifest is signed at the moment of capture, recording the device identity and, where enabled, when and where the image was created.

  2. Edit. When the photo is opened in C2PA-enabled software such as Photoshop or Lightroom, the software can log key edits, including the use of generative AI tools, into an updated manifest.

  3. Export and publish. On export, the photographer chooses what information to include. The Content Credentials can be embedded in the file itself, published to a cloud service, or both.

  4. Verify. Anyone can later inspect the credentials using tools such as the Content Authenticity Initiative's Inspect site at contentcredentials.org/verify, browser extensions, or compatible apps and services.

If someone tampers with the pixels or tries to alter the signed provenance after the fact, the cryptographic checks break. The result is that the credentials are tamper-evident: you cannot quietly change the file or its signed history without that being detectable.
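The tamper-evident idea can be sketched with ordinary hashing. Note the simplification: real C2PA manifests are signed with certificate-based digital signatures, whereas this sketch uses an HMAC with a shared key purely so the example stays self-contained.

```python
# A simplified sketch of why C2PA manifests are tamper-evident. Real C2PA
# uses certificate-based digital signatures; the shared-key HMAC here is a
# stand-in for illustration only.
import hashlib
import hmac
import json

SIGNING_KEY = b"camera-private-key"  # stand-in for a device's signing key

def sign_manifest(image_bytes: bytes, history: list[str]) -> dict:
    """Bind a provenance record to the exact pixels via a hash, then sign it."""
    manifest = {
        "image_hash": hashlib.sha256(image_bytes).hexdigest(),
        "history": history,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Any change to the pixels or the signed history breaks verification."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if claimed["image_hash"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # pixels were altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

photo = b"\xff\xd8...original pixels..."
m = sign_manifest(photo, ["capture", "generative-fill"])
print(verify(photo, m))               # True: intact chain
print(verify(b"tampered pixels", m))  # False: hash check fails
```

Changing either the pixels or the recorded history after signing makes `verify` fail, which is the essence of "tamper-evident": alterations are not prevented, but they cannot go undetected.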

Which Cameras Support Content Credentials in 2026?

Camera support has accelerated over the last two years. A useful snapshot comes from the community-maintained c2pa.camera site, which tracks devices that can sign images using the C2PA standard.

As of early 2026, the list of supported cameras spans multiple manufacturers, and two entries are worth highlighting.

One particularly important entry is the Google Pixel 10. Thanks to its Tensor G5 and Titan M2 security chips and built-in C2PA support in the Google Camera app, it is currently the least expensive way to capture C2PA-signed images. That matters because not every working photographer or journalist will be carrying a flagship mirrorless body at the moment something newsworthy happens.

On the mirrorless side, Fujifilm has committed to rolling Content Authenticity support out across its X and GFX cameras, starting with models like the X-T50 and GFX100S II, with further firmware support planned but not yet fully detailed.

Content Credentials in Lightroom and Photoshop

The good news is you do not need a C2PA-enabled camera to start using Content Credentials. Adobe has built support directly into Lightroom Classic, Lightroom Desktop and Photoshop, using C2PA under the hood.

Lightroom Classic

In Lightroom Classic, Content Credentials are applied at export time.

Open the Export dialogue and scroll to the Content Credentials section, then enable Apply Content Credentials. You will need to choose how the credentials are stored: you can publish to Content Credentials Cloud, attach them to files by embedding them in the JPEG, or do both at once, which is the recommended option for most photographers. You can also decide what information to include, such as your name from your Adobe account, any connected social accounts, and a log of the editing steps recorded by Lightroom.

A few practical limitations are worth knowing about in 2026. Lightroom Classic only applies Content Credentials on JPEG export, not on TIFF, PSD or RAW files. An active internet connection is also required for the feature to work, even if you are simply attaching credentials to files rather than publishing to the cloud.

Photoshop

Photoshop takes a slightly different approach because it can record provenance while you edit. Go to Settings or Preferences, then History and Content Credentials, and enable Content Credentials for saved documents. For each document you can turn credentials on or off individually, so not every file has to be recorded. When you export, Photoshop can embed a detailed edit history into the Content Credentials, including the use of Generative Fill, Generative Expand and other AI-powered tools.

The system records a summarised, provenance-oriented history rather than every brush stroke, but it captures enough to show that AI tools were used and how the file evolved over time.

Keeping the Chain Intact Between Lightroom and Photoshop

If your workflow moves between Lightroom Classic and Photoshop, it is worth thinking about the provenance chain. A robust approach is to export from Lightroom with Content Credentials turned on, then open that exported file in Photoshop with Content Credentials enabled for the document. Export again from Photoshop with Content Credentials, and if you want the final file back in your Lightroom catalogue, import the Photoshop export so that Lightroom sees the credentialled version.

Is it perfectly seamless? Not yet. But this approach ensures that each major step in your workflow adds to the same signed chain instead of breaking it.

Why Content Credentials Matter in 2026

Several developments make Content Credentials especially relevant right now.

Photo Mechanic and Press Workflows

In February 2026, Camera Bits confirmed that Photo Mechanic is gaining support for the C2PA standard. For decades, Photo Mechanic has been the first stop in press photographers' workflows, used for ingest, culling and metadata. Camera Bits' goal is to preserve C2PA signatures from C2PA-enabled cameras all the way through to publication, so editors can trust that a signed image really traces back to a specific moment and camera.

Camera Bits has been clear that this feature is still in active development with no public release timeline yet, but for photojournalism this is a significant shift.

Competitions and Clubs

The Canadian Association of Photographic Art has adopted a Content Credential model for its competitions to address AI-generated imagery. Their current stance, through at least 2027, is that the model is optional and educational rather than mandatory, but potential winning entries already undergo verification that includes Content Credentials analysis, AI detection and forensic checks. Images that fail those verification steps can be disqualified, which is a strong signal of where competition rules are heading.

Platforms and the Broader Ecosystem

On the platform side, there has been real movement. LinkedIn now displays a CR icon for images carrying Content Credentials, which users can click to see the provenance summary. Google has brought C2PA-based Trusted Images to Android and Pixel, using Content Credentials and SynthID to distinguish originals and AI-generated content. Cloudflare Images and other services now preserve Content Credentials through transformations, so the provenance remains intact when images are resized or optimised for delivery.

The Content Authenticity Initiative itself has grown into a global community of more than 6,000 members by the end of 2025, spanning media, tech, education and government. This is no longer a small experiment.

The Honest Challenges (As of 2026)

That said, Content Credentials are not magic, and the current limitations are worth being transparent about.

Social Platforms Still Strip Metadata

Many social platforms still strip embedded metadata from uploads, which removes embedded C2PA manifests along with traditional EXIF and IPTC data. Tests have shown that platforms like Facebook remove Content Credentials on upload, which is one reason Adobe allows you to publish credentials to a cloud service as well, so you can still verify an image via the cloud record even if the embedded data is lost.

The Chicken-and-Egg Problem

Camera makers want platforms and tools to support provenance before they invest heavily. Platforms want a critical mass of signed content. Newsrooms want both to be stable before they change their workflows. PetaPixel's coverage of the Digimarc C2PA Chrome extension in 2025 summed up the situation bluntly: at that point, basically no photos published online were carrying C2PA metadata. That is slowly improving in 2026, but it remains an adoption loop rather than a solved problem.

The Perception Problem

At CES 2026, several analyses highlighted that many visitors misunderstood the Content Credentials icon, assuming it marked AI-generated content rather than authentic content with a provenance record. Without better public education, there is a real risk that authenticity labels are misread as AI labels, which is the exact opposite of the intended outcome.

Inconsistent Implementations

Some early implementations have also bent the semantics in unhelpful ways. Critics have pointed out that certain smartphone workflows only add C2PA manifests to images that have been processed with AI features, not to ordinary captures. That reverses the intent entirely: the real images are the ones that most need a verifiable credential.

Privacy and Identity

Finally, there is the privacy angle. C2PA and Adobe both make identity assertions optional and opt-in, so you choose whether to embed your name, social accounts or edit history. That flexibility is valuable, but it also means you should think carefully about what you are comfortable attaching to every exported file. For some photographers, including personal account details on every share will feel like a useful feature; for others, it may feel like over-exposure.

Should You Start Using Content Credentials?

For most photographers who share work online, the pragmatic answer in 2026 is yes, it is worth turning on now, even with the current rough edges.

There is no extra cost, as Content Credentials in Lightroom and Photoshop are included in your existing Adobe subscription and do not consume generative credits. They are non-destructive, meaning enabling them does not alter your image content or require a different editing approach. It simply adds metadata, and optionally a cloud record, at export.

Starting now also means you build good habits early. As more contests, clients and platforms start expecting provenance, having a back catalogue of signed images will be an advantage rather than something you are scrambling to retrofit. Organisations like the Canadian Association of Photographic Art explicitly highlight that embedded creator information and timestamps help strengthen copyright and attribution claims as part of a wider evidence chain. And the export settings give you control over privacy, so you can choose to share just a minimal provenance chain or a more detailed record including identity and edit history.

For photojournalists and press photographers, this is already moving from a nice-to-have to something expected. For commercial and fine-art photographers, it is a professional differentiator that signals authenticity and transparency at a time when clients are increasingly wary of AI fakery.

How to Check if an Image Has Content Credentials

If you want to verify an image, whether your own or someone else's, there are several options available. You can upload a file at contentcredentials.org/verify to see its provenance, including capture and edit history where available.

Adobe and its partners also provide browser extensions that detect and surface Content Credentials as you browse the web. On LinkedIn, look for the CR icon on images; clicking it shows the stored provenance for that image. Nikon users, editors and agencies can use the Nikon Authenticity Service to validate C2PA-signed images from supported cameras. And Leica's FOTOS app can read and display authenticity information for images from the M11-P, SL3-S and related cameras.
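If you are curious what "embedded" means at the file level: in JPEGs, C2PA manifests travel inside APP11 (JUMBF) segments. The rough sketch below only detects that such a segment is present – it does not validate any signatures, so treat it as a curiosity and use the verification tools listed above for real checks. The helper name is made up for illustration.

```python
# Rough heuristic: scan a JPEG's marker segments for APP11 (0xFFEB), where
# C2PA/JUMBF manifests are embedded. Presence is only a hint -- it does NOT
# validate the credentials; use contentcredentials.org/verify for that.

def has_app11_segment(jpeg_bytes: bytes) -> bool:
    if jpeg_bytes[:2] != b"\xff\xd8":
        return False  # not a JPEG at all
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # marker stream out of sync; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xEB:  # APP11: possible JUMBF/C2PA payload
            return True
        if marker == 0xDA:  # start of scan: header segments are over
            break
        # Each header segment stores its own length (including the 2 length bytes)
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        i += 2 + length
    return False
```

A quick check like this is also a handy way to confirm whether a platform has stripped the embedded data from one of your own uploads.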

Where This Is Heading

The direction of travel is clear. The C2PA Conformance Programme and the CAI's growing membership are pushing the ecosystem towards more consistent implementations across cameras, software and platforms. Open-source tooling is making it easier for smaller developers to add support. And regulatory and industry pressure around AI transparency, especially in news and political advertising, is giving content authenticity a real tailwind.

As Camera Bits put it when discussing Photo Mechanic's planned support, the goal is not to replace trust in photographers, but to provide an additional layer of confidence in an environment where synthetic media is increasingly common.

For working photographers, the message in 2026 is straightforward. The tools are here, they are free to switch on, and they are only going to become more important. Enabling Content Credentials today is one of the simplest practical steps you can take to protect your work and to prove that it is genuinely yours.

Reality vs Photoshop - Is Faking It Cheating? 🤷‍♂️

Car photography always looks that little bit more dramatic when there's a wet road reflection underneath the vehicle. But what do you do when the road is bone dry? In this guide, I'll walk you through two ways to fake a puddle reflection in Photoshop -- one traditional, one powered by AI -- and then I'll leave you with a question worth thinking about.

Method One: The Manual Approach

Step 1: Select the Car

Start by grabbing the Object Selection tool from the toolbar. In the options bar at the top of the screen, make sure the mode is set to Cloud for the best possible result, then click Select Subject. Photoshop will do a surprisingly good job of selecting the car in just a moment or two.

Step 2: Copy the Car onto Its Own Layer

With your selection active, press Command + J (Mac) or Control + J (Windows) to copy the car up onto a new layer. If you toggle every other layer off, you should see just the isolated car sitting cleanly on a transparent background.

Step 3: Flip It Upside Down

Go to Edit > Transform > Flip Vertical. This flips the car layer to create the basis of your reflection. Now grab the Move tool, hold down Shift (to keep movement perfectly vertical) and drag the flipped car downwards until the tyres of both the original and the reflection are just touching.

If things look slightly off-angle, go to Edit > Free Transform, move your cursor just outside the bounding box until you see the rotation cursor, and give it a gentle nudge until it lines up properly.

Step 4: Add a Black Layer Mask

Rename this layer "Reflection" to keep things tidy. Then, holding down Option (Mac) or Alt (Windows), click the Layer Mask icon at the bottom of the Layers panel. This adds a black mask that hides the layer entirely -- which is exactly what you want for now.

Step 5: Draw the Puddle Shape

Select the Lasso tool and make sure you click directly on the layer mask thumbnail (you should see a white border appear around it, confirming it's active). Now draw a rough, freehand puddle shape beneath the car's tyres. It doesn't need to be perfect; natural-looking and irregular is actually better here.

Step 6: Fill with White to Reveal the Reflection

Go to Edit > Fill, set the contents to White, and click OK. The reflection will now appear only within the puddle shape you drew.

Step 7: Soften the Edges

Zoom in and you'll notice the puddle edge looks very sharp and unnatural. To fix that, go to Filter > Blur > Gaussian Blur and apply just a small amount -- around 3 pixels is usually enough. This softens the boundary and helps the reflection blend into the ground convincingly.

Finally, you can reduce the opacity of the Reflection layer slightly to make the whole thing look a little more subtle and true to life.

Method Two: Using Adobe Firefly's Generative Fill

If you want a quicker and arguably more realistic result, Photoshop's AI tools can do a remarkable job here.

Step 1: Load the Puddle Selection

Hold Command (Mac) or Control (Windows) and click directly on the layer mask from your first reflection layer. This loads the puddle shape back as an active selection, saving you from having to draw it again.

Step 2: Select the Background Layer

Click on the main image layer so that Generative Fill works on the background rather than the reflection layer.

Step 3: Run Generative Fill

In the contextual taskbar, click Generative Fill and type a prompt along the lines of "a reflection of a car in a puddle of water". For the AI model, select Firefly (specifically the Firefly Built and Expand model released in January 2026). If you're on a Creative Cloud Pro account, this won't cost you any credits, whereas models like Flux or Nano Banana can use anywhere between 20 and 30 credits per generation.

Click Generate.

Step 4: Choose Your Favourite Variation

Firefly will produce three variations for you to compare. Have a look through them and pick the one that looks most convincing. You'll likely notice that the AI does something quite clever: it reflects the sky in the puddle on the far side of the car, just as real water would. Achieving that manually in Photoshop would take considerably more time and effort.

Which Method Should You Use?

For a quick-and-dirty result, the manual method works well and gives you full control. But for something that genuinely looks like a photograph taken on a wet road, the AI approach is hard to argue with, particularly because of how naturally it handles the environmental reflections in the water.

A Question Worth Thinking About

Here's something to consider. When photographing that car, there were really two options: bring bottles of water to pour around the car and create a real puddle on the dry road, or add the reflection later in post-production, either manually or with AI.

Both approaches result in a reflection that wasn't originally there. The only difference is when in the process you add it.

So what do you think -- is there a meaningful ethical difference between physically creating something on location and digitally adding it afterwards? When it comes to reflections specifically, does it matter?

Let me know your thoughts in the comments below.

Stormy Sea at Lyme Regis 🌊

I hadn’t intended to head down to the seafront this morning, but with the storm still blowing I checked the tide times, and with high tide only a couple of hours away, I couldn’t resist.

WOW! The sea was incredible!

Waves crashed and pounded The Cobb as it stood firm, protecting the harbour. The beach wasn't visible at all as the sea washed over it, throwing spray up onto the promenade, and at Gun Cliff the waves crashing against the sea wall dwarfed the tower two-fold!

Such a Thrill!

All photographs hand-held using …

Fuji X-T5 with 18mm f/1.4
1/125 sec
f/11
ISO 400

⛔️ Stop Policing Creativity

I don’t normally write a post like this, but I’ve seen a fair bit of it lately and felt the need to put pen to paper, so to speak.

I’m tired of seeing people tell others what they should or shouldn’t be doing with their photography and editing.

We see it all the time in comments and forums; people acting like there is a "correct" way to be creative.

It's tedious. It’s exhausting.

The escape is the point

Photography and editing are personal.

For loads of us, picking up a camera is a break from all the rules, deadlines and stress that come with modern life.

When someone sits down to create, that might be the only hour in their day where they actually feel in control of something. If they want to use a tool that makes things easier or more enjoyable, that's up to them.

The minute we start slapping "rules" on creativity, we turn what should be a release valve into just another chore; we make people second-guess themselves before they share their work, or even worse, they stop creating altogether because they're worried about being judged by the purists.

Use the tools you want

This goes for the tools we choose too.

If someone wants to use a particular bit of software or decides to use AI, so what? That's their choice.

If what someone else is doing has absolutely no impact on you, your life, or your own creativity, then why let it concern you?

As long as they're not trying to deceive people or claim credit for something they didn't actually do, let them get on with it. And even if someone does try to be deceptive, they'll get found out eventually. We'd probably do better spending our time keeping our own house in order before we start telling everyone else how to run theirs.

The elitism of the "right" way

Then you've got the phrases that always come up, like "getting it right in camera" or "we should all go back to basics."

Every time I see or hear this, it comes across as elitist. It feels like they're saying "I'm better than you."

Do the people who say this honestly think everyone else is deliberately trying to get things "wrong" in camera?

We all try to do our best at the point of capture, but for many people, that's just the start of the process.

And as for going back to basics, who are we to say that?

Just because one person finds joy in the traditional way of doing things doesn't mean everyone else has to. Why should someone else do what you reckon they should do?

Leave them be

Life's tough enough as it is. We're all different, and thank goodness for that; the world would be a boring place if we all worked the same way.

If someone's getting enjoyment out of what they're doing, leave them be. The world doesn't need more critics, it needs more people finding a way to enjoy themselves.

If their process made their day a bit better, they didn't break a rule, they won.

Evoto's AI Headshots: When Your Favourite Tool Turns Against You

Evoto's AI headshot generator has become a cautionary tale about how quickly an AI company can burn through the trust of the very professionals who helped build its reputation.

When your retouching app becomes a rival

At Imaging USA 2026 in Nashville, portrait and headshot photographers discovered that Evoto had been quietly running a separate "Online AI Headshot Generator" site. The service let anyone upload a selfie and receive polished, corporate-style portraits, with marketing that openly pitched it as a cheaper, easier alternative to booking a photographer.

This wasn't a hidden experiment tucked away behind a login. The headshot generator had a public URL, example images, an FAQ and a clear path from upload to final "professional" headshot. For photographers who had built Evoto into their workflow, it felt like discovering that a trusted retouching assistant had quietly set up shop down the road and started undercutting them.

Why Evoto's role made this sting

Evoto built its identity as an AI-powered retouching and workflow tool aimed squarely at professional photographers, especially those shooting portraits, headshots and weddings. The pitch was straightforward: let the software handle the tedious stuff like skin smoothing, flyaway hairs, glasses glare, background cleanup and batch retouching so photographers can focus on directing and shooting.

That positioning worked. Photographers paid for it, used it on paid client work, recommended it in workshops and videos, and sometimes became ambassadors or power users. The unspoken deal was that Evoto would stay in the background, supporting human photographers rather than trying to replace them. A consumer-facing headshot generator cut straight across that understanding.

What the headshot generator offered

The AI headshot tool followed a familiar pattern: upload a casual selfie, choose a style and receive cleaned-up headshots with flattering lighting, neat clothing and tidy backgrounds, ready for LinkedIn or company profiles. The examples looked very similar to the kind of "studio-style" work many Evoto customers already produce for corporate clients.

*Simulation only; not the Evoto interface*

The wording is what really set people off. The marketing leaned heavily into cost savings, avoiding studio bookings, quick turnaround and "professional-looking" results without needing a photographer. Coming from a faceless tech startup, that would already be provocative. Coming from a tool that photographers had trusted with their files and workflows, it felt like a direct invitation for clients to pick AI over them.

For many creatives, this is the line that matters: AI that helps you deliver better work is one thing. AI that presents itself as your replacement is something else entirely.

Why photographers are so angry

Photographers' reactions centre on three main issues.

First is a deep sense of betrayal. People had paid into the Evoto ecosystem, uploaded thousands of client images and publicly championed the product. Learning that the same company had built a consumer brand aimed at undercutting them felt like discovering that their support had funded a tool designed to compete with them.

Second are concerns about training data. Photographers have pointed out that the look of the AI headshots seems very close to the kind of work Evoto users upload. Evoto now says its models are trained only on commercially licensed or purchased imagery, not on customer photos, but those reassurances arrived after the story broke and against a backdrop of widespread anxiety about AI scraping. Without long-standing, transparent policies on data use, many remain sceptical.

Third is the tone of the marketing. Promises of saving money, avoiding bookings and still getting "pro-quality" results read like a direct invitation for clients to choose a cheap AI pipeline instead of hiring a photographer. Photo Stealers captured the mood with a blunt "WTF: Evoto AI Headshot Generator" and reported photographers literally flipping off the Evoto stand at Imaging USA. The Phoblographer went further, calling the service an attempt to replace photographers with "AI slop" and questioning the claim that this was simply an innocent test.

The apology that didn't land

In response, Evoto posted a statement saying the headshot generator had "missed the mark", "crossed a line" and was being discontinued. The company framed it as a test of full image generation that strayed beyond the support role it wants to play, and promised that user images are not used to train its models, describing its protections as "ironclad" and its training data as licensed only.

On the surface, this sounds like the right approach: apology, cancelled feature, clearer explanation of data use. In practice, many photographers point out that a fully branded, public site with examples and a working workflow doesn't look like a small internal trial. Shutting down comments on the apology thread after a wave of criticism made it feel more like damage control than a genuine conversation with paying users.

Commentary from outlets such as The Phoblographer argues that the real problem is the direction Evoto appears to be heading. If a company plans to sell "good enough" AI portraits directly to end clients while also charging photographers for retouching tools, trust will be almost impossible to rebuild.

What photographers can learn from this

The Evoto story lands at a time when photographers are already rethinking their place in an AI-saturated world, from smartphone "moon shots" to generative backdrops and AI profile photos. Beyond the immediate anger, it points to a few practical lessons.

Treat AI tools as business partners, not just clever software. Pay attention to how they talk to end clients and where their roadmap is heading.

Ask clear questions about training data and future plans. You need to know if your uploads can ever be used for model training and whether the company intends to build services that compete with you.

Be careful about attaching your reputation to a brand. Discounts and referral codes matter less than whether the company's long-term vision keeps human photographers at the centre.

For AI companies in imaging, the message is equally direct. You cannot present yourself as a photographer-first platform while quietly testing products that encourage clients to bypass those same photographers. In a climate where trust is already thin, real transparency, clear boundaries and honest dialogue are the only way to stay on the right side of the people whose pictures, workflows and support built your business in the first place.

Why AI Enhancement Isn't Cheating in Wildlife Photography

Wildlife photography is something I'd love to do more of, but at the moment, time doesn't allow it. However, when I do get the chance to head out with a long lens to give it a go, I gain deep respect for what it takes to capture the shot.

That's why the debate around AI editing tools fascinates me.

Critics argue that tools like Topaz Gigapixel or AI sharpening "ruin" wildlife photography: if your lens wasn't long enough or your sensor didn't capture fine detail, they say, using AI to reconstruct it is cheating.

I disagree completely.

The soul of wildlife photography is being there. If you hiked to a remote location, endured harsh weather, and invested hours of patience to witness a specific behaviour, that has real value. That's the foundation of your photograph.

So why should using AI to overcome your gear's physical limitations invalidate your fieldwork?

AI enlargement or texture refinement doesn't fabricate what the animal did. When a predator chases prey, AI doesn't invent the event. It helps your image reflect what you actually witnessed. It bridges the gap between your equipment's constraints and the magnitude of the moment.

We obsess over the technical "purity" of raw files, but we should focus on the effort required to be standing in that field. Cameras are tools, and every tool has limits. If software rescues a once-in-a-lifetime encounter from being a blurry mess, that's a win.

The truth of wildlife photography isn't in the pixels. It's in the person willing to get cold, wet, and tired to document the natural world.

What's your take?

Does AI enhancement cross a line, or does the real work happen in the field?

I'd genuinely love to hear your perspective.