
💥 How I Use AI to Plan the Perfect Photo Shoot 👌🏻

The Problem with "Too Much" Information

We have an incredible amount of data at our fingertips these days. If you are planning a landscape or seascape trip, there are hundreds, maybe thousands of apps available. Honestly, that is part of the issue for me. There is just too much choice, and every day there seems to be a new app hitting the store. I never quite know which one to use for what.

While I still use a dedicated app to check the position of the sun, I have moved everything else over to AI.

How I Use AI as a Location Scout

It doesn't really matter which platform you prefer. I use Google Gemini, but you can do the exact same thing with ChatGPT, Claude, or Perplexity. The goal is to move away from checking ten different websites and instead have one single place that "scouts" the location for you.

I have set up something called a "Gem" in Gemini (or a Custom GPT if you use ChatGPT). I call it my Seascape Photography Planner. All I have to do is tell it where I am going and when, and it does the rest.

For example, if I tell it I am heading to Godrevy Lighthouse this coming Saturday, within seconds it populates the screen with:

  • Weather conditions: Temperature, precipitation, and wind speeds.

  • Lighting: Sunrise, sunset, and golden hour times.

  • The Ocean: Tide times and tide heights.

  • Logistics: Where to park, how to pay (cash or app), and where to find food or fuel nearby.

  • Safety: The nearest hospital and contact details for the police.

  • Drone Info: Nearest airfield and air traffic control contacts, just in case a drone goes rogue.

Setting It Up Yourself

The process is incredibly simple. You start by asking the AI to find this information for a specific trip. I often use a dictation app called Whisper to just speak my request into the text box.

Once the AI gives you a great result, you ask it one simple question: "Can you now create a system prompt from this so that the next time you can give me all of this information, but all I need to tell you is where I'm going and when?"

The AI will then write a "formula" for itself. It might say something like, "You are an expert photography location scout. Your goal is to provide a comprehensive, data-driven briefing."

You simply copy that text, go into your settings to create a new "Gem" or "Custom GPT," and paste those instructions in. Give it a name, save it, and you are done.
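If you want a head start, here is roughly the shape of instructions that works well. This is an illustrative sketch based on the briefing items listed above, not the exact text your AI will generate for you — expect your own version to be longer and tuned to your locations:

```text
You are an expert seascape photography location scout. When the user
tells you a location and a date, return a comprehensive, data-driven
briefing with clearly labelled sections:

1. Weather: temperature, precipitation and wind speed for that day.
2. Lighting: sunrise, sunset and golden hour times.
3. The Ocean: tide times and tide heights.
4. Logistics: where to park, how to pay (cash or app), and the
   nearest food and fuel.
5. Safety: the nearest hospital and local police contact details.
6. Drone Info: the nearest airfield and air traffic control contacts.

Ask a follow-up question only if the location or date is ambiguous.
```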

The Real-World Benefit

The best part about this is that it syncs to your phone. On the morning of a shoot, I can quickly check the latest updates while I'm having my coffee. I have even added "road conditions" to my prompt lately so I know if there are any last-minute diversions or roadblocks before I set off.

It is a massive time-saver. Instead of bouncing between weather apps, tide tables, and Google Maps, I get a tailored briefing in one go. It has definitely increased my success rate, but more than that, it has made the whole experience of being out in the field much more relaxed.

Content Credentials: The Future of Proving Your Photos Are Real ✅

In a world where AI can generate a photorealistic image in seconds, how do you prove that your photograph is actually real? That it was captured by a real camera, in a real place, by a real photographer?

That is exactly the problem Content Credentials are designed to solve, and in 2026 this technology is finally moving from niche experiment to something every working photographer needs to understand.

What Are Content Credentials?

Think of Content Credentials as a kind of nutrition label for your photographs. Just as a food label tells you what is inside the packet, Content Credentials can tell viewers key facts about an image: who created it, which camera or software was used, what kind of edits were made, and, crucially, whether AI tools were involved at any stage.

Under the hood, Content Credentials are powered by an open technical standard called C2PA, which stands for Coalition for Content Provenance and Authenticity. C2PA is a cross-industry specification backed by companies and organisations including Adobe, Microsoft, Google, Sony, Nikon, Canon, Leica, Fujifilm, the BBC, the Associated Press and many others.

The key point is that Content Credentials do not judge whether a photo is "good" or "bad". They provide a tamper-evident record of provenance, meaning a factual history of where an image came from and how it was made, so that editors, clients and audiences can make their own decisions about whether to trust what they are seeing.

How Do Content Credentials Actually Work?

At a technical level, C2PA uses cryptographic hashes and digital signatures, the same kind of technology that protects online banking, to bind provenance information to media files. In practice, the chain looks like this:

  1. Capture. On supported cameras, a C2PA manifest is signed at the moment of capture, recording the device identity and, where enabled, when and where the image was created.

  2. Edit. When the photo is opened in C2PA-enabled software such as Photoshop or Lightroom, the software can log key edits, including the use of generative AI tools, into an updated manifest.

  3. Export and publish. On export, the photographer chooses what information to include. The Content Credentials can be embedded in the file itself, published to a cloud service, or both.

  4. Verify. Anyone can later inspect the credentials using tools such as the Content Authenticity Initiative's Inspect site at contentcredentials.org/verify, browser extensions, or compatible apps and services.

If someone tampers with the pixels or tries to alter the signed provenance after the fact, the cryptographic checks break. The result is that the credentials are tamper-evident: you cannot quietly change the file or its signed history without that being detectable.
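To make "tamper-evident" concrete, here is a deliberately simplified Python sketch of the underlying idea. Real C2PA manifests use X.509 certificate chains and CBOR-encoded claims rather than a shared HMAC key, so treat this purely as an illustration of how a hash binds a manifest to the pixels and a signature binds the history to a signer — not as the actual C2PA format:

```python
# Conceptual sketch only: real C2PA uses certificate-based digital
# signatures, not HMAC, but the tamper-evidence principle is the same.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stands in for a camera's private key


def sign_manifest(image_bytes: bytes, manifest: dict) -> dict:
    # Bind the manifest to the exact pixels by including the image hash.
    manifest = dict(manifest,
                    image_sha256=hashlib.sha256(image_bytes).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return manifest


def verify(image_bytes: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # the signed history itself was altered
    # The pixels must still match the hash recorded at signing time.
    return claimed["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()


photo = b"\x89raw-pixel-data"
m = sign_manifest(photo, {"device": "ExampleCam", "captured": "2026-02-01"})
assert verify(photo, m)             # untouched: checks pass
assert not verify(photo + b"x", m)  # pixels edited after signing: detected
m["captured"] = "2020-01-01"
assert not verify(photo, m)         # signed history edited: detected
```

Change either the pixels or any signed field and verification fails — that is all "tamper-evident" means here.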

Which Cameras Support Content Credentials in 2026?

Camera support has accelerated over the last two years. A useful snapshot comes from the community-maintained c2pa.camera site, which tracks devices that can sign images using the C2PA standard.

As of early 2026, the list of supported cameras is growing quickly, and two entries are worth singling out.

One particularly important entry is the Google Pixel 10. Thanks to its Tensor G5 and Titan M2 security chips and built-in C2PA support in the Google Camera app, it is currently the least expensive way to capture C2PA-signed images. That matters because not every working photographer or journalist will be carrying a flagship mirrorless body at the moment something newsworthy happens.

On the mirrorless side, Fujifilm has committed to rolling Content Authenticity support out across its X and GFX cameras, starting with models like the X-T50 and GFX100S II, with further firmware support planned but not yet fully detailed.

Content Credentials in Lightroom and Photoshop

The good news is you do not need a C2PA-enabled camera to start using Content Credentials. Adobe has built support directly into Lightroom Classic, Lightroom Desktop and Photoshop, using C2PA under the hood.

Lightroom Classic

In Lightroom Classic, Content Credentials are applied at export time.

Open the Export dialogue and scroll to the Content Credentials section, then enable Apply Content Credentials. You will need to choose how the credentials are stored: you can publish to Content Credentials Cloud, attach them to files by embedding them in the JPEG, or do both at once, which is the recommended option for most photographers. You can also decide what information to include, such as your name from your Adobe account, any connected social accounts, and a log of the editing steps recorded by Lightroom.

A few practical limitations are worth knowing about in 2026. Lightroom Classic only applies Content Credentials on JPEG export, not on TIFF, PSD or RAW files. An active internet connection is also required for the feature to work, even if you are simply attaching credentials to files rather than publishing to the cloud.


Photoshop

Photoshop takes a slightly different approach because it can record provenance while you edit. Go to Settings or Preferences, then History and Content Credentials, and enable Content Credentials for saved documents. For each document you can turn credentials on or off individually, so not every file has to be recorded. When you export, Photoshop can embed a detailed edit history into the Content Credentials, including the use of Generative Fill, Generative Expand and other AI-powered tools.

The system records a summarised, provenance-oriented history rather than every brush stroke, but enough to show that AI tools were used and how the file evolved over time.

Keeping the Chain Intact Between Lightroom and Photoshop

If your workflow moves between Lightroom Classic and Photoshop, it is worth thinking about the provenance chain. A robust approach is to export from Lightroom with Content Credentials turned on, then open that exported file in Photoshop with Content Credentials enabled for the document. Export again from Photoshop with Content Credentials, and if you want the final file back in your Lightroom catalogue, import the Photoshop export so that Lightroom sees the credentialled version.

Is it perfectly seamless? Not yet. But this approach ensures that each major step in your workflow adds to the same signed chain instead of breaking it.

Why Content Credentials Matter in 2026

Several developments make Content Credentials especially relevant right now.

Photo Mechanic and Press Workflows

In February 2026, Camera Bits confirmed that Photo Mechanic is gaining support for the C2PA standard. For decades, Photo Mechanic has been the first stop in press photographers' workflows, used for ingest, culling and metadata. Camera Bits' goal is to preserve C2PA signatures from C2PA-enabled cameras all the way through to publication, so editors can trust that a signed image really traces back to a specific moment and camera.

Camera Bits has been clear that this feature is still in active development with no public release timeline yet, but for photojournalism this is a significant shift.

Competitions and Clubs

The Canadian Association of Photographic Art has adopted a Content Credential model for its competitions to address AI-generated imagery. Their current stance, through at least 2027, is that the model is optional and educational rather than mandatory, but potential winning entries already undergo verification that includes Content Credentials analysis, AI detection and forensic checks. Images that fail those verification steps can be disqualified, which is a strong signal of where competition rules are heading.

Platforms and the Broader Ecosystem

On the platform side, there has been real movement. LinkedIn now displays a CR icon for images carrying Content Credentials, which users can click to see the provenance summary. Google has brought C2PA-based Trusted Images to Android and Pixel, using Content Credentials and SynthID to distinguish originals and AI-generated content. Cloudflare Images and other services now preserve Content Credentials through transformations, so the provenance remains intact when images are resized or optimised for delivery.

By the end of 2025, the Content Authenticity Initiative itself had grown into a global community of more than 6,000 members, spanning media, tech, education and government. This is no longer a small experiment.

The Honest Challenges (As of 2026)

That said, Content Credentials are not magic, and the current limitations are worth being transparent about.

Social Platforms Still Strip Metadata

Many social platforms still strip embedded metadata from uploads, which removes embedded C2PA manifests along with traditional EXIF and IPTC data. Tests have shown that platforms like Facebook remove Content Credentials on upload, which is one reason Adobe allows you to publish credentials to a cloud service as well, so you can still verify an image via the cloud record even if the embedded data is lost.

The Chicken-and-Egg Problem

Camera makers want platforms and tools to support provenance before they invest heavily. Platforms want a critical mass of signed content. Newsrooms want both to be stable before they change their workflows. PetaPixel's coverage of the Digimarc C2PA Chrome extension in 2025 summed up the situation bluntly: at that point, basically no photos published online were carrying C2PA metadata. That is slowly improving in 2026, but it remains an adoption loop rather than a solved problem.

The Perception Problem

At CES 2026, several analyses highlighted that many visitors misunderstood the Content Credentials icon, assuming it marked AI-generated content rather than authentic content with a provenance record. Without better public education, there is a real risk that authenticity labels are misread as AI labels, which is the exact opposite of the intended outcome.

Inconsistent Implementations

Some early implementations have also bent the semantics in unhelpful ways. Critics have pointed out that certain smartphone workflows only add C2PA manifests to images that have been processed with AI features, not to ordinary captures. That reverses the intent entirely: the real images are the ones that most need a verifiable credential.

Privacy and Identity

Finally, there is the privacy angle. C2PA and Adobe both make identity assertions optional and opt-in, so you choose whether to embed your name, social accounts or edit history. That flexibility is valuable, but it also means you should think carefully about what you are comfortable attaching to every exported file. For some photographers, including personal account details on every share will feel like a useful feature; for others, it may feel like over-exposure.

Should You Start Using Content Credentials?

For most photographers who share work online, the pragmatic answer in 2026 is yes, it is worth turning on now, even with the current rough edges.

There is no extra cost, as Content Credentials in Lightroom and Photoshop are included in your existing Adobe subscription and do not consume generative credits. They are non-destructive, meaning enabling them does not alter your image content or require a different editing approach. It simply adds metadata, and optionally a cloud record, at export.

Starting now also means you build good habits early. As more contests, clients and platforms start expecting provenance, having a back catalogue of signed images will be an advantage rather than something you are scrambling to retrofit. Organisations like the Canadian Association of Photographic Art explicitly highlight that embedded creator information and timestamps help strengthen copyright and attribution claims as part of a wider evidence chain. And the export settings give you control over privacy, so you can choose to share just a minimal provenance chain or a more detailed record including identity and edit history.

For photojournalists and press photographers, this is already moving from a nice-to-have to something expected. For commercial and fine-art photographers, it is a professional differentiator that signals authenticity and transparency at a time when clients are increasingly wary of AI fakery.

How to Check if an Image Has Content Credentials

If you want to verify an image, whether your own or someone else's, there are several options available. You can upload a file at contentcredentials.org/verify to see its provenance, including capture and edit history where available.

Adobe and its partners also provide browser extensions that detect and surface Content Credentials as you browse the web. On LinkedIn, look for the CR icon on images; clicking it shows the stored provenance for that image. Nikon users, editors and agencies can use the Nikon Authenticity Service to validate C2PA-signed images from supported cameras. And Leica's FOTOS app can read and display authenticity information for images from the M11-P, SL3-S and related cameras.

Where This Is Heading

The direction of travel is clear. The C2PA Conformance Programme and the CAI's growing membership are pushing the ecosystem towards more consistent implementations across cameras, software and platforms. Open-source tooling is making it easier for smaller developers to add support. And regulatory and industry pressure around AI transparency, especially in news and political advertising, is giving content authenticity a real tailwind.

As Camera Bits put it when discussing Photo Mechanic's planned support, the goal is not to replace trust in photographers, but to provide an additional layer of confidence in an environment where synthetic media is increasingly common.

For working photographers, the message in 2026 is straightforward. The tools are here, they are free to switch on, and they are only going to become more important. Enabling Content Credentials today is one of the simplest practical steps you can take to protect your work and to prove that it is genuinely yours.

Reality vs Photoshop - Is Faking It Cheating? 🤷‍♂️

Car photography always looks that little bit more dramatic when there's a wet road reflection underneath the vehicle. But what do you do when the road is bone dry? In this guide, I'll walk you through two ways to fake a puddle reflection in Photoshop -- one traditional, one powered by AI -- and then I'll leave you with a question worth thinking about.

Method One: The Manual Approach

Step 1: Select the Car

Start by grabbing the Object Selection tool from the toolbar. In the options bar at the top of the screen, make sure the mode is set to Cloud for the best possible result, then click Select Subject. Photoshop will do a surprisingly good job of selecting the car in just a moment or two.

Step 2: Copy the Car onto Its Own Layer

With your selection active, press Command + J (Mac) or Control + J (Windows) to copy the car up onto a new layer. If you toggle every other layer off, you should see just the isolated car sitting cleanly on a transparent background.

Step 3: Flip It Upside Down

Go to Edit > Transform > Flip Vertical. This flips the car layer to create the basis of your reflection. Now grab the Move tool, hold down Shift (to keep movement perfectly vertical) and drag the flipped car downwards until the tyres of both the original and the reflection are just touching.

If things look slightly off-angle, go to Edit > Free Transform, move your cursor just outside the bounding box until you see the rotation cursor, and give it a gentle nudge until it lines up properly.

Step 4: Add a Black Layer Mask

Rename this layer "Reflection" to keep things tidy. Then, holding down Option (Mac) or Alt (Windows), click the Layer Mask icon at the bottom of the Layers panel. This adds a black mask that hides the layer entirely -- which is exactly what you want for now.

Step 5: Draw the Puddle Shape

Select the Lasso tool and make sure you click directly on the layer mask thumbnail (you should see a white border appear around it, confirming it's active). Now draw a rough, freehand puddle shape beneath the car's tyres -- it doesn't need to be perfect; natural-looking and irregular is actually better here.

Step 6: Fill with White to Reveal the Reflection

Go to Edit > Fill, set the contents to White, and click OK. The reflection will now appear only within the puddle shape you drew.

Step 7: Soften the Edges

Zoom in and you'll notice the puddle edge looks very sharp and unnatural. To fix that, go to Filter > Blur > Gaussian Blur and apply just a small amount -- around 3 pixels is usually enough. This softens the boundary and helps the reflection blend into the ground convincingly.

Finally, you can reduce the opacity of the Reflection layer slightly to make the whole thing look a little more subtle and true to life.
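Stripped of the Photoshop interface, the manual recipe is just flip, mask and blend. Here is a toy Python sketch of that logic on bare grayscale pixel arrays — not a Photoshop script, and it skips the Gaussian Blur feathering from Step 7, but it shows why the trick works:

```python
# Toy illustration of the manual method: flip the image vertically,
# then reveal the flipped copy only inside a puddle mask, at reduced
# opacity. Grayscale values in nested lists stand in for real pixels.

def fake_reflection(img, puddle_mask, opacity=0.6):
    flipped = img[::-1]                  # Step 3: Flip Vertical
    out = [row[:] for row in img]
    for y in range(len(img)):            # Steps 4-6: the mask reveals
        for x in range(len(img[0])):     # the flipped layer only inside
            if puddle_mask[y][x]:        # the puddle shape
                # Reduced opacity keeps the "reflection" subtle
                out[y][x] = round((1 - opacity) * img[y][x]
                                  + opacity * flipped[y][x])
    return out


img = [[10, 10], [200, 200], [50, 50], [50, 50]]   # sky, car, ground, ground
mask = [[0, 0], [0, 0], [1, 1], [1, 1]]            # puddle under the car
print(fake_reflection(img, mask))
# → [[10, 10], [200, 200], [140, 140], [26, 26]]
```

The masked ground rows now carry a dimmed, upside-down copy of the car and sky — exactly what the layer mask and opacity slider achieve in Photoshop.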

Method Two: Using Adobe Firefly's Generative Fill

If you want a quicker and arguably more realistic result, Photoshop's AI tools can do a remarkable job here.

Step 1: Load the Puddle Selection

Hold Command (Mac) or Control (Windows) and click directly on the layer mask from your first reflection layer. This loads the puddle shape back as an active selection, saving you from having to draw it again.

Step 2: Select the Background Layer

Click on the main image layer, so that Generative Fill works on the background rather than the reflection layer.

Step 3: Run Generative Fill

In the contextual taskbar, click Generative Fill and type a prompt along the lines of: a reflection of car in puddle of water. For the AI model, select Firefly (specifically the Firefly Fill and Expand model released in January 2026). If you're on a Creative Cloud Pro account, this won't cost you any credits -- whereas models like Flux or Nano Banana can use anywhere between 20 and 30 credits per generation.

Click Generate.

Step 4: Choose Your Favourite Variation

Firefly will produce three variations for you to compare. Have a look through them and pick the one that looks most convincing. You'll likely notice that the AI does something quite clever: it reflects the sky in the puddle on the far side of the car, just as real water would. Achieving that manually in Photoshop would take considerably more time and effort.

Which Method Should You Use?

For a quick and dirty result, the manual method works well and gives you full control. But for something that genuinely looks like a photograph taken on a wet road, the AI approach is hard to argue with -- particularly because of how naturally it handles the environmental reflections in the water.

A Question Worth Thinking About

Here's something to consider. When photographing that car, there were really two options: bring bottles of water to pour around the car and create a real puddle on the dry road, or add the reflection later in post-production, either manually or with AI.

Both approaches result in a reflection that wasn't originally there. The only difference is when in the process you add it.

So what do you think -- is there a meaningful ethical difference between physically creating something on location and digitally adding it afterwards? When it comes to reflections specifically, does it matter?

Let me know your thoughts in the comments below.

Stormy Sea at Lyme Regis 🌊

I hadn’t intended to head down to the seafront this morning, but with the storm still raging I checked the tide times, and with high tide only a couple of hours away, I couldn’t resist.

WOW! The sea was incredible!

Waves crashed and pounded The Cobb as it stood firm, protecting the harbour. The beach vanished as the sea washed right over it, hurling water onto the promenade, while at Gun Cliff the waves crashing against the sea wall dwarfed the tower twice over!

Such a Thrill!

All photographs hand-held using …

Fuji X-T5 with 18mm f/1.4
1/125 sec
f/11
ISO 400

⛔️ Stop Policing Creativity

I don’t normally write posts like this, but I’ve seen a fair bit of ‘this’ lately and felt the need to put pen to paper, so to speak.

I’m tired of seeing people tell others what they should or shouldn’t be doing with their photography and editing.

We see it all the time in comments and forums; people acting like there is a "correct" way to be creative.

It's tedious. It’s exhausting.

The escape is the point

Photography and editing are personal.

For loads of us, picking up a camera is a break from all the rules, deadlines and stress that come with modern life.

When someone sits down to create, that might be the only hour in their day where they actually feel in control of something. If they want to use a tool that makes things easier or more enjoyable, that's up to them.

The minute we start slapping "rules" on creativity, we turn what should be a release valve into just another chore; we make people second-guess themselves before they share their work, or even worse, they stop creating altogether because they're worried about being judged by the purists.

Use the tools you want

This goes for the tools we choose too.

If someone wants to use a particular bit of software or decides to use AI, so what? That's their choice.

If what someone else is doing has absolutely no impact on you, your life, or your own creativity, then why let it concern you?

As long as they're not trying to deceive people or claim credit for something they didn't actually do, let them get on with it, and even if someone does try to be deceptive, they'll get found out eventually. We'd probably do better spending our time keeping our own house in order before we start telling everyone else how to run theirs.

The elitism of the "right" way

Then you've got the phrases that always come up, like "getting it right in camera" or "we should all go back to basics."

Every time I see or hear this, it comes across as elitist. It feels like they're saying "I'm better than you."

Do the people who say this honestly think everyone else is deliberately trying to get things "wrong" in camera?

We all try to do our best at the point of capture, but for many people, that's just the start of the process.

And as for going back to basics, who are we to say that?

Just because one person finds joy in the traditional way of doing things doesn't mean everyone else has to. Why should someone else do what you reckon they should do?

Leave them be

Life's tough enough as it is. We're all different, and thank goodness for that; the world would be a boring place if we all worked the same way.

If someone's getting enjoyment out of what they're doing, leave them be. The world doesn't need more critics, it needs more people finding a way to enjoy themselves.

If their process made their day a bit better, they didn't break a rule, they won.

Evoto's AI Headshots: When Your Favourite Tool Turns Against You

Evoto's AI headshot generator has become a cautionary tale about how quickly an AI company can burn through the trust of the very professionals who helped build its reputation.

When your retouching app becomes a rival

At Imaging USA 2026 in Nashville, portrait and headshot photographers discovered that Evoto had been quietly running a separate "Online AI Headshot Generator" site. The service let anyone upload a selfie and receive polished, corporate-style portraits, with marketing that openly pitched it as a cheaper, easier alternative to booking a photographer.

This wasn't a hidden experiment tucked away behind a login. The headshot generator had a public URL, example images, an FAQ and a clear path from upload to final "professional" headshot. For photographers who had built Evoto into their workflow, it felt like discovering that a trusted retouching assistant had quietly set up shop down the road and started undercutting them.

Why Evoto's role made this sting

Evoto built its identity as an AI-powered retouching and workflow tool aimed squarely at professional photographers, especially those shooting portraits, headshots and weddings. The pitch was straightforward: let the software handle the tedious stuff like skin smoothing, flyaway hairs, glasses glare, background cleanup and batch retouching so photographers can focus on directing and shooting.

That positioning worked. Photographers paid for it, used it on paid client work, recommended it in workshops and videos, and sometimes became ambassadors or power users. The unspoken deal was that Evoto would stay in the background, supporting human photographers rather than trying to replace them. A consumer-facing headshot generator cut straight across that understanding.

What the headshot generator offered

The AI headshot tool followed a familiar pattern: upload a casual selfie, choose a style and receive cleaned-up headshots with flattering lighting, neat clothing and tidy backgrounds, ready for LinkedIn or company profiles. The examples looked very similar to the kind of "studio-style" work many Evoto customers already produce for corporate clients.

*Simulation only; not the Evoto interface*

The wording is what really set people off. The marketing leaned heavily into cost savings, avoiding studio bookings, quick turnaround and "professional-looking" results without needing a photographer. Coming from a faceless tech startup, that would already be provocative. Coming from a tool that photographers had trusted with their files and workflows, it felt like a direct invitation for clients to pick AI over them.

For many creatives, this is the line that matters: AI that helps you deliver better work is one thing. AI that presents itself as your replacement is something else entirely.

Why photographers are so angry

Photographers' reactions centre on three main issues.

First is a deep sense of betrayal. People had paid into the Evoto ecosystem, uploaded thousands of client images and publicly championed the product. Learning that the same company had built a consumer brand aimed at undercutting them felt like discovering that their support had funded a tool designed to compete with them.

Second are concerns about training data. Photographers have pointed out that the look of the AI headshots seems very close to the kind of work Evoto users upload. Evoto now says its models are trained only on commercially licensed or purchased imagery, not on customer photos, but those reassurances arrived after the story broke and against a backdrop of widespread anxiety about AI scraping. Without long-standing, transparent policies on data use, many remain sceptical.

Third is the tone of the marketing. Promises of saving money, avoiding bookings and still getting "pro-quality" results read like a direct invitation for clients to choose a cheap AI pipeline instead of hiring a photographer. Photo Stealers captured the mood with a blunt "WTF: Evoto AI Headshot Generator" and reported photographers literally flipping off the Evoto stand at Imaging USA. The Phoblographer went further, calling the service an attempt to replace photographers with "AI slop" and questioning the claim that this was simply an innocent test.

The apology that didn't land

In response, Evoto posted a statement saying the headshot generator had "missed the mark", "crossed a line" and was being discontinued. The company framed it as a test of full image generation that strayed beyond the support role it wants to play, and promised that user images are not used to train its models, describing its protections as "ironclad" and its training data as licensed only.

On the surface, this sounds like the right approach: apology, cancelled feature, clearer explanation of data use. In practice, many photographers point out that a fully branded, public site with examples and a working workflow doesn't look like a small internal trial. Shutting down comments on the apology thread after a wave of criticism made it feel more like damage control than a genuine conversation with paying users.

Commentary from outlets such as The Phoblographer argues that the real problem is the direction Evoto appears to be heading. If a company plans to sell "good enough" AI portraits directly to end clients while also charging photographers for retouching tools, trust will be almost impossible to rebuild.

What photographers can learn from this

The Evoto story lands at a time when photographers are already rethinking their place in an AI-saturated world, from smartphone "moon shots" to generative backdrops and AI profile photos. Beyond the immediate anger, it points to a few practical lessons.

Treat AI tools as business partners, not just clever software. Pay attention to how they talk to end clients and where their roadmap is heading.

Ask clear questions about training data and future plans. You need to know if your uploads can ever be used for model training and whether the company intends to build services that compete with you.

Be careful about attaching your reputation to a brand. Discounts and referral codes matter less than whether the company's long-term vision keeps human photographers at the centre.

For AI companies in imaging, the message is equally direct. You cannot present yourself as a photographer-first platform while quietly testing products that encourage clients to bypass those same photographers. In a climate where trust is already thin, real transparency, clear boundaries and honest dialogue are the only way to stay on the right side of the people whose pictures, workflows and support built your business in the first place.

Why AI Enhancement Isn't Cheating in Wildlife Photography

Wildlife photography is something I'd love to do more of, but at the moment, time doesn't allow it. However, when I do get the chance to head out with a long lens to give it a go, I gain deep respect for what it takes to capture the shot.

That's why the debate around AI editing tools fascinates me.

Critics argue that tools like Topaz Gigapixel or AI sharpening "ruin" wildlife photography. If your lens wasn't long enough or your sensor didn't capture fine details, using AI to reconstruct them is cheating.

I disagree completely.

The soul of wildlife photography is being there. If you hiked to a remote location, endured harsh weather, and invested hours of patience to witness a specific behaviour, that has real value. That's the foundation of your photograph.

So why should using AI to overcome your gear's physical limitations invalidate your fieldwork?

AI enlargement or texture refinement doesn't fabricate what the animal did. When a predator chases prey, AI doesn't invent the event. It helps your image reflect what you actually witnessed. It bridges the gap between your equipment's constraints and the magnitude of the moment.

We obsess over the technical "purity" of raw files, but we should focus on the effort required to be standing in that field. Cameras are tools, and every tool has limits. If software rescues a once-in-a-lifetime encounter from being a blurry mess, that's a win.

The truth of wildlife photography isn't in the pixels. It's in the person willing to get cold, wet, and tired to document the natural world.

What's your take?

Does AI enhancement cross a line, or does the real work happen in the field?

I'd genuinely love to hear your perspective.

The Smartphone AI Photography Controversy: What's Really Going On?

The smartphone photography world is having a bit of an identity crisis right now, and it's forcing us all to ask an uncomfortable question: when does making a photo look better cross the line into just making stuff up?

Samsung's Moon Photo Fiasco

The whole thing kicked off properly in March 2023 when a Reddit user called ibreakphotos ran a brilliant experiment. They took a high-resolution moon photo, blurred it until you couldn't see any detail at all, stuck it on a monitor, and photographed it from across the room using a Samsung Galaxy phone's Space Zoom. What happened next was pretty shocking: Samsung's camera added crisp crater details that simply weren't there in the blurry image.

This wasn't your typical computational photography where the phone combines multiple frames to pull out hidden detail. Samsung was using a neural network trained on hundreds of moon images to recognise the moon and basically paste texture where none existed. The company more or less admitted this in their technical explanation, saying they apply "a deep learning-based AI detail enhancement engine" to "maximise the details of the moon" because their 100x zoom images "have a lot of noise" and aren't good enough on their own.

The controversy came back round in August 2025 when Samsung's One UI 8 beta revealed they were working to reduce confusion "between the act of taking a picture of the real moon and an image of the moon". In other words, they admitted their AI creates moon photos rather than capturing them.

Other Companies Are At It Too

Samsung isn't the only one playing this game. Huawei faced similar accusations with its P30 Pro back in 2019, using AI to enhance moon photography beyond what the camera actually saw. The pattern is pretty clear: smartphone manufacturers are using AI to make up for physical limitations that no amount of clever software can genuinely overcome.

Google's Approach to Reality

Google went in a slightly different direction with the Pixel 8 and 8 Pro, introducing "Best Take", a feature that swaps facial expressions between different photos in a sequence. If someone blinks or frowns in a group shot, the phone finds a better expression from other photos and drops it into your chosen image.

They also launched "Magic Editor", which lets you erase, move, and resize elements in photos (people, buildings, whatever) with AI filling in the gaps using algorithms trained on millions of images. These tools work on any photo in your Google Photos library, not just ones you've just taken.

Tech commentators called these features "icky", "creepy", and potentially threatening to "people's (already fragile) trust of online content". Google's Isaac Reynolds defended the approach by saying that "people don't want to capture reality, they want to capture beautiful images" and calling the results "representations of a moment" rather than documentary records.

Photographers Are Fighting Back

The controversy has created what some observers call a "perfection paradox". As AI became capable of churning out flawless imagery at industrial scale in 2025, perfection itself lost its appeal. Social feeds filled up with technically immaculate visuals, but the images actually getting attention were the ones showing signs of real human touch.

Professional photographers responded by deliberately embracing film grain, motion blur, quirky colours, accidental flare, and cameras with intentional limitations. The message is clear: authenticity and imperfection have become the things that set you apart in an AI-saturated landscape.

One photographer noted that when clients were offered choices between AI-crafted footage and work shot by humans with clear creative perspectives, they "still gravitated to the latter". Despite AI's technical achievements, there's still a "gap between technological capability and cultural readiness".

The Trust Problem

The fundamental issue is that smartphone manufacturers market these AI enhancements as camera capabilities without clearly telling users when AI is manufacturing details rather than capturing them. Samsung's moon photos showcase this perfectly. Users think they've captured incredible detail through superior hardware and processing, when actually the phone has just overlaid trained data.

Professor Rafal Mantiuk from the University of Cambridge explained that smartphone AI isn't designed to make photographs look like real life: "People don't want to capture reality. They want to capture beautiful images". However, the physical limitations of smartphones mean they rely on machine learning to "fill in" information that doesn't exist in the photo, whether that's for zoom, low-light situations, or adding elements that were never there.

What's Happening Next

There's growing pressure on the industry to make 2026 what some are already calling "the year of AI transparency". People are demanding that manufacturers like Samsung, Apple, and Google disclose when and how AI is manipulating their photos.

Google has started responding with detection tools, rolling out AI detection capabilities through Gemini that can spot artificially generated photos using hidden SynthID watermarks and C2PA metadata. These watermarks stay detectable by machines whilst remaining invisible to human eyes, surviving compression, cropping, and colour adjustments. The system analyses images on-device without sending data to external servers.
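To make the idea concrete: SynthID itself is proprietary and invisible to simple inspection, but C2PA provenance data is embedded as plainly labelled metadata inside the image file, which means even a crude byte scan can tell you whether a manifest is present at all. The sketch below is an illustrative assumption, not Google's detector and not a real verifier (it performs no cryptographic signature check); it simply looks for the ASCII `c2pa` signature that C2PA manifest boxes carry.

```python
def looks_like_c2pa(data: bytes) -> bool:
    """Crude heuristic check for an embedded C2PA manifest.

    C2PA content credentials are stored in labelled metadata boxes
    inside the image file, and those labels include the ASCII string
    'c2pa'. Scanning the raw bytes for that signature flags files that
    carry a manifest, but it proves nothing about whether the manifest
    is valid or who signed it -- that requires a real C2PA verifier.
    """
    return b"c2pa" in data


# Demo with synthetic byte blobs (not real image files):
plain = b"\xff\xd8\xff\xe0" + b"\x00" * 32 + b"\xff\xd9"       # bare JPEG-like bytes
tagged = b"\xff\xd8\xff\xeb" + b"jumbc2pa-manifest" + b"\xff\xd9"  # fake marker segment

print(looks_like_c2pa(plain))   # False: no provenance metadata found
print(looks_like_c2pa(tagged))  # True: 'c2pa' signature present
```

In practice you would hand the file to a proper C2PA verification tool rather than grepping bytes, but the point stands: C2PA is designed to be machine-checkable, which is exactly why it survives as a transparency mechanism where marketing claims don't.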

Samsung, meanwhile, continues embracing AI integration. They recently published an infographic declaring that future cameras "will only get smarter with AI" and describing their camera as "part of the intuitive interface that turns what users see into understanding and action". This language notably sidesteps the authenticity questions that plagued their moon photography feature.

The Cultural Pushback

Perhaps most tellingly, younger users are increasingly seeking cameras that produce "real" and "raw" photos rather than AI-enhanced imagery, driving a resurgence of early-2000s compact digital cameras. This represents a rebellion against smartphone AI manipulation and a genuine desire for photographic authenticity.

The controversy forces a broader reckoning about what photography means in the AI era. As one analysis noted, 2025's deeper story wasn't simply that AI improved, it was "the confrontation it forced: what counts as real, what counts as ours, and what creativity looks like when machines can mimic almost anything".

The Bottom Line

The core issue is straightforward: smartphone manufacturers are using AI to create photographic details that cameras never actually captured, then marketing these capabilities as camera performance rather than AI fabrication.

Companies haven't clearly disclosed when AI is manufacturing versus enhancing, which is eroding trust in smartphone photography. Real photographers are differentiating themselves by embracing authenticity and imperfection as AI floods the market with technically perfect but soulless imagery.

And 2026 is shaping up as a pivotal year for AI transparency demands and authenticity verification tools.

This controversy represents more than just technical debates. It's fundamentally about trust, authenticity, and what we expect from our photographic tools in an increasingly AI-mediated world.

Picture This - A Musical Gift 🎸

Last Friday I was left completely speechless!

I logged in to a live video chat to join members of The Photography Creative Circle for our weekly coffee hour, and immediately noticed there seemed to be more members present than usual … way more.

Shortly after logging in I found out why, as member and dear friend Jean-François Léger began reading out something he’d prepared …

Glyn, In the spirit of the holiday season, we have a surprise for you today.

About six months ago, you shared a vision with us by creating this Photographic Creative Circle. At first, we all joined to learn from you, to master our cameras and refine our post-processing skills. But very quickly, something much deeper began to take shape.

It has become a place where we share our lives, celebrate our successes, and support one another through difficult times. Photography, in the end, became the beautiful pretext for us to become true friends.

You laid the foundation for this community; now this community wanted to create something for you that gives full meaning to the word 'community.'

Glyn, this is our way of saying a big thank you for the commitment, the generosity and the tremendous work you’ve done for all of us.

So Picture this!

And this is what I was presented with …

Written, recorded and edited by Jean-François, with contributions by other members of the community, including two in particular who have suffered traumatic losses in their families in recent weeks … this blew me away!

Such an incredible gift that I will treasure forever … and be playing over and over again ❤️

Catching the New Year's Day Sunrise 2026 ☀️

Got up early and popped down to the local beach to photograph the sunrise, and Mother Nature did not disappoint 😍

Happy New Year 🎉

Fuji X-T5
Fuji 18mm f/1.4
2.5 sec @ f/11, ISO 125

NiSi 3 Stop JetMag Pro ND Filter

Benro Rhino Carbon Fibre Tripod

Images below captured on my iPhone 17 Pro Max using the Leica Camera App and the Greg WLM B&W setting…