A.I.

Evoto's AI Headshots: When Your Favourite Tool Turns Against You

Evoto's AI headshot generator has become a cautionary tale about how quickly an AI company can burn through the trust of the very professionals who helped build its reputation.

When your retouching app becomes a rival

At Imaging USA 2026 in Nashville, portrait and headshot photographers discovered that Evoto had been quietly running a separate "Online AI Headshot Generator" site. The service let anyone upload a selfie and receive polished, corporate-style portraits, with marketing that openly pitched it as a cheaper, easier alternative to booking a photographer.

This wasn't a hidden experiment tucked away behind a login. The headshot generator had a public URL, example images, an FAQ and a clear path from upload to final "professional" headshot. For photographers who had built Evoto into their workflow, it felt like discovering that a trusted retouching assistant had quietly set up shop down the road and started undercutting them.

Why Evoto's role made this sting

Evoto built its identity as an AI-powered retouching and workflow tool aimed squarely at professional photographers, especially those shooting portraits, headshots and weddings. The pitch was straightforward: let the software handle the tedious stuff like skin smoothing, flyaway hairs, glasses glare, background cleanup and batch retouching so photographers can focus on directing and shooting.

That positioning worked. Photographers paid for it, used it on paid client work, recommended it in workshops and videos, and sometimes became ambassadors or power users. The unspoken deal was that Evoto would stay in the background, supporting human photographers rather than trying to replace them. A consumer-facing headshot generator cut straight across that understanding.

What the headshot generator offered

The AI headshot tool followed a familiar pattern: upload a casual selfie, choose a style and receive cleaned-up headshots with flattering lighting, neat clothing and tidy backgrounds, ready for LinkedIn or company profiles. The examples looked very similar to the kind of "studio-style" work many Evoto customers already produce for corporate clients.

*Simulation Only; NOT the Evoto Interface

The wording is what really set people off. The marketing leaned heavily into cost savings, avoiding studio bookings, quick turnaround and "professional-looking" results without needing a photographer. Coming from a faceless tech startup, that would already be provocative. Coming from a tool that photographers had trusted with their files and workflows, it felt like a direct invitation for clients to pick AI over them.

For many creatives, this is the line that matters: AI that helps you deliver better work is one thing. AI that presents itself as your replacement is something else entirely.

Why photographers are so angry

Photographers' reactions centre on three main issues.

First is a deep sense of betrayal. People had paid into the Evoto ecosystem, uploaded thousands of client images and publicly championed the product. Learning that the same company had built a consumer brand aimed at undercutting them felt like discovering that their support had funded a tool designed to compete with them.

Second are concerns about training data. Photographers have pointed out that the look of the AI headshots seems very close to the kind of work Evoto users upload. Evoto now says its models are trained only on commercially licensed or purchased imagery, not on customer photos, but those reassurances arrived after the story broke and against a backdrop of widespread anxiety about AI scraping. Without long-standing, transparent policies on data use, many remain sceptical.

Third is the tone of the marketing. Promises of saving money, avoiding bookings and still getting "pro-quality" results read like a direct invitation for clients to choose a cheap AI pipeline instead of hiring a photographer. Photo Stealers captured the mood with a blunt "WTF: Evoto AI Headshot Generator" and reported photographers literally flipping off the Evoto stand at Imaging USA. The Phoblographer went further, calling the service an attempt to replace photographers with "AI slop" and questioning the claim that this was simply an innocent test.

The apology that didn't land

In response, Evoto posted a statement saying the headshot generator had "missed the mark", "crossed a line" and was being discontinued. The company framed it as a test of full image generation that strayed beyond the support role it wants to play, and promised that user images are not used to train its models, describing its protections as "ironclad" and its training data as licensed only.

On the surface, this sounds like the right approach: apology, cancelled feature, clearer explanation of data use. In practice, many photographers point out that a fully branded, public site with examples and a working workflow doesn't look like a small internal trial. Shutting down comments on the apology thread after a wave of criticism made it feel more like damage control than a genuine conversation with paying users.

Commentary from outlets such as The Phoblographer argues that the real problem is the direction Evoto appears to be heading. If a company plans to sell "good enough" AI portraits directly to end clients while also charging photographers for retouching tools, trust will be almost impossible to rebuild.

What photographers can learn from this

The Evoto story lands at a time when photographers are already rethinking their place in an AI-saturated world, from smartphone "moon shots" to generative backdrops and AI profile photos. Beyond the immediate anger, it points to a few practical lessons.

Treat AI tools as business partners, not just clever software. Pay attention to how they talk to end clients and where their roadmap is heading.

Ask clear questions about training data and future plans. You need to know if your uploads can ever be used for model training and whether the company intends to build services that compete with you.

Be careful about attaching your reputation to a brand. Discounts and referral codes matter less than whether the company's long-term vision keeps human photographers at the centre.

For AI companies in imaging, the message is equally direct. You cannot present yourself as a photographer-first platform while quietly testing products that encourage clients to bypass those same photographers. In a climate where trust is already thin, real transparency, clear boundaries and honest dialogue are the only way to stay on the right side of the people whose pictures, workflows and support built your business in the first place.

The Invisible Currency of AI 🀖

If you've been using AI tools for any length of time, you've probably hit a confusing moment. Maybe your chatbot suddenly "forgot" something important from earlier in the conversation. Or you opened Photoshop's Generative Fill and noticed a credit counter ticking down. Perhaps you glanced at an API pricing page and saw costs listed per "1K tokens."

When we interact with AI, we think in words and images. The AI thinks in tokens and credits. Understanding these two concepts makes it far easier to get better results and keep an eye on costs.

Part 1: How Text AI Actually "Reads"

When you type a prompt into a text-based AI like Claude, ChatGPT, or Gemini, it doesn't read your sentences the way a human does. Before processing your request, it breaks your text into small digestible pieces.

Those pieces are called tokens.

  • Simple words: Short, common words like "cat," "run," or "the" are often just one token.

  • Complex words: Longer or rarer words get split up, so "unbelievable" might become three tokens.

  • Punctuation and spaces: Commas, periods, and other symbols can also count as tokens, depending on how the model's tokeniser works.

A useful rule of thumb: 1,000 tokens equals roughly 750 English words.
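
If you're curious to see this in action, here's a minimal sketch using OpenAI's open-source tiktoken library. The tokeniser named below is just one example; every model family splits text slightly differently, so treat the counts as approximate.

```python
# pip install tiktoken
import tiktoken

# cl100k_base is one widely used OpenAI tokeniser; other models differ.
enc = tiktoken.get_encoding("cl100k_base")

for text in ["the cat ran", "unbelievable", "Hello, world!"]:
    tokens = enc.encode(text)
    pieces = [enc.decode([t]) for t in tokens]   # how the text was split
    print(f"{text!r} -> {len(tokens)} tokens: {pieces}")
```

Paste in a paragraph of your own writing and you'll see the 750-words-per-1,000-tokens rule of thumb is close, but never exact.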

The Context Window: Why AI Forgets

Every AI model has what's called a "context window." Think of it as a short-term memory bucket with a hard limit on how many tokens it can hold at once.

Everything from your conversation has to fit inside:

  • Your original instructions

  • Every question you've asked

  • Every answer the AI has given back

When you're deep into a long conversation or you've pasted in a massive document, this bucket fills up. Once it's full, the model has to discard the oldest tokens to make room for new ones. That's why your AI might suddenly ignore a rule you set at the beginning of the chat. It's not being stubborn, it's just run out of room.
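
Chat apps manage this for you, but if you ever build on an API directly, the usual fix is to trim or summarise the oldest turns before they overflow the window. Here's a rough sketch of the trimming idea; the count_tokens helper and the token budget are illustrative assumptions, not any provider's real API.

```python
def count_tokens(text: str) -> int:
    # Stand-in estimate: real code would use the model's own tokeniser.
    return max(1, len(text) // 4)  # roughly 4 characters per token

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system prompt plus the newest messages that fit the budget."""
    system, rest = messages[0], messages[1:]
    kept, used = [], count_tokens(system["content"])
    for msg in reversed(rest):              # walk back from the newest turn
        cost = count_tokens(msg["content"])
        if used + cost > budget:
            break                           # oldest turns get dropped first
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))

history = [
    {"role": "system", "content": "Reply in UK English and keep answers short."},
    {"role": "user", "content": "Here's a 50-page transcript... " * 200},
    {"role": "assistant", "content": "Summary: ..."},
    {"role": "user", "content": "Now turn that into a short email."},
]
print(trim_history(history, budget=300))   # the huge transcript falls out first
```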

Input vs Output: The Cost of Thinking

From a billing perspective, not all tokens are created equal. Most API pricing separates input tokens from output tokens.

  • Input tokens (reading): What you send in: prompts, documents, instructions.

  • Output tokens (writing): What the AI generates: summaries, code, emails.

Output tokens typically cost more than input tokens because generating new text requires more computation than simply reading and encoding it. This isn't universal, but it's a safe working assumption: longer requested outputs usually cost more on paid APIs.
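
To make that concrete, here's a tiny back-of-the-envelope calculator. The per-million-token prices are placeholders for illustration, not any provider's actual rate card, so swap in the numbers from your own provider's pricing page.

```python
# Hypothetical prices in USD per million tokens -- NOT a real rate card.
PRICE_PER_MILLION = {"input": 3.00, "output": 15.00}

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough cost of one request from its input and output token counts."""
    return (input_tokens * PRICE_PER_MILLION["input"]
            + output_tokens * PRICE_PER_MILLION["output"]) / 1_000_000

# A pasted 50-page document with a short reply, versus a short prompt
# that asks for a very long answer.
print(f"Big input, small output: ${estimate_cost(60_000, 500):.4f}")
print(f"Small input, big output: ${estimate_cost(500, 60_000):.4f}")
```

Even with made-up prices, the asymmetry is obvious: the long answer costs several times more than the long prompt.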

Part 2: The Visual Artist: Credits

Text models run on tokens, but what about Photoshop's Generative Fill or image tools like Midjourney and Flux? These systems might use tokens behind the scenes, but what you see is usually a simpler "per-generation" approach.

Here's a helpful analogy: when you ask a text model to write an essay, it's like a taxi meter that keeps running until you reach your destination. When you ask for an image, you're ordering a fixed item: "one 1024Γ—1024 image." So billing becomes straightforward: one generation, one charge.

How Adobe Uses Generative Credits

Photoshop's Generative Fill and other Firefly-powered features run on "Generative Credits." Your Adobe plan gives you a monthly allowance, and each generative action consumes part of that allowance.

  • Most standard features use one credit per generation.

  • More intensive features or very large outputs can consume multiple credits, according to Adobe's rate cards.

  • The length of your prompt usually doesn't affect the credit cost. What matters is the type of operation and sometimes the output size.

Other visual platforms work similarly, often charging different amounts based on resolution or quality settings, since bigger, higher-quality images demand more computing power.
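
Credit budgeting is napkin maths, but if you like to see it written down, here's the same arithmetic as a short sketch. The credit costs and monthly allowance are made-up placeholders, not Adobe's actual rates; check your own plan's rate card for the real numbers.

```python
# Hypothetical credit costs per generation -- placeholders, not Adobe's rates.
CREDITS_PER_JOB = {
    "standard_fill": 1,    # most standard features: one credit per generation
    "large_render": 4,     # intensive or very large outputs: multiple credits
}

def credits_needed(batch: dict) -> int:
    """Total credits for a batch like {'standard_fill': 40, 'large_render': 5}."""
    return sum(CREDITS_PER_JOB[kind] * count for kind, count in batch.items())

monthly_allowance = 500                    # illustrative plan allowance
batch = {"standard_fill": 40, "large_render": 5}
used = credits_needed(batch)
print(f"This batch uses {used} credits; {monthly_allowance - used} left this month.")
```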

Why Some AI Uses More "Juice"

Depending on what you're doing, you can burn through your limits faster than expected.

For text AI (tokens):

  • Long inputs: Pasting a 50-page transcript into your prompt can devour a huge chunk of both your context window and token budget before the model even starts responding.

  • Long outputs: Asking for a detailed, multi-page answer consumes far more output tokens, and therefore more compute and money, than requesting a tight one-paragraph summary.

For image AI (credits):

  • Quantity: Generating 10 variations costs roughly 10 times as many credits as generating one image, because each generation is its own job.

  • Resolution and complexity: Higher resolutions or video-like outputs often consume more credits per job, reflecting the extra server work required.

The Takeaway

You don't need to be a mathematician to use AI well, but understanding tokens and credits makes you a far better pilot.

If your text AI starts getting confused or ignoring earlier instructions, you've likely pushed past its context window. Try trimming or summarising earlier content to free up space. If you're worried about image costs, invest time in a clear, targeted prompt and sensible resolution settings. Get what you need in as few generations as possible, rather than brute-forcing dozens of variations.

Master these invisible currencies, and you'll get better results while keeping your costs under control.



ADOBE just changed ChatGPT FOREVER πŸ’₯ But Why???

Adobe has just rolled out one of the most significant updates we've seen in a while by integrating Photoshop, Express, and Acrobat directly into ChatGPT. And here's the kicker: these features are currently free to use, no Creative Cloud subscription required.

Why This Matters

This is a fascinating strategic play. ChatGPT has roughly 800 million active users, many of whom recognize the Photoshop brand but find the actual software intimidating or prohibitively expensive. By embedding these tools inside a chat interface where people already feel comfortable, Adobe is dismantling that barrier to entry. They're essentially converting casual users into potential creators through familiarity and ease of use.

What the Integration Actually Does

The capabilities are surprisingly robust for a chat-based tool. You can upload an image and ask Photoshop to handle basic retouching or apply artistic styles. The masking feature is particularly impressive, intelligently selecting subjects without manual input. Adobe Express generates social media posts or birthday cards from simple text prompts, while the Acrobat integration handles PDF merging and organization without leaving the conversation.

The Bigger Picture

Make no mistake: this isn't replacing the full desktop software. It's a streamlined, accessible version optimized for speed and convenience. Users who need granular control or heavy processing power will still require the complete applications.

This is a textbook freemium strategy. Adobe is giving users a taste of their engine, creating a natural upgrade path. Once someone hits the limitations of the chat interface, they're just one click away from the full experience. It's a smart way to widen the funnel and meet users exactly where they are.



Why I Love AI ... and Why I'm Still the Boss 😎

When I think back to the not-too-distant past, it's wild how much has changed. An editing task that used to take me thirty minutes (hair selections, cloning, backdrop cleanup) now takes about thirty seconds, and THIS is why I'm excited about what AI brings to the party.

AI: The Best Assistant I Never Had to Train

Look, nobody got into photography because they dreamed of removing sensor dust or spending three hours meticulously masking flyaway hairs. That stuff isn't the art. It's just the cleanup crew work that has to happen before we get to make our magic.

And honestly? AI is brilliant at that stuff.

Think of it this way: AI handles the grunt work so I can focus on the vision. It's like having an incredibly fast, never-complaining assistant who's amazing at the boring bits and then steps aside when the real creative decisions need to be made.

Oh and those decisions that need to be made? Those are still mine.

Where AI Stops and I Begin

There's a line, though, and it's one I think about a lot.

AI can smooth skin, relight faces, swap out skies. It can create a technically "perfect" image in seconds. But here's the thing: perfect isn't always interesting. Sometimes those ultra-polished images feel a bit ... lifeless. Like they're missing something human.

The magic happens in the choices we make. The colour grade that shifts the whole mood. The decision to keep a little texture in the skin because real people aren't porcelain. The way we balance light and shadow to tell the story we want to tell.

That's where our style lives. That's the part AI can't do because it doesn't have taste, intuition, or a point of view.

It's a tool. A really good tool. But we're the ones holding it.

Getting My Life Back

The biggest win isn't just sharper images or cleaner backgrounds. It's time.

AI is giving me hours back. Hours I used to spend in my office, squinting (I’ve now got new glasses) at a monitor, doing repetitive tasks that made me question my life choices.

Now I can use that time for the stuff that actually matters: shooting more personal projects, experimenting with new techniques, or better still, spending time with my wife and friends. Making memories instead of just editing them. Living the life that's supposed to inspire the work in the first place.

We shouldn't fear these tools. We should embrace them (smartly) so we can get back to doing what we love.

Where Do You Stand?

I'm curious how you're navigating this shift.

Are you using AI tools to speed up your workflow? Or are you still figuring out where the line is between "helpful assistant" and "too much automation"?

I'd love to hear how you're finding the balance in the comments below.



🀖 4 Key Insights from Google's Gemini 3 Launch

4 Key Insights from Google's Gemini 3 Launch That Go Beyond the Numbers

With new AI models arriving every week, it's hard to tell which announcements actually matter. Many releases simply offer minor improvements and higher test scores, leaving us wondering what it all means for everyday use.

Google's Gemini 3 launch is different. Beyond the impressive benchmarks lie four important changes that show where AI technology is really headed. This article highlights the most significant developments that point to a major shift in both what AI can do and how we interact with it.

Insight 1: From Assistant to Thinking Partner

The biggest change in Gemini 3 isn't just improved performance. It's a deeper level of understanding that transforms how we interact with AI. Google designed the model to "grasp depth and nuance" so it can "peel apart the overlapping layers of a difficult problem."

This creates a noticeably different experience. Google says Gemini 3 "trades clichΓ© and flattery for genuine insight, telling you what you need to hear, not just what you want to hear." This represents an important evolution in how we work with AI. Instead of a simple tool that answers questions, it becomes a real collaborative partner for tackling complex challenges and working through difficult problems.

This new relationship demands more from us as users. When your main tool acts like a critical colleague rather than an obedient helper, you need to step up your own thinking and collaboration skills to get the most out of it.

Google CEO Sundar Pichai put it this way:

It's amazing to think that in just two years, AI has evolved from simply reading text and images to reading the room.

Insight 2: Deep Think Mode Brings Specialized Reasoning

Google introduced Gemini 3 Deep Think mode with this launch. This enhanced reasoning mode is specifically designed to handle "even more complex problems." The name isn't just marketing. It's backed by real performance improvements on some of the industry's toughest tests.

In testing, Deep Think surpasses the already powerful Gemini 3 Pro on challenging benchmarks. On "Humanity's Last Exam," it achieved 41.0% (without tools), compared to Gemini 3 Pro's 37.5%. On "GPQA Diamond," it reached 93.8%, beating Gemini 3 Pro's 91.9%.

This matters because it shows a future where AI isn't a single, universal intelligence. Instead, we're seeing the development of specialized "modes" for different thinking tasks. This isn't just about raw power. It's a strategic approach to computational efficiency, using the right amount of processing for each specific task. This is crucial for making AI sustainable as it scales up.

Insight 3: Antigravity Changes How Developers Build Software

Perhaps the most forward-thinking announcement was Google Antigravity, a new "agentic development platform." This represents a fundamental change in how developers work with AI, aiming to "transform AI assistance from a tool in a developer's toolkit into an active partner."

What makes Antigravity revolutionary is what it can actually do. Its AI agents have "direct access to the editor, terminal and browser," letting them "autonomously plan and execute complex, end-to-end software tasks." The potential impact is huge. Going far beyond simple code suggestions, it completely redefines the developer's role. Instead of writing every line of code, developers become directors of AI agents that can build, test, and validate entire applications independently.

Insight 4: AI Agents Can Now Handle Long-Term Tasks

A major challenge for AI has always been "long-horizon planning." This means executing complex, multi-step tasks over extended periods without losing focus or getting confused. Gemini 3 shows a real breakthrough here.

The model demonstrated its abilities on "Vending-Bench 2," where it managed a simulated vending machine business for a "full simulated year of operation without drifting off task." This capability translates directly to practical, everyday uses like "booking local services or organizing your inbox."

This new reliability over long sequences of actions is the critical piece that could finally deliver on the promise of truly autonomous AI. It marks AI's evolution from a "single-task tool" you use (like a calculator) to a "persistent process manager" you direct (like an executive assistant who handles your projects for months at a time).

Looking Ahead: A New Era of AI Interaction

These aren't isolated features. They're the building blocks for the next generation of AI. The main themes from the Gemini 3 launch (collaborative partnership, specialized reasoning, agent-first development, and long-term reliability) all point toward a future that goes beyond simple prompts and responses.

The focus has clearly shifted from basic question-and-answer interactions to integrated, autonomous systems built for real complexity. As AI moves from a tool we command to a partner we collaborate with, we'll need to adapt how we think, work, and create alongside it.



Interviewed by London Camera Exchange at the South West Photo Show

At the recent South West Photo Show in Exeter, it was great to be asked to sit down and chat with London Camera Exchange's Pete Rawlinson …

In this exclusive interview, Pete catches up with the incredibly talented Glyn Dewis at the South West Photo Show. We delve into Glyn’s moving and powerful 39-45 Portraits Project – a series honouring World War II veterans through timeless portraiture – and explore the stories and inspiration behind his work. Glyn also shares his thoughts on the evolving world of photography, including how he's embraced AI editing tools to enhance his creative process while staying true to his signature style. Whether you're a photographer, history enthusiast, or simply love a great conversation, this is one not to miss!