AI

AI and Photography: It's Not the End of the World

There is a conversation happening in photography right now, and chances are you’ve heard it. Whether it is in comment sections, Facebook groups, or at events, the same question keeps coming up: is AI going to kill photography?

I get it. When generative AI started producing photorealistic images from a text prompt, the alarm bells rang, and not entirely without reason. If a computer can conjure a dramatic seascape or a perfectly lit portrait from a few typed words, where does that leave those of us who actually pick up a camera?

So here’s what I believe: I don’t think it’s the threat people fear it is. For some areas of professional, paid photography, perhaps. But certainly not for enthusiasts, and not for anyone who shoots because they genuinely love it.

We’ve Been Here Before

Photography has always had to adapt. Digital replaced film, and people said it would ruin photography. Smartphones put a camera in everyone's pocket, and people said that would ruin it too. It didn’t. If anything, more people are shooting now than ever before. According to CIPA shipment data and 2025 market reports, the hobbyist camera market made up well over two thirds of all digital camera sales, and the number of photography workshops and online courses has grown by more than 30 percent in recent years. People are not falling out of love with photography. They are falling deeper into it.

AI is the latest chapter in that same story. It’s a new tool arriving in an industry that has always evolved alongside new tools.

What AI Actually Is, and Is Not

Here’s the thing that often gets lost in the noise. AI, in the context of most photographers' day-to-day lives, is not generating fake images to replace yours; it’s quietly working inside the software you are already using.

Take Lightroom and Photoshop. Both are packed with AI-powered features now. Masking that would have taken me the best part of an hour a few years ago takes seconds. Removing a distracting element from the background of a portrait, reducing noise in a high ISO shot, selecting a subject with precision. These are the kinds of tasks that used to eat into your editing time without giving anything creative back. They were just tedious.

That is where I have found AI genuinely useful. Not as something that replaces my decisions, but as something that handles the mechanical stuff so I can focus on what I actually enjoy: developing the image, getting the look I had in my head when I pressed the shutter, making it feel the way I want it to feel. The creative part is still mine. AI just means I am not spending forty minutes (and more) doing fiddly selections to get there.

Yes, There Will Be Casualties

It would be dishonest to say AI has no impact on photography as a profession, because it does. Certain areas are already feeling it. Product photography is one. Generic stock imagery is another, and headshot photography is shifting too, with a growing number of AI applications now capable of producing professional-looking results at a fraction of the cost of hiring a photographer.

Will some people choose those options? Of course. But then, every industry has customers who will always gravitate towards the cheapest available option. Photography is not immune to that, and it never has been. There have always been clients who want results without paying for expertise. AI simply gives that segment of the market a new way to do what they were always going to do.

The clients worth having, though, tend to think differently. They understand the difference between a generated image and a photograph made by someone who knows what they are doing. They value the professionalism, the experience of working with a skilled photographer, and ultimately an image that could not have come from a prompt box. That market is not shrinking. If anything, as AI imagery becomes more widespread, it is becoming more discerning.

The Authenticity Factor

There is something interesting happening on the other side of the AI conversation. As AI-generated imagery has flooded the internet, audiences have started to crave the opposite. The photography trends emerging in 2026 centre on authenticity: real moments, real imperfections, real emotion. The slightly overcooked, hyper-polished aesthetic is losing its appeal. People want to see images that feel genuinely human.

That is actually great news for photographers, because the one thing AI cannot do, no matter how sophisticated it becomes, is be there. It cannot stand on a cold beach at six in the morning, read the light, time the wave, feel the composition before it happens. It cannot build a relationship with a portrait subject and find the moment where they forget the camera is there. It can produce images that look impressive on a screen, but impressive and meaningful are not the same thing.

Photography Is Not Just About the End Product

This is the point that gets missed entirely in the AI debate. When people talk about whether AI can replace photography, they tend to focus on the output. Can it produce a decent image? In some cases, yes. But photography has never really been just about the image at the end of it.

It is about being out in the world with a camera. It is the discipline of learning your craft, understanding light, making decisions in the moment. It is the feeling of nailing a shot you had been visualising for weeks. It is the connection you build with a subject during a portrait session. It is standing somewhere beautiful and choosing how to see it.

No AI can replicate that experience. And for the vast majority of photographers, enthusiast or professional, that experience is the whole point.

So Where Does That Leave You?

If you shoot because you enjoy it, AI changes very little about that. It might make some parts of the process quicker and easier, and used well, that is no bad thing. But it is not going to make the experience of making a photograph redundant.

Yes, the industry will keep changing. Some corners of it will shrink. But photography itself, the act of it, the craft of it, the joy of it, is not going anywhere. Do not be scared of AI; just don’t hand it the wheel either.

Photography is not dying. If anything, the conversation AI has started might just remind people why real photographs matter.

STOP using 10 Apps to Plan your Photography! (Do this instead)

The Problem with "Too Much" Information

We have an incredible amount of data at our fingertips these days. If you are planning a landscape or seascape trip, there are hundreds, maybe thousands of apps available. Honestly, that is part of the issue for me. There is just too much choice, and every day there seems to be a new app hitting the store. I never quite know which one to use for what.

While I still use a dedicated app to check the position of the sun, I have moved everything else over to AI.

How I Use AI as a Location Scout

It doesn't really matter which platform you prefer. I use Google Gemini, but you can do the exact same thing with ChatGPT, Claude, or Perplexity. The goal is to move away from checking ten different websites and instead have one single place that "scouts" the location for you.

I have set up something called a "Gem" in Gemini (or a Custom GPT if you use ChatGPT). I call it my Seascape Photography Planner. All I have to do is tell it where I am going and when, and it does the rest.

For example, if I tell it I am heading to Godrevy Lighthouse this coming Saturday, within seconds it populates the screen with:

  • Weather conditions: Temperature, precipitation, and wind speeds.

  • Lighting: Sunrise, sunset, and golden hour times.

  • The Ocean: Tide times and tide heights.

  • Logistics: Where to park, how to pay (cash or app), and where to find food or fuel nearby.

  • Safety: The nearest hospital and contact details for the police.

  • Drone Info: Nearest airfield and air traffic control contacts, just in case a drone goes rogue.

Setting It Up Yourself

The process is incredibly simple. You start by asking the AI to find this information for a specific trip. I often use a dictation app called Whisper to just speak my request into the text box.

Once the AI gives you a great result, you ask it one simple question: "Can you now create a system prompt from this so that the next time you can give me all of this information, but all I need to tell you is where I'm going and when?"

The AI will then write a "formula" for itself. It might say something like, "You are an expert photography location scout. Your goal is to provide a comprehensive, data-driven briefing."

You simply copy that text, go into your settings to create a new "Gem" or "Custom GPT," and paste those instructions in. Give it a name, save it, and you are done.
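To make this concrete, here is the rough shape such a saved instruction might take. The wording below is purely illustrative; the system prompt your AI writes for itself will differ:

```
You are an expert photography location scout.
When I give you a location and a date, return a briefing with:
- Weather: temperature, precipitation, wind speed
- Light: sunrise, sunset, and golden hour times
- Ocean: tide times and heights
- Logistics: parking, payment options, nearby food and fuel
- Safety: nearest hospital and police contact details
- Drone: nearest airfield and air traffic control contacts
Ask no follow-up questions unless the location is ambiguous.
```

Once that lives inside a Gem or Custom GPT, "Godrevy Lighthouse, this Saturday" is all you ever need to type.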

The Real-World Benefit

The best part about this is that it syncs to your phone. On the morning of a shoot, I can quickly check the latest updates while I'm having my coffee. I have even added "road conditions" to my prompt lately so I know if there are any last-minute diversions or roadblocks before I set off.

It is a massive time-saver. Instead of bouncing between weather apps, tide tables, and Google Maps, I get a tailored briefing in one go. It has definitely increased my success rate, but more than that, it has made the whole experience of being out in the field much more relaxed.

🪦 Is Adobe Killing Lightroom with Topaz?

A few days ago I posted a video about the latest Lightroom update, version 9.2, and one of the big headlines was the new generative upscale feature powered by Topaz Gigapixel. A lot of people were excited about it, and honestly, so was I at first. But now that the dust has settled, I've had a chance to really sit with it, and I'll be straight with you: something feels off.

I've been going through your comments and doing a lot of thinking, and there are a few things here that I just can't get past.

Are We Really Going Backwards on Non-Destructive Editing?

The non-destructive workflow is one of the things that makes Lightroom so brilliant. We've reached a point where we can do masking, lighting adjustments, special effects, all without ever leaving the app or touching the original file. It's genuinely impressive how far it's come.

But this Topaz integration throws a spanner in the works. It basically puts a full stop on your edit and spits out a brand new file, which is a destructive process. And here's the thing: we've been here before. Remember when Super Resolution had the same problem? Adobe actually listened back then and sorted it so we weren't drowning in extra DNG files. So why are we going in the opposite direction now?

Innovation or Just Outsourcing?

Adobe is supposed to be leading the way in creative software. They already have Super Resolution, and it works well. So rather than pushing that further, say, allowing a proper 4x upscale, they've decided to hand it off to a third party instead.

That doesn't feel like innovation to me. It feels like taking the easy route. Especially when you consider the price increases we've seen recently. You'd expect that extra revenue to go towards building better, more seamless tools, not just bolting on someone else's technology and calling it a feature.

The Credits Problem

This is the bit that really gets me. The version of Topaz built into Lightroom is incredibly stripped back compared to the standalone app. There's no preview, barely any controls, and it costs you generative credits every single time you use it.

Compare that to the standalone Topaz app, where you get a proper preview, far more control, and unlimited upscales as part of your monthly subscription. In Lightroom, you're essentially guessing and spending credits to find out whether the result is even usable. It makes you wonder whether this is genuinely designed to improve your workflow or whether it's just another way to drive credit sales.

Let's Not Lose Sight of What Matters

I'm a big fan of AI and what it can do for our editing. It can save time, open up new possibilities, and make certain jobs a lot easier. But it should be a tool that supports your creativity, not a shortcut that sidesteps it.

Lightroom has always been a platform I've championed, and I still believe in what it can be. But moves like this make it harder to recommend with a straight face. I don't want to see it turn into a hub for third-party plugins that slowly bleed you dry with credit charges.

I've built my career on Adobe software and I'll always back it when it deserves it. But I also think it's important to say something when things don't feel right.

So Adobe, if you're paying attention: we know what you're capable of. Give us tools that respect the way we work, rather than features that complicate it. And in the meantime, if I run out of credits, I'll quite happily go back into Photoshop and rely on the traditional skills that have served me well for years. AI is a brilliant tool. But it's not the whole craft.

Reality vs Photoshop - Is Faking It Cheating? 🤷‍♂️

Car photography always looks that little bit more dramatic when there's a wet road reflection underneath the vehicle. But what do you do when the road is bone dry? In this guide, I'll walk you through two ways to fake a puddle reflection in Photoshop -- one traditional, one powered by AI -- and then I'll leave you with a question worth thinking about.

Method One: The Manual Approach

Step 1: Select the Car

Start by grabbing the Object Selection tool from the toolbar. In the options bar at the top of the screen, make sure the mode is set to Cloud for the best possible result, then click Select Subject. Photoshop will do a surprisingly good job of selecting the car in just a moment or two.

Step 2: Copy the Car onto Its Own Layer

With your selection active, press Command + J (Mac) or Control + J (Windows) to copy the car up onto a new layer. If you toggle every other layer off, you should see just the isolated car sitting cleanly on a transparent background.

Step 3: Flip It Upside Down

Go to Edit > Transform > Flip Vertical. This flips the car layer to create the basis of your reflection. Now grab the Move tool, hold down Shift (to keep movement perfectly vertical) and drag the flipped car downwards until the tyres of both the original and the reflection are just touching.

If things look slightly off-angle, go to Edit > Free Transform, move your cursor just outside the bounding box until you see the rotation cursor, and give it a gentle nudge until it lines up properly.

Step 4: Add a Black Layer Mask

Rename this layer "Reflection" to keep things tidy. Then, holding down Option (Mac) or Alt (Windows), click the Layer Mask icon at the bottom of the Layers panel. This adds a black mask that hides the layer entirely -- which is exactly what you want for now.

Step 5: Draw the Puddle Shape

Select the Lasso tool and make sure you click directly on the layer mask thumbnail (you should see a white border appear around it, confirming it's active). Now draw a rough, freehand puddle shape beneath the car's tyres. It doesn't need to be perfect; natural-looking and irregular is actually better here.

Step 6: Fill with White to Reveal the Reflection

Go to Edit > Fill, set the contents to White, and click OK. The reflection will now appear only within the puddle shape you drew.

Step 7: Soften the Edges

Zoom in and you'll notice the puddle edge looks very sharp and unnatural. To fix that, go to Filter > Blur > Gaussian Blur and apply just a small amount -- around 3 pixels is usually enough. This softens the boundary and helps the reflection blend into the ground convincingly.

Finally, you can reduce the opacity of the Reflection layer slightly to make the whole thing look a little more subtle and true to life.

Method Two: Using Adobe Firefly's Generative Fill

If you want a quicker and arguably more realistic result, Photoshop's AI tools can do a remarkable job here.

Step 1: Load the Puddle Selection

Hold Command (Mac) or Control (Windows) and click directly on the layer mask from your first reflection layer. This loads the puddle shape back as an active selection, saving you from having to draw it again.

Step 2: Select the Background Layer

Click on the main image layer, so that Generative Fill works on the background rather than the reflection layer.

Step 3: Run Generative Fill

In the contextual taskbar, click Generative Fill and type a prompt along the lines of: a reflection of car in puddle of water. For the AI model, select Firefly (specifically the Firefly Built and Expand model released in January 2026). If you're on a Creative Cloud Pro account, this won't cost you any credits -- whereas models like Flux or Nano Banana can use anywhere between 20 and 30 credits per generation.

Click Generate.

Step 4: Choose Your Favourite Variation

Firefly will produce three variations for you to compare. Have a look through them and pick the one that looks most convincing. You'll likely notice that the AI does something quite clever: it reflects the sky in the puddle on the far side of the car, just as real water would. Achieving that manually in Photoshop would take considerably more time and effort.

Which Method Should You Use?

For a quick-and-dirty result, the manual method works well and gives you full control. But for something that genuinely looks like a photograph taken on a wet road, the AI approach is hard to argue with -- particularly because of how naturally it handles the environmental reflections in the water.

A Question Worth Thinking About

Here's something to consider. When photographing that car, there were really two options: bring bottles of water to pour around the car and create a real puddle on the dry road, or add the reflection later in post-production, either manually or with AI.

Both approaches result in a reflection that wasn't originally there. The only difference is when in the process you add it.

So what do you think -- is there a meaningful ethical difference between physically creating something on location and digitally adding it afterwards? When it comes to reflections specifically, does it matter?

Let me know your thoughts in the comments below.

Why AI Enhancement Isn't Cheating in Wildlife Photography

Wildlife photography is something I'd love to do more of, but at the moment, time doesn't allow it. However, whenever I do get the chance to head out with a long lens, it deepens my respect for what it takes to capture the shot.

That's why the debate around AI editing tools fascinates me.

Critics argue that tools like Topaz Gigapixel or AI sharpening "ruin" wildlife photography: if your lens wasn't long enough or your sensor didn't capture fine details, they say, using AI to reconstruct them is cheating.

I disagree completely.

The soul of wildlife photography is being there. If you hiked to a remote location, endured harsh weather, and invested hours of patience to witness a specific behaviour, that has real value. That's the foundation of your photograph.

So why should using AI to overcome your gear's physical limitations invalidate your fieldwork?

AI enlargement or texture refinement doesn't fabricate what the animal did. When a predator chases prey, AI doesn't invent the event. It helps your image reflect what you actually witnessed. It bridges the gap between your equipment's constraints and the magnitude of the moment.

We obsess over the technical "purity" of raw files, but we should focus on the effort required to be standing in that field. Cameras are tools, and every tool has limits. If software rescues a once-in-a-lifetime encounter from being a blurry mess, that's a win.

The truth of wildlife photography isn't in the pixels. It's in the person willing to get cold, wet, and tired to document the natural world.

What's your take?

Does AI enhancement cross a line, or does the real work happen in the field?

I'd genuinely love to hear your perspective.

Choosing the Right AI Model in Photoshop: A Credit-Smart Guide

If you've opened Photoshop recently, you've likely noticed that Generative Fill has received a significant upgrade. The platform now offers multiple AI models to choose from, each with distinct capabilities. However, there's an important consideration: these models vary considerably in their generative credit costs.

Understanding the Credit Structure

Adobe's proprietary Firefly model requires only 1 credit per generation, making it the most economical option. The newer partner models from Google (Gemini) and Black Forest Labs (FLUX), however, are classified as premium features and consume credits at a substantially higher rate. Depending on the model selected, you can expect to use between 10 and 40 credits per generation.

For users looking to maximize their monthly credit allocation, selecting the appropriate model for each task becomes an essential consideration.

Firefly: Your Go-To Workhorse (1 Credit)

Firefly serves as the default option and remains the most practical choice for everyday tasks. At just 1 credit per generation, it offers excellent efficiency for routine editing work. Whether you need to remove unwanted objects, extend backgrounds, or clean up imperfections, Firefly handles these tasks effectively.

Additionally, it benefits from full Creative Cloud integration, Adobe's commercial-use guarantees, and Content Credentials support. For standard production workflows, it's difficult to find a more cost-effective solution.

The Premium Players

The partner models represent a significant increase in cost, but they also deliver enhanced capabilities. Adobe operates these models on external infrastructure, which accounts for their higher credit requirements. These models excel at handling complex prompts, challenging lighting scenarios, and situations requiring exceptional realism or fine detail.

The credit costs break down as follows:

  • Gemini 2.5 (Nano Banana): 10 credits

  • FLUX.1 Kontext [pro]: 10 credits

  • FLUX.2 Pro: 20 credits

  • Gemini 3 (Nano Banana Pro): 40 credits

All of these models draw from the same credit pool as Firefly, but they deplete it considerably faster.
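Because everything draws from one pool, it is worth doing the arithmetic before a big editing session. A minimal sketch, using the credit figures from the table above (the model keys are my own shorthand, not Adobe identifiers):

```python
# Credits per generation, as listed in the table above.
CREDIT_COST = {
    "firefly": 1,
    "gemini-2.5": 10,
    "flux-1-kontext-pro": 10,
    "flux-2-pro": 20,
    "gemini-3": 40,
}

def credits_used(generations):
    """Total credits for a list of (model, count) pairs drawn from one pool."""
    return sum(CREDIT_COST[model] * count for model, count in generations)

# Ten Firefly cleanups plus two FLUX.2 Pro renders:
print(credits_used([("firefly", 10), ("flux-2-pro", 2)]))  # 50
```

Fifty credits for a dozen generations shows how quickly the premium models eat into a monthly allowance that Firefly alone would barely dent.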

When to Use What

Gemini 2.5 (Nano Banana) occupies a middle position in the model hierarchy. It performs well when Firefly struggles with precise prompt interpretation, particularly for complex, multi-part instructions. This model also excels at maintaining consistent subject appearance across multiple variations.

FLUX.1 Kontext [pro] specialises in contextual integration. It analyses existing scenes to match perspective, lighting, and colour accurately. When adding new elements to complex photographs, this model provides the most seamless integration, making additions appear native to the original image.

FLUX.2 Pro elevates realism significantly. It generates outputs at higher resolution (approximately 2K-class) and demonstrates particular strength with textures. Areas that typically present challenges, such as skin, hair, and hands, appear notably more natural. For portrait and lifestyle photography requiring professional polish, the 20-credit investment may be justified.

Gemini 3 (Nano Banana Pro) represents the premium tier at 40 credits per generation. This "4K-class" option addresses one of Firefly's primary limitations: text rendering. When projects require legible signage, product labels, or user interface elements, Nano Banana Pro delivers the necessary clarity.

A Practical Approach to Model Selection

  1. Default to Firefly (1 credit) for standard edits, cleanup tasks, and basic extensions

  2. Upgrade to Gemini 2.5 (10 credits) when improved prompt interpretation or likeness consistency is required

  3. Select FLUX.1 Kontext (10 credits) when lighting and perspective matching are priorities

  4. Deploy FLUX.2 Pro (20 credits) when realism and texture quality are essential

  5. Reserve Gemini 3 (40 credits) for situations requiring exceptional text clarity and fine detail

The guiding principle is straightforward: begin with the most economical option and upgrade only when project requirements justify the additional cost.

The Invisible Currency of AI 🤖

If you've been using AI tools for any length of time, you've probably hit a confusing moment. Maybe your chatbot suddenly "forgot" something important from earlier in the conversation. Or you opened Photoshop's Generative Fill and noticed a credit counter ticking down. Perhaps you glanced at an API pricing page and saw costs listed per "1K tokens."

When we interact with AI, we think in words and images. The AI thinks in tokens and credits. Understanding these two concepts makes it far easier to get better results and keep an eye on costs.

Part 1: How Text AI Actually "Reads"

When you type a prompt into a text-based AI like Claude, ChatGPT, or Gemini, it doesn't read your sentences the way a human does. Before processing your request, it breaks your text into small digestible pieces.

Those pieces are called tokens.

  • Simple words: Short, common words like "cat," "run," or "the" are often just one token.

  • Complex words: Longer or rarer words get split up, so "unbelievable" might become three tokens.

  • Punctuation and spaces: Commas, periods, and other symbols can also count as tokens, depending on how the model's tokeniser works.

A useful rule of thumb: 1,000 tokens equals roughly 750 English words.
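That rule of thumb is easy to turn into a back-of-the-envelope estimator. This is not a real tokeniser (real ones split on subwords and punctuation); it simply applies the ~4 tokens per 3 words ratio above:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: about 4 tokens for every 3 English words."""
    words = len(text.split())
    return round(words * 4 / 3)

print(estimate_tokens("the cat sat on the mat"))  # 6 words -> 8 tokens
```

Good enough for judging whether a pasted document will blow your budget; use your provider's own tokeniser when the numbers actually matter.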

The Context Window: Why AI Forgets

Every AI model has what's called a "context window." Think of it as a short-term memory bucket with a hard limit on how many tokens it can hold at once.

Everything from your conversation has to fit inside:

  • Your original instructions

  • Every question you've asked

  • Every answer the AI has given back

When you're deep into a long conversation or you've pasted in a massive document, this bucket fills up. Once it's full, the model has to discard the oldest tokens to make room for new ones. That's why your AI might suddenly ignore a rule you set at the beginning of the chat. It's not being stubborn, it's just run out of room.
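You can picture the bucket emptying from the oldest end first. A minimal sketch of that trimming behaviour, using any token-counting function you like (here a crude word count stands in for a real tokeniser):

```python
def trim_history(messages, max_tokens, count_tokens):
    """Drop the oldest messages until the conversation fits the budget."""
    history = list(messages)
    while history and sum(count_tokens(m) for m in history) > max_tokens:
        history.pop(0)  # the oldest message falls out of the bucket first
    return history

est = lambda m: len(m.split())  # crude stand-in tokeniser
print(trim_history(["system rule", "old question", "latest answer"], 4, est))
# -> ['old question', 'latest answer']  (the early rule is the first casualty)
```

Which is exactly why the rule you set in message one is the first thing a long conversation forgets.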

Input vs Output: The Cost of Thinking

From a billing perspective, not all tokens are created equal. Most API pricing separates input tokens from output tokens.

  • Input tokens (reading): What you send in: prompts, documents, instructions.

  • Output tokens (writing): What the AI generates: summaries, code, emails.

Output tokens typically cost more than input tokens because generating new text requires more computation than simply reading and encoding it. This isn't universal, but it's a safe working assumption: longer requested outputs usually cost more on paid APIs.
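The billing arithmetic itself is simple. A sketch with made-up prices (the per-1K rates here are placeholders; check your provider's pricing page for real figures):

```python
def api_cost(input_tokens, output_tokens, in_price_per_1k, out_price_per_1k):
    """Total cost when input and output tokens are billed at different rates."""
    return (input_tokens / 1000) * in_price_per_1k \
         + (output_tokens / 1000) * out_price_per_1k

# e.g. 2,000 input tokens at $0.01/1K plus 500 output tokens at $0.03/1K
print(round(api_cost(2000, 500, 0.01, 0.03), 4))  # 0.035
```

Note that the 500 output tokens cost almost as much as the 2,000 input tokens, which is the asymmetry the bullet points above describe.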

Part 2: The Visual Artist: Credits

Text models run on tokens, but what about Photoshop's Generative Fill or image tools like Midjourney and Flux? These systems might use tokens behind the scenes, but what you see is usually a simpler "per-generation" approach.

Here's a helpful analogy: when you ask a text model to write an essay, it's like a taxi meter that keeps running until you reach your destination. When you ask for an image, you're ordering a fixed item: "one 1024×1024 image." So billing becomes straightforward: one generation, one charge.

How Adobe Uses Generative Credits

Photoshop's Generative Fill and other Firefly-powered features run on "Generative Credits." Your Adobe plan gives you a monthly allowance, and each generative action consumes part of that allowance.

  • Most standard features use one credit per generation.

  • More intensive features or very large outputs can consume multiple credits, according to Adobe's rate cards.

  • The length of your prompt usually doesn't affect the credit cost. What matters is the type of operation and sometimes the output size.

Other visual platforms work similarly, often charging different amounts based on resolution or quality settings, since bigger, higher-quality images demand more computing power.

Why Some AI Uses More "Juice"

Depending on what you're doing, you can burn through your limits faster than expected.

For text AI (tokens):

  • Long inputs: Pasting a 50-page transcript into your prompt can devour a huge chunk of both your context window and token budget before the model even starts responding.

  • Long outputs: Asking for a detailed, multi-page answer consumes far more output tokens, and therefore more compute and money, than requesting a tight one-paragraph summary.

For image AI (credits):

  • Quantity: Generating 10 variations costs roughly 10 times as many credits as generating one image, because each generation is its own job.

  • Resolution and complexity: Higher resolutions or video-like outputs often consume more credits per job, reflecting the extra server work required.

The Takeaway

You don't need to be a mathematician to use AI well, but understanding tokens and credits makes you a far better pilot.

If your text AI starts getting confused or ignoring earlier instructions, you've likely pushed past its context window. Try trimming or summarising earlier content to free up space. If you're worried about image costs, invest time in a clear, targeted prompt and sensible resolution settings. Get what you need in as few generations as possible, rather than brute-forcing dozens of variations.

Master these invisible currencies, and you'll get better results while keeping your costs under control.


Why I Love AI ... and Why I'm Still the Boss 😎

When I think back to the not-too-distant past, it's wild how much has changed. An editing task that used to take me thirty minutes (hair selections, cloning, backdrop cleanup) now takes about thirty seconds, and THIS is why I'm excited about what AI brings to the party.

AI: The Best Assistant I Never Had to Train

Look, nobody got into photography because they dreamed of removing sensor dust or spending three hours meticulously masking flyaway hairs. That stuff isn't the art. It's just the cleanup crew work that has to happen before we get to make our magic.

And honestly? AI is brilliant at that stuff.

Think of it this way: AI handles the grunt work so I can focus on the vision. It's like having an incredibly fast, never-complaining assistant who's amazing at the boring bits and then steps aside when the real creative decisions need to be made.

Oh and those decisions that need to be made? Those are still mine.

Where AI Stops and I Begin

There's a line, though, and it's one I think about a lot.

AI can smooth skin, relight faces, swap out skies. It can create a technically "perfect" image in seconds. But here's the thing: perfect isn't always interesting. Sometimes those ultra-polished images feel a bit ... lifeless. Like they're missing something human.

The magic happens in the choices we make. The colour grade that shifts the whole mood. The decision to keep a little texture in the skin because real people aren't porcelain. The way we balance light and shadow to tell the story we want to tell.

That's where our style lives. That's the part AI can't do because it doesn't have taste, intuition, or a point of view.

It's a tool. A really good tool. But we’re the ones holding it.

Getting My Life Back

The biggest win isn't just sharper images or cleaner backgrounds. It's time.

AI is giving me hours back. Hours I used to spend in my office, squinting (I’ve now got new glasses) at a monitor, doing repetitive tasks that made me question my life choices.

Now I can use that time for the stuff that actually matters: shooting more personal projects, experimenting with new techniques, or better still, spending time with my wife and friends. Making memories instead of just editing them. Living the life that's supposed to inspire the work in the first place.

We shouldn't fear these tools. We should embrace them (smartly) so we can get back to doing what we love.

Where Do You Stand?

I'm curious how you're navigating this shift.

Are you using AI tools to speed up your workflow? Or are you still figuring out where the line is between "helpful assistant" and "too much automation"?

I'd love to hear how you're finding the balance in the comments below.