Drone Photography: Are the Changes in Law and Restrictions Killing It?

If you have glanced at the headlines recently, you could be forgiven for thinking the drone hobby is coming back down to earth. Between sweeping restrictions in the United States and tighter registration rules in the UK, the carefree "wild west" years of flying are clearly behind us. Yet despite the extra admin, the sector itself is thriving. Recent reports put the global drone photography services market at close to the one‑billion‑dollar mark and growing at around 19–25 percent a year, which firmly positions aerial imagery as a serious commercial service rather than a weekend toy.

What Has Changed in the Rules?

The big question many pilots are asking is how the latest rules actually affect them. The answer depends heavily on where you live.

In the United States, the updated FCC "Covered List" is the main story. In December 2025, the FCC was effectively barred from granting new equipment authorisations to certain foreign‑made drones and components, including DJI products, which means newly designed foreign models cannot be approved for import, marketing or sale in the US unless they qualify for a specific waiver. Existing drones tell a different story: aircraft that already have FCC approval remain legal to purchase, own and fly, and retailers can still sell those earlier authorised models. That makes the situation more of a squeeze on future variety than an overnight flying ban.

In the United Kingdom, the Civil Aviation Authority has confirmed a major shift in weight thresholds. From 1 January 2026, anyone flying a drone or model aircraft that weighs 100 grams or more must hold a Flyer ID, and if that drone has a camera (or weighs 250 grams or more), they also need an Operator ID. This is a big change from the previous 250 gram threshold for most registration, and it brings a large number of small "everyday" drones into the regulated category, especially popular mini camera drones.

Regulators are also getting tougher on bad behaviour. In the US, the FAA and other authorities have made clear they intend to take enforcement more seriously when flights put people at risk, and civil penalties for serious violations can run into the tens of thousands of dollars per incident. The message is straightforward: casual flying is still welcome, but reckless flying increasingly has real financial consequences.

The Rise of the Lightweight Drone

All of this has turned drone "weight‑watching" into a serious buying consideration. Many pilots are moving towards lighter aircraft to reduce friction with the rules while still getting strong image quality.

On the prosumer side, there is intense interest in compact models that squeeze larger‑than‑phone‑sized sensors into sub‑250 gram frames, offering high‑resolution video, good low‑light performance and multi‑directional obstacle avoidance in a bag‑friendly package. For beginners, the sweet spot tends to be affordable drones with strong safety features, such as built‑in propeller guards, simplified flight modes and easy hand launches, which make that first flight much less intimidating.

The regulatory pressure in the US has also opened the door wider for alternative brands. With new foreign‑made models facing an approval freeze, manufacturers that already have authorised aircraft in the market, or those operating outside the traditional big‑name ecosystem, are getting more attention, particularly when they can offer 3‑axis gimbals and 4K recording at a lower price. The result is a slow but noticeable diversification of the shelves, even as some pilots remain loyal to existing line‑ups.

Are People Actually Giving Up?

So with more paperwork and stricter enforcement, are hobbyists dumping their drones and walking away? The broader picture suggests the opposite.

Market research on drone services and drone photography shows steady growth through 2024 and 2025, with strong forecasts into the early 2030s, particularly in sectors like real estate, construction monitoring, inspections and media. That does not look like a hobby in decline. While there is certainly some regulatory fatigue in online communities, usage data and revenue projections point towards more flights, more paid work and more creative output … not less.

On the second‑hand market, much of the activity looks less like a mass exit and more like a "fleet refresh". Many pilots are selling older, heavier aircraft in favour of lighter, regulation‑friendly models that are easier to keep compliant under the 2026 rules in both the UK and US. It is a natural response: swap one or two bulky legacy drones for a compact, modern model that is simpler to register, carry and justify to clients.

What 2026 Really Means for Drone Photography

Drone photography has grown up. It has moved from being treated as a novelty to being recognised as a serious imaging tool that sits alongside your main camera kit. The entry barrier is undeniably higher than it was a few years ago, with registration requirements, Remote ID timelines and more stringent enforcement now part of the landscape. At the same time, the technology has never been better: smaller drones, better sensors, improved safety features and expanding commercial demand are all pulling the market upwards.

For bloggers, creators and photographers, the takeaway is simple. The sky is not closing. It is just becoming more organised. If you are willing to learn the rules, pick the right aircraft and fly responsibly, drone photography in 2026 is still very much on the way up.

APS-C and Micro Four Thirds are Quietly Winning

Fresh shipment data from the Camera & Imaging Products Association (CIPA) for 2025 shows that mirrorless cameras keep growing, and that most interchangeable-lens cameras being sold are not full frame at all, but APS-C and Micro Four Thirds.

Out of more than 9.4 million cameras shipped worldwide in 2025, around 6.3 million were mirrorless models, while DSLRs fell to just over 690,000 units.

Mirrorless up, DSLRs down

CIPA's latest report confirms what most of us have been seeing in camera announcements for a while now.

Mirrorless shipments in 2025 reached about 6.3 million bodies, which represents roughly 112.5% of the previous year's levels. That's actual year-on-year growth rather than just holding steady. Meanwhile, DSLR shipments dropped to just over 690,900 units worldwide, only 69.3% of what we saw in 2024.

In other words, mirrorless isn't just the future anymore. It's the present. And the traditional DSLR market continues to shrink.
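Since CIPA reports each year's shipments as a percentage of the previous year's, the 2024 baselines can be backed out with simple division. A quick illustrative sketch (figures rounded as above; the helper function is my own, not CIPA's):

```python
# CIPA expresses each 2025 figure as a percentage of the 2024 figure,
# so dividing by that ratio recovers the approximate 2024 baseline.

def prior_year_units(current_units: float, pct_of_prior_year: float) -> float:
    """Back out last year's shipments from this year's units and the
    reported year-on-year percentage (e.g. 112.5 means 112.5%)."""
    return current_units / (pct_of_prior_year / 100)

mirrorless_2024 = prior_year_units(6_300_000, 112.5)   # ~5.6 million
dslr_2024 = prior_year_units(690_900, 69.3)            # ~997,000

print(f"Mirrorless 2024: ~{mirrorless_2024:,.0f} units")
print(f"DSLR 2024: ~{dslr_2024:,.0f} units")
```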

Smaller sensors outsell full frame

For 2025, CIPA began breaking out interchangeable-lens camera shipments by sensor size, and this paints a really clear picture.

APS-C and Micro Four Thirds bodies accounted for more than 4.45 million units shipped. Full-frame and larger (including medium format) reached around 2.54 million units.

So despite all the marketing focus on high-end full-frame systems, the majority of buyers are actually choosing cameras with smaller sensors.

This makes sense when you look at where these cameras sit in the market:

  • Price: APS-C and Micro Four Thirds models typically launch at more accessible price points, which makes them attractive to newcomers and enthusiasts who don't want to commit full-frame money on day one.

  • Size and weight: Smaller sensors usually mean smaller bodies and lenses, which is brilliant if you travel, hike, or just don't fancy lugging around a heavy bag.

  • Reach: The crop factor effectively gives you more telephoto reach from the same focal lengths, which is really handy for wildlife, sports, and distant subjects.

The flip side is that wide-angle work becomes trickier: you need much shorter focal lengths to match the field of view of full frame. If you love ultra-wide landscapes, you'll be adjusting your lens choices accordingly, but there are some fantastic, tiny wide-angle lenses out there that do the job perfectly.

Regional trends: where DSLRs still hang on

When you zoom into the regional breakdown, DSLRs haven't vanished everywhere at the same pace.

In the Americas, DSLR shipments were still at 86.9% of their 2024 level. That's a decline, but not a total collapse. In Europe, the figure was 61.7% of the previous year. In Japan, fewer than 14,500 DSLRs were shipped, only about 47.3% of the 2024 volume. And in China, just over 28,250 DSLRs went out, which is 33.1% compared with the year before.

This suggests that in markets like Japan and China, the shift to mirrorless has been more decisive, while in the Americas and Europe there's still a meaningful base of DSLR users and buyers.

Crop lenses still dominate, but the gap is narrower

The lens numbers tell a similar story, but it's slightly more nuanced.

CIPA members shipped more than 10.6 million lenses worldwide in 2025, which corresponds to 102.8% of the 2024 figure, so lens sales are growing alongside cameras.

Lenses designed for sensors smaller than full frame accounted for about 5.82 million units. Full-frame and larger lenses reached more than 4.77 million units.

Here the split between crop and full-frame glass is tighter than it is for camera bodies. This implies that full-frame shooters are more likely to invest in multiple lenses, while many crop-sensor buyers stick with a kit zoom or a minimal setup.

Compacts: a small comeback from a very low base

Compact cameras are also seeing a modest resurgence, though the segment is still a shadow of its early-2010s heyday.

CIPA's report notes growth in compact shipments in 2025, but they remain far below the peak of the point-and-shoot era around 2010.

Today's compact buyers tend to be people looking for something clearly better than a phone. Often that means premium compacts, travel zooms, or niche models, rather than the mass-market "family camera" of the past.

What these trends mean for photographers

A few practical takeaways if you're deciding where to invest next:

You don't need full frame to be "serious". The majority of new interchangeable-lens cameras sold in 2025 were APS-C or Micro Four Thirds, and the lens ecosystem around them is clearly healthy.

Full frame is increasingly a committed choice. The tighter body numbers but strong lens sales suggest that full-frame systems are being used by photographers who are happy to invest more heavily in lenses.

DSLR systems will keep shrinking. There's still life in DSLRs in some regions, but the long-term trend in shipments is firmly downward.

For most photographers, especially those who value portability or are budget-conscious, sticking with or moving to a modern crop-sensor mirrorless system remains a very smart choice.

Evoto's AI Headshots: When Your Favourite Tool Turns Against You

Evoto's AI headshot generator has become a cautionary tale about how quickly an AI company can burn through the trust of the very professionals who helped build its reputation.

When your retouching app becomes a rival

At Imaging USA 2026 in Nashville, portrait and headshot photographers discovered that Evoto had been quietly running a separate "Online AI Headshot Generator" site. The service let anyone upload a selfie and receive polished, corporate-style portraits, with marketing that openly pitched it as a cheaper, easier alternative to booking a photographer.

This wasn't a hidden experiment tucked away behind a login. The headshot generator had a public URL, example images, an FAQ and a clear path from upload to final "professional" headshot. For photographers who had built Evoto into their workflow, it felt like discovering that a trusted retouching assistant had quietly set up shop down the road and started undercutting them.

Why Evoto's role made this sting

Evoto built its identity as an AI-powered retouching and workflow tool aimed squarely at professional photographers, especially those shooting portraits, headshots and weddings. The pitch was straightforward: let the software handle the tedious stuff like skin smoothing, flyaway hairs, glasses glare, background cleanup and batch retouching so photographers can focus on directing and shooting.

That positioning worked. Photographers paid for it, used it on paid client work, recommended it in workshops and videos, and sometimes became ambassadors or power users. The unspoken deal was that Evoto would stay in the background, supporting human photographers rather than trying to replace them. A consumer-facing headshot generator cut straight across that understanding.

What the headshot generator offered

The AI headshot tool followed a familiar pattern: upload a casual selfie, choose a style and receive cleaned-up headshots with flattering lighting, neat clothing and tidy backgrounds, ready for LinkedIn or company profiles. The examples looked very similar to the kind of "studio-style" work many Evoto customers already produce for corporate clients.

*Simulated example only; not the actual Evoto interface*

The wording is what really set people off. The marketing leaned heavily into cost savings, avoiding studio bookings, quick turnaround and "professional-looking" results without needing a photographer. Coming from a faceless tech startup, that would already be provocative. Coming from a tool that photographers had trusted with their files and workflows, it felt like a direct invitation for clients to pick AI over them.

For many creatives, this is the line that matters: AI that helps you deliver better work is one thing. AI that presents itself as your replacement is something else entirely.

Why photographers are so angry

Photographers' reactions centre on three main issues.

First is a deep sense of betrayal. People had paid into the Evoto ecosystem, uploaded thousands of client images and publicly championed the product. Learning that the same company had built a consumer brand aimed at undercutting them felt like discovering that their support had funded a tool designed to compete with them.

Second are concerns about training data. Photographers have pointed out that the look of the AI headshots seems very close to the kind of work Evoto users upload. Evoto now says its models are trained only on commercially licensed or purchased imagery, not on customer photos, but those reassurances arrived after the story broke and against a backdrop of widespread anxiety about AI scraping. Without long-standing, transparent policies on data use, many remain sceptical.

Third is the tone of the marketing. Promises of saving money, avoiding bookings and still getting "pro-quality" results read like a direct invitation for clients to choose a cheap AI pipeline instead of hiring a photographer. Photo Stealers captured the mood with a blunt "WTF: Evoto AI Headshot Generator" and reported photographers literally flipping off the Evoto stand at Imaging USA. The Phoblographer went further, calling the service an attempt to replace photographers with "AI slop" and questioning the claim that this was simply an innocent test.

The apology that didn't land

In response, Evoto posted a statement saying the headshot generator had "missed the mark", "crossed a line" and was being discontinued. The company framed it as a test of full image generation that strayed beyond the support role it wants to play, and promised that user images are not used to train its models, describing its protections as "ironclad" and its training data as licensed only.

On the surface, this sounds like the right approach: apology, cancelled feature, clearer explanation of data use. In practice, many photographers point out that a fully branded, public site with examples and a working workflow doesn't look like a small internal trial. Shutting down comments on the apology thread after a wave of criticism made it feel more like damage control than a genuine conversation with paying users.

Commentary from outlets such as The Phoblographer argues that the real problem is the direction Evoto appears to be heading. If a company plans to sell "good enough" AI portraits directly to end clients while also charging photographers for retouching tools, trust will be almost impossible to rebuild.

What photographers can learn from this

The Evoto story lands at a time when photographers are already rethinking their place in an AI-saturated world, from smartphone "moon shots" to generative backdrops and AI profile photos. Beyond the immediate anger, it points to a few practical lessons.

Treat AI tools as business partners, not just clever software. Pay attention to how they talk to end clients and where their roadmap is heading.

Ask clear questions about training data and future plans. You need to know if your uploads can ever be used for model training and whether the company intends to build services that compete with you.

Be careful about attaching your reputation to a brand. Discounts and referral codes matter less than whether the company's long-term vision keeps human photographers at the centre.

For AI companies in imaging, the message is equally direct. You cannot present yourself as a photographer-first platform while quietly testing products that encourage clients to bypass those same photographers. In a climate where trust is already thin, real transparency, clear boundaries and honest dialogue are the only way to stay on the right side of the people whose pictures, workflows and support built your business in the first place.

Picture This - A Musical Gift 🎸

Last Friday I was left completely speechless!

I logged in to a live video chat to join members of The Photography Creative Circle for our weekly coffee hour, and immediately it seemed there were more members present than usual … way more.

Shortly after logging in I found out why, as member and dear friend Jean-François Léger began reading out something he’d prepared …

Glyn, In the spirit of the holiday season, we have a surprise for you today.

About six months ago, you shared a vision with us by creating this Photographic Creative Circle. At first, we all joined to learn from you, to master our cameras and refine our post-processing skills. But very quickly, something much deeper began to take shape.

It has become a place where we share our lives, celebrate our successes, and support one another through difficult times. Photography, in the end, became the beautiful pretext for us to become true friends.

You laid the foundation for this community; now this community wanted to create something for you that gives full meaning to the word 'community.'

Glyn, this is our way of saying a big thank you for the commitment, the generosity and the tremendous work you’ve done for all of us.

So Picture this!

And this is what I was presented with …

Written, recorded and edited by Jean-François, with contributions from other members of the community, including two in particular who have suffered traumatic losses in their families in recent weeks … this blew me away!

Such an incredible gift that I will treasure forever … and be playing over and over again ❤️

UK Drone Rules are Changing

Some big updates are coming to the UK drone scene from 1 January 2026, especially around how drones are classed, identified, and registered. Here is a plain‑English summary of the latest CAA guidance.

1. New UK class marks

From 1 January 2026, most new drones sold in the UK for normal hobby and commercial flying will carry a UK class mark from UK0 to UK6. This mark shows what safety standards the drone meets and which set of rules applies.

  • UK0: Very light drones under 250g, including many small “sub‑250” models.

  • UK1–UK3: Heavier drones intended for typical Open Category flying, with increasing levels of safety features as the class number goes up.

  • UK4: Mostly used for model aircraft and some specialist use.

  • UK5 & UK6: Higher‑risk drones designed for more advanced or specialist operations, usually in the Specific Category.

EU C‑class drones:
If you already own an EU C‑marked drone, it will continue to be recognised in the UK until 31 December 2027, so you can keep flying it under the transitional rules until then.

2. Remote ID – your “digital number plate”

Remote ID (RID) is like a digital number plate for your drone: it broadcasts identification and flight information while you are in the air. This helps the CAA, police and other authorities see who is flying where, and pick out illegal or unsafe flights.

  • From 1 January 2026

    • Any UK‑class drone in UK1, UK2, UK3, UK5 or UK6 must have Remote ID fitted and switched on when it is flying.

  • From 1 January 2028 (the “big” deadline)

    • Remote ID will also be required for:

      • UK0 drones weighing 100g or more with a camera.

      • UK4 drones (often model aircraft) unless specifically exempted.

      • Privately built drones 100g or more with a camera.

      • “Legacy” drones (no UK class mark) 100g or more with a camera.

What RID does (and does not) share:

  • It broadcasts things like your drone’s location, height and an identification code (serial/Operator ID), plus some details about the flight.

  • It does not broadcast your name or home address to the general public; it is designed for safety and enforcement, not doxxing pilots.

3. Registration

The UK is tightening registration so that more small camera drones are covered. The key change is that the threshold drops from 250g to 100g for many requirements.

From the new CAA table:

  • Flyer ID – for the person who flies

    • Required if your drone or model aircraft weighs 100g to less than 250g (including UK0), and for anything 250g or heavier.

  • Operator ID – for the person responsible for the drone

    • Required if your drone:

      • Weighs 100g to less than 250g and has a camera; or

      • Weighs 250g or more, even without a camera.

    • If your drone is 100–250g without a camera, an Operator ID is optional (though it is still recommended).

In everyday terms:

  • If your drone has a camera and weighs 100g or more, you should expect to need both an Operator ID and a Flyer ID.

  • Sub‑100g aircraft remain outside the legal registration requirement, but the CAA still recommends taking the Flyer ID test for knowledge and safety.
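The registration table above reduces to two weight-and-camera checks. Here is a minimal illustrative sketch (the helper and its names are mine, not the CAA's, and it ignores edge cases such as exemptions):

```python
# Illustrative sketch of the 2026 registration thresholds described above.
# Not legal advice; the function and its return values are my own framing.

def required_ids(weight_g: float, has_camera: bool) -> set[str]:
    """Return which registrations the CAA summary above would require."""
    ids = set()
    if weight_g >= 100:
        ids.add("Flyer ID")                 # 100 g and up: the pilot registers
    if weight_g >= 250 or (weight_g >= 100 and has_camera):
        ids.add("Operator ID")              # camera drones 100 g+, or any 250 g+
    return ids

print(sorted(required_ids(135, has_camera=True)))    # ['Flyer ID', 'Operator ID']
print(sorted(required_ids(135, has_camera=False)))   # ['Flyer ID']
print(sorted(required_ids(80, has_camera=True)))     # []
```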

4. Night flying

If you fly at night, your aircraft must now have at least one green flashing light turned on. This makes it easier for other people and aircraft to see where it is and in which direction it is moving.

A2 CofC and how close you can fly

The A2 Certificate of Competency (A2 CofC) still matters for flying certain drones closer to people. Under the new regime:

  • With an A2 CofC, you can fly UK2‑class drones:

    • As close as 30m horizontally from uninvolved people in normal operation.

    • Down to 5m in a dedicated “low‑speed mode” if your drone supports it and you comply with all conditions.

  • For legacy drones under 2 kg, you should still keep at least 50m away from uninvolved people when using A2‑style privileges under the transitional rules.

Always check the latest CAA drone code for the category you are flying in, as extra restrictions may apply depending on location and type of operation.

5. What you need to do

If you are already flying legally today, you do not need to panic, but you should plan ahead over the next couple of years.

  • Now–end of 2025

    • Make sure you have a valid Flyer ID and Operator ID if your drone falls into the current registration thresholds.

  • From 1 January 2026

    • When buying a new drone, check that it has the correct UK class mark and built‑in Remote ID if it is UK1, UK2, UK3, UK5 or UK6.

    • Use a green flashing light when flying at night.

  • By 1 January 2028

    • If you own a legacy drone or UK0/UK4 aircraft 100g or more with a camera, ensure you are ready to comply with Remote ID, either through built‑in hardware or an approved add‑on.

If you keep an eye on these dates and make sure your registration, class marks and Remote ID are in order, your current setup should remain usable under the new rules for years to come.

Choosing the Right AI Model in Photoshop: A Credit-Smart Guide

If you've opened Photoshop recently, you've likely noticed that Generative Fill has received a significant upgrade. The platform now offers multiple AI models to choose from, each with distinct capabilities. However, there's an important consideration: these models vary considerably in their generative credit costs.

Understanding the Credit Structure

Adobe's proprietary Firefly model requires only 1 credit per generation, making it the most economical option. The newer partner models from Google (Gemini) and Black Forest Labs (FLUX), however, are classified as premium features and consume credits at a substantially higher rate. Depending on the model selected, you can expect to use between 10 and 40 credits per generation.

For users looking to maximize their monthly credit allocation, selecting the appropriate model for each task becomes an essential consideration.

Firefly: Your Go-To Workhorse (1 Credit)

Firefly serves as the default option and remains the most practical choice for everyday tasks. At just 1 credit per generation, it offers excellent efficiency for routine editing work. Whether you need to remove unwanted objects, extend backgrounds, or clean up imperfections, Firefly handles these tasks effectively.

Additionally, it benefits from full Creative Cloud integration, Adobe's commercial-use guarantees, and Content Credentials support. For standard production workflows, it's difficult to find a more cost-effective solution.

The Premium Players

The partner models represent a significant increase in cost, but they also deliver enhanced capabilities. Adobe operates these models on external infrastructure, which accounts for their higher credit requirements. These models excel at handling complex prompts, challenging lighting scenarios, and situations requiring exceptional realism or fine detail.

The credit costs break down as follows:

  • Gemini 2.5 (Nano Banana): 10 credits

  • FLUX.1 Kontext [pro]: 10 credits

  • FLUX.2 Pro: 20 credits

  • Gemini 3 (Nano Banana Pro): 40 credits

All of these models draw from the same credit pool as Firefly, but they deplete it considerably faster.

When to Use What

Gemini 2.5 (Nano Banana) occupies a middle position in the model hierarchy. It performs well when Firefly struggles with precise prompt interpretation, particularly for complex, multi-part instructions. This model also excels at maintaining consistent subject appearance across multiple variations.

FLUX.1 Kontext [pro] specialises in contextual integration. It analyses existing scenes to match perspective, lighting, and colour accurately. When adding new elements to complex photographs, this model provides the most seamless integration, making additions appear native to the original image.

FLUX.2 Pro elevates realism significantly. It generates outputs at higher resolution (approximately 2K-class) and demonstrates particular strength with textures. Areas that typically present challenges, such as skin, hair, and hands, appear notably more natural. For portrait and lifestyle photography requiring professional polish, the 20-credit investment may be justified.

Gemini 3 (Nano Banana Pro) represents the premium tier at 40 credits per generation. This "4K-class" option addresses one of Firefly's primary limitations: text rendering. When projects require legible signage, product labels, or user interface elements, Nano Banana Pro delivers the necessary clarity.

A Practical Approach to Model Selection

  1. Default to Firefly (1 credit) for standard edits, cleanup tasks, and basic extensions

  2. Upgrade to Gemini 2.5 (10 credits) when improved prompt interpretation or likeness consistency is required

  3. Select FLUX.1 Kontext (10 credits) when lighting and perspective matching are priorities

  4. Deploy FLUX.2 Pro (20 credits) when realism and texture quality are essential

  5. Reserve Gemini 3 (40 credits) for situations requiring exceptional text clarity and fine detail

The guiding principle is straightforward: begin with the most economical option and upgrade only when project requirements justify the additional cost.
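To see what these costs mean in practice, here is a small sketch that divides a shared credit pool by each model's per-generation cost. The credit figures come from the list above; the helper itself and the 1,000-credit example pool are assumptions for illustration only:

```python
# Sketch of a simple credit budget check using the costs listed above.
# Model costs come from the article; the helper and the example pool size
# are illustrative assumptions, not Adobe's API.

CREDIT_COST = {
    "Firefly": 1,
    "Gemini 2.5 (Nano Banana)": 10,
    "FLUX.1 Kontext [pro]": 10,
    "FLUX.2 Pro": 20,
    "Gemini 3 (Nano Banana Pro)": 40,
}

def generations_available(credits: int) -> dict[str, int]:
    """How many generations each model allows from one shared credit pool."""
    return {model: credits // cost for model, cost in CREDIT_COST.items()}

# With a hypothetical 1,000-credit monthly pool:
for model, n in generations_available(1000).items():
    print(f"{model}: {n} generations")
```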

🤖 4 Key Insights from Google's Gemini 3 Launch

With new AI models arriving every week, it's hard to tell which announcements actually matter. Many releases simply offer minor improvements and higher test scores, leaving us wondering what it all means for everyday use.

Google's Gemini 3 launch is different. Beyond the impressive benchmarks lie four important changes that show where AI technology is really headed. This article highlights the most significant developments that point to a major shift in both what AI can do and how we interact with it.

Insight 1: From Assistant to Thinking Partner

The biggest change in Gemini 3 isn't just improved performance. It's a deeper level of understanding that transforms how we interact with AI. Google designed the model to "grasp depth and nuance" so it can "peel apart the overlapping layers of a difficult problem."

This creates a noticeably different experience. Google says Gemini 3 "trades cliché and flattery for genuine insight, telling you what you need to hear, not just what you want to hear." This represents an important evolution in how we work with AI. Instead of a simple tool that answers questions, it becomes a real collaborative partner for tackling complex challenges and working through difficult problems.

This new relationship demands more from us as users. When your main tool acts like a critical colleague rather than an obedient helper, you need to step up your own thinking and collaboration skills to get the most out of it.

Google CEO Sundar Pichai put it this way:

It's amazing to think that in just two years, AI has evolved from simply reading text and images to reading the room.

Insight 2: Deep Think Mode Brings Specialized Reasoning

Google introduced Gemini 3 Deep Think mode with this launch. This enhanced reasoning mode is specifically designed to handle "even more complex problems." The name isn't just marketing. It's backed by real performance improvements on some of the industry's toughest tests.

In testing, Deep Think surpasses the already powerful Gemini 3 Pro on challenging benchmarks. On "Humanity's Last Exam," it achieved 41.0% (without tools), compared to Gemini 3 Pro's 37.5%. On "GPQA Diamond," it reached 93.8%, beating Gemini 3 Pro's 91.9%.

This matters because it shows a future where AI isn't a single, universal intelligence. Instead, we're seeing the development of specialized "modes" for different thinking tasks. This isn't just about raw power. It's a strategic approach to computational efficiency, using the right amount of processing for each specific task. This is crucial for making AI sustainable as it scales up.

Insight 3: Antigravity Changes How Developers Build Software

Perhaps the most forward-thinking announcement was Google Antigravity, a new "agentic development platform." This represents a fundamental change in how developers work with AI, aiming to "transform AI assistance from a tool in a developer's toolkit into an active partner."

What makes Antigravity revolutionary is what it can actually do. Its AI agents have "direct access to the editor, terminal and browser," letting them "autonomously plan and execute complex, end-to-end software tasks." The potential impact is huge. Going far beyond simple code suggestions, it completely redefines the developer's role. Instead of writing every line of code, developers become directors of AI agents that can build, test, and validate entire applications independently.

Insight 4: AI Agents Can Now Handle Long-Term Tasks

A major challenge for AI has always been "long-horizon planning." This means executing complex, multi-step tasks over extended periods without losing focus or getting confused. Gemini 3 shows a real breakthrough here.

The model demonstrated its abilities on "Vending-Bench 2," where it managed a simulated vending machine business for a "full simulated year of operation without drifting off task." This capability translates directly to practical, everyday uses like "booking local services or organizing your inbox."

This new reliability over long sequences of actions is the critical piece that could finally deliver on the promise of truly autonomous AI. It marks AI's evolution from a "single-task tool" you use (like a calculator) to a "persistent process manager" you direct (like an executive assistant who handles your projects for months at a time).

Looking Ahead: A New Era of AI Interaction

These aren't isolated features. They're the building blocks for the next generation of AI. The main themes from the Gemini 3 launch (collaborative partnership, specialized reasoning, agent-first development, and long-term reliability) all point toward a future that goes beyond simple prompts and responses.

The focus has clearly shifted from basic question-and-answer interactions to integrated, autonomous systems built for real complexity. As AI moves from a tool we command to a partner we collaborate with, we'll need to adapt how we think, work, and create alongside it.


🚀 AI: Creative Leap, NOT Deception

The headlines are full of outrage: AI is ruining photography, destroying trust, and spreading lies. The critics claim that generative tools are the death knell for visual truth, weaponizing deception on a scale we've never seen.

But let's pause. This argument is fundamentally flawed. It misdiagnoses the problem and unfairly demonizes the most powerful creative tool invented in a generation.

AI isn't the origin of the lie; it's the radical acceleration of the human desire to tell a more compelling story.

The Real History of "The Lie" in Photography

To claim that AI introduces deception to photography is to ignore the entire history of the medium. Visual manipulation has always been an inherent part of the creative process.

Consider the foundation of photojournalism: narrative construction.

The "Migrant Mother" (1936): Dorothea Lange's iconic image is hailed as a moment of truth, yet she meticulously constructed it. Earlier exposures in the series include more of the family; the famous frame excludes them to create a solitary, suffering figure, and Lange physically directed the children to turn their faces away. This wasn't a lie about poverty, but it was a masterful, intentional editing job designed to maximize emotional impact. It was truth made more powerful through manipulation.

"Valley of the Shadow of Death" (1855): During the Crimean War, Roger Fenton is believed to have literally moved cannonballs onto the road to make the scene look more dramatic and dangerous. The technology was primitive, but the intent to shape reality for a better picture was exactly the same as today's AI tools.

"The Falling Soldier" (1936): Robert Capa’s famous war photo has long been argued by many historians to have been staged, composed to capture an image of heroism and death that was too fleeting or dangerous to capture authentically.

These historical examples show that photographers have been physically arranging reality, staging scenes, and using darkroom techniques to tell the story they wanted to tell for over a century. The core issue has never been the camera or the software; it has always been the editorial judgment of the person behind it.

The Crop Tool Was Always More Dangerous Than AI

We also must remember the power of basic, low-tech deception. Long before generative fill, simple techniques were used to create outright political and social lies:

Intentional Cropping: The infamous photo of the toppling of the Saddam Hussein statue in 2003 was widely published using a tight crop to imply a massive, cheering crowd. The reality, revealed in a wide-angle shot, was an almost empty square. A simple crop created a massive global political narrative that contradicted the facts on the ground.

Perspective Tricks: The photo appearing to show Prince William making a rude gesture was simply a trick of perspective, hiding fingers to create a completely false narrative of aggression.

These are not complex manipulations. They are intentional deceptions using the most basic tools of photography: angle and crop. If simple tools can be used to propagate such significant lies, why is the focus solely on AI?

AI: The Ultimate Creative Democratizer

The fear surrounding AI is largely rooted in its speed, scale, and accessibility, not its capacity for invention.

AI is not primarily a tool of deception; it is a profound creative liberation.

  1. It Democratizes Vision: AI allows a person who cannot afford expensive equipment or complex training to visualize concepts instantly. It lowers the barrier to entry for creative expression to the point of a text prompt.

  2. It Expands Possibility: For professional photographers and artists, AI is not a replacement but an enhancer. It can instantly remove unwanted elements, seamlessly extend a scene, or realize complex conceptual ideas that would have previously taken days or weeks of painstaking work.

  3. It Forces Honesty: The very existence of perfect AI fakes means the public must now learn to treat all images, even traditional photos, with a new level of healthy skepticism. This shift forces better media literacy and demands higher ethical standards from those who publish images.

The problem is not the tool that can generate a manipulated image; the problem is the person who chooses to present that manipulated image as an unvarnished, factual truth. Blaming AI for deception is like blaming a pen for writing a lie. The pen is merely a tool.

Ultimately, AI is forcing us to acknowledge the truth about photography: it has always been an art of subjective framing, editing, and narrative construction. The ethical debate must move away from demonizing the technology and focus instead on demanding transparency and integrity from the people who use it.


Affinity Software Announcement - 30th October 2025

Affinity, now Affinity by Canva, has unified its three separate applications (Designer, Photo, and Publisher) into a single app and made it permanently free.

🔑 Key Details

🎨 Unified Application

Instead of three separate applications, Affinity now offers one unified app that consolidates:

  • Vector design tools (formerly Designer)

  • Photo editing tools (formerly Photo)

  • Layout and publishing tools (formerly Publisher)

💰 Pricing Structure

  • Core App: Completely free, with no feature restrictions, trial periods, or hidden costs

  • AI Features: Available exclusively to Canva Pro subscribers ($14.99/month or $119.99/year)

🤖 AI Capabilities (Canva Pro Required)

Users with Canva Pro accounts can access Canva AI tools directly within Affinity through the Canva AI Studio:

  • Generative Fill

  • Expand & Edit

  • Remove Background

  • Additional Canva AI features

💻 Platform Availability

  • Mac & Windows: Available immediately

  • iPad: Scheduled for release in 2026

🔐 Account Requirements

A free Canva account is required to download and use the application.

📚 Background

  • Canva acquired Affinity in March 2024 for $380 million

  • In early October 2025, Affinity stopped selling all existing software versions

  • This announcement represents a shift from Affinity's previous paid perpetual license model

My Glyn Dewis Masterclass Community on SKOOL

Yesterday I launched my Masterclass Community on SKOOL.

“A community for photography lovers wanting to build skills, confidence, and inspiration to create images that excite them and that they’re truly proud of.”

Members who have already joined will have seen the calendar, with Scott Kelby joining us for a LIVE Guest Seminar in July and Joel Grimes joining us for a seminar in August ... with a new Guest each month.

I have also set the referral commission at 50%, which means if you recommend just 2 people who join, then your membership is paid for ... and any recommendations on top of that mean money back in your pocket ... ( $175 one-off for Annual Membership and $19.50 every month ongoing for monthly membership )