From the Founders

Perspectives on AI, business and building things that actually work.

Hamish Nicklin

CEO & Co-Founder

How AI changes business - strategy, ops and the bigger picture.

Stephen Greszczyszyn

CTO & Co-Founder

Hands-on AI - what I'm learning, building and figuring out.

Hamish Nicklin

CEO & Co-Founder

Take Control of AI — Before It Takes Control of You

February 25, 2026

If you're worried AI is coming for your job ... this is what to do about it. Tell your bosses to invest in it.

Wait! What?

Tell them to invest in automating the sh*t you hate - the stuff that stops you doing the work that grows the business. YOU have to own what gets automated.

If you don't do this, the bosses will decide. And they'll choose the things that benefit the bottom line, because that's what the stock markets are expecting and what McKinsey suits will be advising them to do from 30k feet. And that means you're at risk.

But you know what really needs automating so that you can grow the business. So get involved. Take control of the automations and prove the value of humans for growth.

This IS coming. Don't let it happen TO you.

Part 2: Make sure what you're keeping is actually yours to keep

Automate the shit bits. Keep the growth bits. That's the argument above and I stand by it.

Well now for something that might feel quite uncomfortable for some of you.

That advice assumes your job has growth bits in it.

Most jobs do - a mix of grind and growth. For those people, the advice stands.

But some roles ... don't have that mix.

And I'm not just talking about data entry.

The latest AI models can reason through complex scenarios, build financial models, analyse contracts, write code and keep going for hours without stopping. This isn't "AI can do your admin." This is "AI can do the thinking work too."

So if your role is mostly analysis, mostly modelling, mostly reasoning through complex problems and producing outputs - even sophisticated outputs - that's not automatically safe just because it feels intellectually hard.

Here's what AI still can't do.

Reading a room and knowing when someone disagrees even though they're nodding. Sitting with contradictory data and no right answer and making a call anyway. Having the courage to kill a project everyone loves because the market has shifted. Thinking of a genuinely new way to approach something. Building the relationship over a cup of tea.

That stuff isn't automatable. It might never be. And it's learnable.

If you look at your week and most of it lives in volume and reasoning ... start moving. Not in a panic. But deliberately. Build skills in the areas AI can't touch - while you've still got time to do it on your terms.

Don't let AI happen TO you. But for some of you, that means something bigger than automating your workflows. It means moving toward work that's yours to keep.

Part 3: What if your work IS the growth product?

This one is for those of you who read the above and thought neither part applied to you because your work IS the growth product.

The analyst building the models the agency sells. The consultant producing the research the client pays for. The developer writing the code that IS the product.

Your work drives revenue. It's the growth engine. You feel safe.

But "my work drives growth" and "my job is safe" are not the same thing.

The business will keep selling that service. It'll just need fewer people to deliver it.

So whether you recognised yourself in part one, part two, or here - the move is the same.

Get closer to the bits AI can't do. Be the analyst they ask to be in the pitch. Be the one who reads the room when the data says one thing but your gut says another. Be the one who finds the new thing to analyse and buy that nobody was looking for. Be the one whose judgement call requires nerve, not just analysis.

The work that's yours to keep isn't necessarily the output, but it's very possibly what makes the output matter.

View on Substack →
Hamish Nicklin

CEO & Co-Founder

A Karaoke Company Just Crashed the Logistics Sector

February 20, 2026

A former karaoke company with $6m in market cap and less than $2m in quarterly revenue just crashed the entire logistics sector.

And that's why you might not get a choice about AI.

Algorithm Holdings - previously known as The Singing Machine Company, I sh*t you not - put out a press release in February claiming their new logistics platform could scale freight volumes by 300-400% without adding headcount. Within hours, C.H. Robinson Worldwide, a proper freight giant, saw its stock plunge 24%. The Russell 3000 trucking index had its worst day in years - literally billions in market cap … gone!

Was the press release credible? No. Was the timeline "delusional" as one analyst put it? Probably. Did Wall Street care? Not even slightly.

And it wasn't just logistics. An AI announcement from Palantir repriced every company selling enterprise software on a per-seat basis. Anthropic releasing legal workflow tools wiped $285bn from SaaS legal tech stocks - the Jefferies trading desk actually called it the "SaaS apocalypse." A startup nobody had heard of launching an AI tax planning tool sent Raymond James down 8.8% and Charles Schwab down 7.4%. Private credit firms, commercial real estate, wealth management - all hammered, not on any actual disruption or basis in fact … but on the fear of it.

And this is why you might not have a choice but to start embracing AI: The market has developed what one analyst called an autoimmune disorder - it can no longer tell real threats from exaggerated ones and so it's attacking everything. And the thing about an autoimmune disorder is that it doesn't matter whether the attack is rational. What matters is that the expectation has shifted ... and your competitors and clients are now operating inside that expectation.

So - if your competitors have shareholders, those shareholders will force them to embrace AI - because if they don't, the market will punish them as though they're already losing. Which means you will end up competing with someone who has. And given the 10x potential on productivity and growth, you really don't want to be the person who hasn't.

And if your clients are public companies, or trying to behave like one, then they will be expecting similar gains from you. On cost, speed, output. I'm sorry, but that's where this is going.

So... how will you keep up with them without breaking your people?

Look - I know it might seem self-serving for an AI consultancy to be saying all this, and honestly, it is. But I also know that if I were still running a business at the scale I used to, this logic would be keeping me up at night and I'd want to partner with someone who could actually help me move fast. A couple of Google Gems does not pass muster, by the way. Ask me how I know. There is a solution to all this.

View on LinkedIn →

Podcast

Hamish Nicklin

CEO & Co-Founder

Learn to prompt and you'll learn AI

February 17, 2026

I bang on about this all the time: want to learn AI? Learn to prompt. This paper from the University of Chicago proves why.

Same AI, same goal, same information - outcomes varied by 73% based purely on who wrote the prompt. The well-established gender gap in negotiations actually reversed when people delegated to AI.

The research shows that unconscious prompting - just writing what comes naturally, like you're chatting to a mate or how we've all got used to searching on Google - produces results that are heavily shaped by who you are, for better or worse.

Learning a prompt framework, like the one we teach people at AgentFlow, doesn't fight that. But it gives people a conscious structure to work within. What happens is you start making deliberate choices about what you're putting in the little chat box rather than just defaulting to old habits.

It's the difference between someone who's never been coached just "communicating" in a meeting versus someone who's learned to structure their thinking. Your underlying personality doesn't disappear - it just gets channelled more effectively.

There's so much more in this paper, but I've done the old Google NotebookLM thing on it and turned it into a podcast. If you want to have a listen - it's WELL worth 15 minutes as it gives so much insight into AI and where to start.

Hamish Nicklin

CEO & Co-Founder

AI sales teams? Ok - this is getting close to home

February 2, 2026

AI sales teams? Ok - this is getting close to home. I can't quite work out whether I like this or not.

Prebid - the open source project that standardised header bidding back in 2017 (if that means nothing to you: it basically forced ad buyers to compete fairly for publisher inventory instead of getting first dibs based on who had the best relationship with Google)... anyway, Prebid has now released an AI sales agent for publishers.

The idea is that AI agents on the buy side and sell side can negotiate direct deals without humans.

Now, what I find interesting is that even the big publishers - Telegraph, Guardian - are operating with a fraction of the sales teams they had a decade ago. They're still winning awards and doing brilliant creative work, but there are far fewer people doing it. And on the agency side, the specialist publisher buying teams who used to pick up the phone about a £20k sponsorship were merged into trading desks years ago.

The middle is worse. The regional titles, the specialist B2B sites, the quality niche publishers - they've cut their sales teams to the bone and there's often nobody left on the other end to call anyway.

Those deals didn't go to programmatic because advertisers chose cheap - they went there because there was nobody left to have the conversation, and this is an attempt to rebuild that infrastructure with AI instead of humans.

And look, if it works, a £5k sponsorship that's no longer worth it for a time-poor sales team suddenly becomes viable again. The quality cycling blog for MAMILs can pitch to Rapha's buying agent without needing a London sales team. The mid-tier publisher gets a route to market that isn't just dumping inventory into an exchange at commodity rates.

My biggest worry though is that agents will optimise for matching the brief, when the best sales people I've worked with optimised for challenging it. "You think you want X, but actually Y would deliver the outcome you're really after." I'm not sure that fits in a schema.

I'm genuinely unsure if this is exciting or depressing. Probably both.

Hamish Nicklin

CEO & Co-Founder

Why is there so much AI slop everywhere?

January 27, 2026

Why is there so much AI slop everywhere? Here's the maths:

LLMs predict the next token based on probabilities. Without constraints, they're drawing from everything they've trained on - so outputs regress toward the mean. Generic slop is literally the statistical average of human writing. So when people say AI can't be original, they are correct.

Is it inevitably going to be everywhere? I don't think so:

The model generates a probability distribution over its entire vocabulary for each token. Something like a Claude Skill, a Gemini Gem or a detailed prompt anywhere else modifies the conditional probability P(token|context) - the context now includes "avoid leverage and unlock", "use longer flowing sentences" or "be self-deprecating but confident" (crap examples of tuning, but you get the drift). Tokens that violate those instructions get suppressed, tokens that match get amplified. You're not changing the model exactly, but you are reshaping the probability landscape it's sampling from. And that's where the originality can creep back in - you're pulling it away from the average toward something more specific, more you.
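The suppress/amplify idea can be sketched in a few lines of Python. This is a toy, not how any real model is tuned - the words, logits and bias numbers are all invented, and real models condition on the whole prompt rather than adding a literal per-token bias - but it shows how reshaping the distribution pulls probability away from cliché tokens and toward your voice:

```python
import math

def softmax(logits):
    # Turn raw scores into a probability distribution that sums to 1.
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# Hypothetical raw scores for the next token (invented for illustration).
logits = {"leverage": 2.0, "unlock": 1.8, "use": 1.5, "wangle": 0.2}

# A style instruction acts roughly like a bias: suppress the clichés,
# amplify the distinctive word. Numbers are made up.
bias = {"leverage": -3.0, "unlock": -3.0, "wangle": +2.0}
biased = {t: v + bias.get(t, 0.0) for t, v in logits.items()}

before = softmax(logits)
after = softmax(biased)
```

Run it and the probability of "leverage" collapses while "wangle" jumps - same model, same vocabulary, reshaped landscape.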

This isn't some new technical skill you need to learn. You're simply articulating what you subconsciously already do when you write. The rhythm, the word choices, the things that make your writing yours. You've just never had to write it down before because you were the one holding the pen. Now you've got a ghost writer, you need to brief them. That's all tuning is.

Is this going to still read like AI slop - to the many AI bounty hunters out there who love to point out when AI has been involved in writing? Probably, they can sniff out AI slop from a thousand characters. But I guarantee it won't be as obviously slop as that to which we've become accustomed over the last 18 months...

So keep calm and learn to tune your AI.

#AI #AISlop #AgentFlow

View on LinkedIn →
Hamish Nicklin

CEO & Co-Founder

Ads in ChatGPT? Hardly the second coming of AdWords

January 22, 2026

OpenAI is putting ads in ChatGPT. It's amazing what getting desperate for cash will do.

The numbers are brutal. Only 3-5% of ChatGPT's 800 million users pay for a subscription, Deutsche Bank projects cumulative losses of $143 billion before they hit profitability, and they've committed $1.4 trillion to infrastructure over the coming years.

So, ads it is.

To be fair, they're taking this seriously. They've hired Fidji Simo from Meta (who built Facebook's News Feed ads), Kevin Weil from Instagram and Shivakumar Venkataraman from Google. They've appointed PHD as their media agency. These are people who know the advertising business.

The question is whether OpenAI will listen to them.

I started writing a cynical post about how they'll get it wrong due to the classic Silicon Valley exceptionalism problem. We have the audience. We have the data. Advertisers want both. How hard can it be?

I bet I'm right. I bet they'll miss or ignore all the things a sophisticated advertiser needs - the kind we had at Dentsu and the rest of the Big 5 holding groups. Independent measurement. Cross-platform attribution. MMM integration. Media Rating Council accreditation. Clean rooms. OpenAI's announcement says nothing about any of that. Another "trust me, bro" black box.

But then I realised why OpenAI might not bother to worry about them at launch...

Meta and Google don't make most of their money from sophisticated multi-million-dollar advertisers. They make it from the long tail - small businesses, SMEs, mum and pop shops who care not a jot about MMM or clean rooms. They just want to know: did it work? Did I get customers?

If OpenAI can crack self-serve and prove to small businesses that ChatGPT ads perform better than PPC or paid social, who cares about incrementality testing? That's where the real money is. Which makes what they've announced a bit underwhelming.

If they keep ads genuinely separate from the chat - which they've promised - they're essentially building a contextually relevant display ad product (yawn). A new channel with an interesting audience, perhaps, but it's hardly the second coming of AdWords... unless they create genuinely new formats and targeting options we haven't seen before. Otherwise they're just another ad platform competing in a crowded, commoditised space.

I wonder if the sophisticated advertisers even matter to them right now. I've got plenty of questions and not many answers.

Place your bets. I bet I'm eating these words in a few years!

Hamish Nicklin

CEO & Co-Founder

You're Staring at a Plywood Box

January 19, 2026

Rant incoming.

Still think AI is overhyped? Are you still boringly wanging on about how everything sounds the same because of it and therefore it's not for you, etc? (Yes, that's a hook, now get over it and read on!)

Well this post is for you.

I don't think your problem's AI, per se... I think it's that you've only used ChatGPT to write a few emails badly, perhaps you've played with Replit or Lovable, or maybe you've just read loads of slop from other people who equally have no f*cking idea how to use the tool and now you think you understand what's possible.

You don't.

My business partner Stephen has 25 years' experience as an engineer at huge global tech companies (and he's the one who builds things properly for our clients) - but he's been through my code on GitHub and couldn't believe what's possible now compared to even just a year ago (nah, 3 months). That's the point. Not that I'm an engineer - I'm obviously not. The point is what these tools can actually do when you stop dismissing them based on a sample size of one badly-prompted chatbot (that's got progressively worse recently).

You sound like Darryl Zanuck, head of 20th Century Fox, who said in 1946 that "television won't hold any market because people will get tired of staring at a plywood box every night." He saw a plywood box and completely missed the revolution that was to come.

And look - if you're still complaining about "ChatGPT tone" it really just shows you have NO IDEA, so maybe stop whingeing and start exploring (and use Claude, it writes better). Or not, I don't care. I know what I've built and I know it works.

I've been holding that in for far too long. But recently there have been too many people I genuinely respect talking about AI in a way that - and I hate to say this - seems to come from a place of total ignorance, and I couldn't hold it in anymore. I love you dearly, but good lord, get in the trenches if you're going to fire shots!

And no, I definitely wouldn't be able to land a 747.

Hamish Nicklin

CEO & Co-Founder

It's not AI bias. It's what we give it to work with.

January 14, 2026

I had to read Jon Block's piece in The Media Leader about five or six times before I properly understood what he was arguing. Not because it's poorly written - it's actually a complicated and nuanced subject that needs some serious thinking. So let me try to summarise it, in case you felt the same way.

Here's what I think he's saying: AI agents are starting to make media planning decisions, and they're favouring digital over TV. Not because digital works better, but because the data is denser and easier to compute. His solution is that broadcasters need to build "queryable knowledge architectures" - structured data sources that agents can access in real-time via protocols like MCP (think of it as an API, but specifically built for AI) - rather than falling back on whatever the foundation model learned during training.

I think that's... actually pretty sensible?

But before we move on, I want to push back on one bit of the framing. Block suggests that foundation models are "trained on the open internet" which is "dominated by digital metrics," with the implication being that AI is somehow ignorant of TV's value.

I don't think that's true.

Here's the thing... foundation models have read a lot. They've ingested decades of IPA Effectiveness data, everything Peter Field and Les Binet have written, the Thinkbox research library, Ehrenberg-Bass, Byron Sharp. Claude knows the 60/40 rule. It knows why it's been revised. Ask any LLM "is TV advertising effective?" and you'll get a nuanced answer about reach, attention quality and long-term brand effects. Because it's read the research.

The idea that AI is ignorant of TV's value just doesn't stack up.

Where I agree with Jon - and this matters - is that knowing the theory isn't the same as being able to act on it. Digital wins by default not because AI prefers it, but because digital metrics are easier to compute. Clicks, conversions, last-touch attribution... they're clean signals. Brand lift, memory encoding, attention quality... they're messier and harder to feed into an optimisation loop.

So when agents favour digital, I don't think that's bias per se - and this is important, because "bias" is becoming one of those words thrown around AI quite a lot, in many instances with absolutely good reason. AI assuming people are white middle-aged men, for example. But this isn't one of those occasions. I think that's agent configuration. There's a difference between a foundation model and an agent. Agents use foundation models, yes, but you can point them at specific context and tell them to prioritise it. That's what RAG is for. It takes work to set up properly - I'm not pretending it's trivial - but it's absolutely doable.
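The "point it at specific context and tell it to prioritise it" move can be sketched in a few lines. Everything here is invented for illustration - the snippets, the word-overlap scoring, the prompt wording - and a production RAG setup would use embedding search over a real document store, but the shape is the same: retrieve trusted context, then instruct the model to reason from it first:

```python
# Toy document store standing in for a trusted effectiveness library.
LIBRARY = [
    "IPA effectiveness data: TV delivers strong long-term brand effects.",
    "Attention research: skippable digital formats earn low attention.",
    "Last-touch attribution over-credits bottom-of-funnel digital channels.",
]

def retrieve(query, docs, k=1):
    # Score each snippet by word overlap with the query (toy retriever).
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question):
    # Prepend the retrieved context and tell the model to prioritise it.
    context = "\n".join(retrieve(question, LIBRARY))
    return f"Use this context first:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How effective is TV for long-term brand building?")
```

The point is configuration: the agent answers from the context you gave it, not from the statistical average of its training data.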

Which brings me back to Block's solution - broadcasters building queryable knowledge architectures - an idea I actually really like. Imagine Thinkbox or ITV or Channel 4 creating structured effectiveness data with MCP access that any agent could query. That would be genuinely useful.

But here's my concern - can you trust broadcasters, or any media owner let's be honest, to be honest brokers of that data? If the MCP says "TV is always the answer," that's just a sales team with an API, isn't it? You'd be swapping one bias for another.

I think the responsibility sits with whoever's building the agent - probably the agency. Use broadcaster data, sure, but as one input among many. Test it, validate it against your own models, blend it with sources you trust. Same as you'd do today with any media owner research. Nobody would just plug into a broadcaster's MCP and say "right, that's our planning sorted."

And let's not forget the most important thing in all of this - keep the human in the loop. The planner absolutely needs to be the one who looks at the plan and has final approval before any execute button is pressed. What makes media planning great is not just data analysis and historical expectations of efficiency - it's the human judgement and experience that you just can't codify. Not yet anyway. So please don't read this as "we can build an agent that actually works so we can get rid of planners and buyers." I just don't believe that's the case.

I'm writing this because I worry articles like this - however well-intentioned - risk creating an anti-AI narrative in TV circles that becomes self-fulfilling. If broadcasters believe AI is inherently biased against them, they'll disengage rather than lean in and help shape how these tools are built. And that would be a shame, because the fix is genuinely within reach.

Hamish Nicklin

CEO & Co-Founder

Some agreeable disagreeing about how to use AI

January 12, 2026

Omar Oakes wrote a piece this morning that stopped me mid-scroll - I've followed his thinking for a while and normally I'm nodding along to every line, but this one is half brilliant and half dangerous.

The brilliant half: AI used for creative judgment is making everyone sound the same. Verbose and declarative and oddly empty. WPP claiming they've bottled decades of creative expertise into "Super Agents" is exactly the kind of bullshit that deserves calling out. Omar's right - when you use AI to replace thinking rather than sharpen it, you get convergence and you lose friction and, as Omar says, "friction is where meaning lives."

I will die on that hill with you, Omar.

And I've made the point before that CFOs who see "20% time saved" and immediately think "20% headcount reduction" are making a terrible mistake. Jobs are messy. The friction that people bring - the dissent, the judgment, the "this doesn't feel right" - is exactly what Omar says we're at risk of losing. I agree with that too.

So where's the dangerous half?

It's the risk that Omar's argument gets used to tar all AI automation with the same brush. Because technical automation and creative automation are not the same thing. And treating them as the same problem leads to the wrong conclusions.

For instance... when I was CEO of media at Dentsu, the digital teams had the highest burnout and the highest churn and the lowest satisfaction scores. They weren't burned out because they lacked creative opportunity. They were burned out because the technical work never stopped - pulling data, building reports, troubleshooting performance drops, validating account setups, flagging tracking issues, running optimisations, doing QA. All of it manual and urgent and all of it eating up time they didn't have.

And they were still expected to do the strategic thinking and the client work on top. That's why they burned out. There literally weren't enough hours.

That's not friction where meaning lives. That's just friction.

And here's the bit that really mattered: those people didn't want to be spreadsheet and data monkeys. They wanted to do more strategic work. More client relationships. More of what they thought they were actually getting paid for. The drudgery wasn't just tiring - it was frustrating. It was stopping them doing the work that Omar rightly says matters most.

MCP - Model Context Protocol - is the direction of travel here. Google Ads already has an official MCP (read-only for now), and others are emerging. Even today, AI can query live data, pull reports, and surface issues without anyone touching a spreadsheet. The reporting layer doesn't get faster - it starts to disappear. Write access - letting AI propose and execute optimisations - is coming, but we're not there yet at enterprise scale. When it arrives, the rest of that technical work shrinks dramatically. My back of the fag packet numbers suggest 45% of a typical paid social team's time could be freed up. Some of that's possible today. All of it will be soon.

That 45% isn't creative judgment being automated away. It's people being freed FROM technical drudgery TO exercise more of the judgment and dissent that Omar rightly wants to protect.

And in this instance, everyone wins. The people get to do the work they actually wanted to do - less burnout, more fulfilment. The clients get more strategic thinking on their accounts. The agency gets happier clients who stay longer and spend more. Lower churn or higher revenue - probably both.

That's not "AI takes jobs." And it's not just "AI saves money." It's "AI makes the whole thing work better for everyone."

So Omar - I'm not disagreeing with you. I'm asking you to be more precise. Your headline says "technical labour" but your argument is about creative judgment. Those aren't the same thing. And when you blur the line, you risk scaring people away from the automations that would actually free them to do the work you're trying to protect.

Let me give you a real working example of an AI workflow I use almost every day. It's this LinkedIn piece - and just about every LinkedIn piece I write. Claude helped me draft this. Here's how it worked: I did the thinking - the idea, the structure, the argument, the challenge. Claude put flesh on the bones, then I edited the hell out of it until it sounded like me.

That's the right use of AI for me because I'm not a natural writer. My grammar and spelling are awful and I waffle on and repeat myself and go in circles as I write down what I'm thinking. I find it really hard to edit that rambling mess into something people might actually want to read. It gets overly long and confusing and self-referential. It's not good.

But AI is excellent at sorting out my spelling and grammar and making sense of my circular thinking. It puts it in a form I'm more comfortable with. Yes, I could probably get there myself - but it would take a lot longer. And I'd rather spend that time on the argument.

Which is sort of the whole point I'm making.

What do you think, Omar?

I edited this piece on 14th January 2026 to clarify the current state of MCP integrations that I inadvertently OVER HYPED. The original version overstated what's available today - the direction is right, the timeline was wrong. There ARE MCPs available that technically can do all this today, but they are not official and very brittle. The only official MCP is Google's read-only Google Ads version, and even that is clunky. BUT it IS the direction of travel. The rest of the article still holds true.

Hamish Nicklin

CEO & Co-Founder

C-suite execs: do you actually know what "AI" means?

January 11, 2026

I remember when programmatic was the word nobody could define. Header bidding, DSPs, SSPs, PMPs, horizontal auctions - we used to sit on panels debating what "programmatic" actually meant, because everyone used it for whatever they wanted it to mean. Sound familiar?

Back then I spent hours in rooms with whiteboards, learning the plumbing (and the lingo). It was painful. But it meant I could fight for resources at the exec table, back the team when they needed budget and help them navigate vendors... and our CFO who was demanding returns and immediate growth.

We grew Guardian ad revenue when everyone else's was falling because we drove programmatic hard. I'm not taking credit for that - the team were amazing - but I will take credit for giving them exec air cover: the budget, the headcount, the protection from short-term thinking and the backing when vendors or the CFO pushed back. And I could only do that because I understood it at the right level - not in the weeds, not at 30,000ft, but at the level a C-suite exec actually needs to make decisions and back their team.

That's what I'm trying to offer here for AI.

Because at some point soon - if it hasn't happened already - someone's going to walk into your office asking for budget for "an AI project". Or a vendor's going to pitch you "an AI solution". Or the board's going to ask what your AI strategy is.

And you need to know enough to ask: what type of AI? How complex? Is that the right level for the problem? Is this priced right for what it is?

You don't need to know how the models work. But you do need the vocabulary to have the conversation and a sense of what's actually possible.

So here's my working framework:

Chat (Claude, ChatGPT, Gemini, Grok if you're a perv)

Agentic workflows (you tell AI to do a predefined set of things, it returns with results, you decide what to do next)

Agents (you give it an objective and it figures out how to achieve it)

Agentic systems (you create an AI boss, give it the overall objective and it decides which agents to create, what systems to interact with, what data to collect, when good is good enough and so on)

That's a rough hierarchy and the lines blur - but it's useful for thinking about what you're actually buying or building. And it's a hell of a range.
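For anyone who wants the middle two levels made concrete, the difference can be sketched in toy Python (every name here is invented): an agentic workflow runs steps a human fixed in advance, while an agent loops, choosing its own next step until the objective is met:

```python
def run_workflow(steps, state):
    # Agentic workflow: the human predefined the steps; the AI just
    # executes them in order and hands back the result.
    for step in steps:
        state = step(state)
    return state  # a human decides what happens next

def run_agent(objective_met, choose_step, state, max_steps=10):
    # Agent: given an objective, it picks its own next action each time
    # round, stopping when the objective is met (or it runs out of budget).
    for _ in range(max_steps):
        if objective_met(state):
            break
        state = choose_step(state)(state)
    return state
```

An agentic system would sit one level above this again: a boss loop that spawns and coordinates several such agents, decides what data they need and judges when good is good enough.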

I remember when digital was new. I was the junior who actually understood it and my bosses just nodded along hoping I wasn't bullshitting them. I ran rings around them. That wasn't good for them. Don't be them.

If you want help figuring out where to start, that's what we do at AgentFlow. We help you work out where all these AI levels actually fit in your business - whether that's making your team more productive, unlocking new revenue or just making yourself easier to deal with - then build the right solution and support it as you grow. We start with discovery, then figure out what's actually worth building.

DM me or find us at www.goagentflow.com.

I'm going to try and do more of this - helping C-suite navigate the AI madness. Follow along if that's useful.

View on LinkedIn →
Hamish Nicklin

CEO & Co-Founder

Millennial C-Suite: are you about to make the same mistake the boomers made?

January 9, 2026

When I started work in the late 90s, email had just arrived and there were senior execs who'd get their PAs to print them out, annotate them by hand, then have the PA type up the replies. It looked absurd from where we were sitting... but honestly, it made a kind of sense - their two-finger, jabby and pointy, ten words a minute typing genuinely was more expensive than paying someone else to do it, and besides, they were retiring in a few years so you couldn't really argue with it (and because I was Gen X, I didn't).

So what's your excuse for doing the same with AI?

You've got 15+ years of career left and if you're anything like I was 36 months ago, you're nervous about looking stupid, you don't have time to learn something new and you've no idea where to even start. So, what, you've delegated it to the team and told yourself that counts?

It doesn't.

Perhaps try this: imagine an AI Chief of Staff that actually knows your context, your priorities, your decisions. One that maintains a rolling picture of your projects, flags what's stuck, tracks what you're waiting on from others and prepares your thinking before you need it. One that captures all those ideas you email yourself at 11pm and actually surfaces them when they're relevant, instead of letting them rot in your inbox. Not a chatbot you have to re-explain everything to every time - a persistent system that learns how you work.

The tool exists. It's called Claude Code and most people think it's for developers... which is like saying a Swiss Army knife is just for cutting. Yes, you'll need to have a conversation with your SecOps team - we'll give you that argument in the longer piece on our site. But getting Claude Code to become an AI Chief of Staff took me 30 minutes and I called it Jarvis. I know. I'm sorry.

The tools aren't hard. But you can't delegate the learning - start with your own workflows. There's a full walkthrough on our website (link in comments) of how I did it - please copy it and give it a go - it's the best way to learn. Or just call us.

I've also put it in a repo on our AgentFlow GitHub if you'd rather - though if you know what that means, you probably don't need the link. Ask me if you want it and I'll ping it to you.

(And if you've already got a great EA or Chief of Staff? Learn this together. It's not about replacing them - it's about supercharging what you can both do.)

(Credit to Nate B. Jones whose YouTube and Substack first got me thinking about this - give him a follow, he's a legend.)

View on LinkedIn →
Hamish Nicklin

CEO & Co-Founder

How to Build an AI Chief of Staff (and How to Get Your SecOps Team to Say Yes)

January 8, 2026

So you read the LinkedIn post and you're here for the how.

The pitch was simple: don't delegate learning AI to your team. Start with your own workflows. And what better place to start than building yourself something genuinely useful - an AI Chief of Staff.

A Chief of Staff is someone who tracks everything, spots what you've missed, deals with issues before they reach you and makes sure nothing falls through the cracks. In the British military they call this an ADC - an aide-de-camp, whose job is to keep the boss out of trouble.

Here's how to build one.

This article will take you about 10 minutes to read and maybe an hour to implement. That hour is an investment in yourself - learning by doing, not by delegating to someone else.

Why Claude Code (and why it's not just for developers)

Most people think Claude Code is a tool for software engineers. That's like saying a Swiss Army knife is just for cutting. Yes, it can write code. But what makes it different from ChatGPT or regular Claude is that it persists. It remembers. It can read and write files on your computer. It doesn't reset every time you close the window.

Now, if you're thinking "doesn't Copilot do this?" - fair question. Copilot does know your context, it does learn over time, and if you're already living in Microsoft 365 it's a legitimate option. You could build something similar there. But here's why I prefer Claude Code: the memory is yours. You can see it, edit it, control exactly what it knows and how it thinks about your work. It's not a black box that's learning about you somewhere in the cloud - it's a set of files on your machine that you own. For me, that control matters.

If you've never used a terminal before, this might feel intimidating. I promise it's not as scary as it looks. You're basically just typing instructions instead of clicking buttons. And once it's set up, you'll barely think about it.

What you're actually building

Think of this as giving the AI a workspace and a set of habits. Don't worry about how to set it up - we'll give you a prompt that does most of the heavy lifting. For now, just understand what the pieces are.

The workspace is a folder on your computer with a few key files. The first is something called CLAUDE.md - essentially an instruction manual that tells the AI who you are, how you like to work, what your priorities are and what it's allowed to do without asking. This means you don't have to re-explain yourself every session. It already knows.

Then there are a few simple files that act as your system. An INBOX.md for capturing things that need processing. A PROJECTS.md that tracks what you're working on and where each thing stands. A WAITING_FOR.md that tracks what you've delegated to others and when you need to chase. A DECISIONS.md that logs what you decided and why. A CLIENTS.md for keeping track of who you're working with. And crucially, an IDEAS.md for capturing those random thoughts and sparks that hit you when you're walking the dog or in the shower - not tasks, just things that need to marinate. When ideas get acted on or dismissed, they move to IDEAS-ARCHIVE.md.

The magic is in the surfacing. Your Chief of Staff will bring relevant ideas back to you at the right moment - during check-ins or when you're working on something related. No more losing good thoughts because you didn't have anywhere to put them.

Claude Code will create all of this for you. You just need to answer its questions.
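
If it helps to picture what Claude will build, here's the whole workspace as plain terminal commands. This is purely illustrative - you don't need to run it, because the setup prompt in the next section creates it for you by asking questions:

```shell
# Illustrative only: the end state Claude Code creates when you run
# the setup prompt. The folder normally lives in your home directory;
# the file names are the ones described in this article.
dir="chief-of-staff"
mkdir -p "$dir/work-orders"
touch "$dir/CLAUDE.md" "$dir/INBOX.md" "$dir/PROJECTS.md" \
      "$dir/WAITING_FOR.md" "$dir/DECISIONS.md" "$dir/CLIENTS.md" \
      "$dir/IDEAS.md" "$dir/IDEAS-ARCHIVE.md"
ls "$dir"
```

The point to notice: everything your Chief of Staff "knows" lives in eight plain-text files and one subfolder. That's why you can read, edit and back them up like any other documents.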

You can also connect Claude Code to your calendar and email - giving the AI eyes into your schedule so it can see what's coming up and help you prepare. We'll come to that once you've got the basics working.

Getting your machine ready

Before we install Claude Code, let's make sure you know what we're working with. Don't worry - this is simpler than it sounds.

What is a terminal?

A terminal is just a window where you type commands instead of clicking buttons. Think of it as texting your computer rather than pointing at things. Every Mac and Windows PC has one built in - you've just probably never opened it.

Opening the terminal on a Mac

Press Command + Space to open Spotlight, type "Terminal" and hit Enter. A window will open with a blinking cursor. That's it - you're in.

Opening the terminal on Windows

Press the Windows key, type "PowerShell" and hit Enter. Make sure you're opening "Windows PowerShell" not the older "Command Prompt." A blue window will open with a blinking cursor.

Installing Claude Code on a Mac

Copy and paste this line into your terminal and press Enter:

curl -fsSL https://claude.ai/install.sh | bash

Wait for it to finish. You'll see some text scrolling - that's normal. When it's done, you'll get a message confirming the installation.

Installing Claude Code on Windows

Copy and paste this line into PowerShell and press Enter:

irm https://claude.ai/install.ps1 | iex

Same as Mac - wait for it to finish and look for the confirmation message.

Setting up your account

You'll need either an Anthropic API account with billing set up, or a Claude Pro or Max subscription. If you're already paying for Claude through claude.ai, the Max subscription is probably the simplest option - it covers both the web interface and Claude Code.

When you first run Claude Code (just type claude in your terminal and press Enter), it will walk you through connecting your account. Follow the prompts - it's just clicking through a browser authentication.

If something goes wrong

Type claude doctor in your terminal. This runs a diagnostic that checks your installation and tells you what's broken. Nine times out of ten, it'll either fix the problem automatically or tell you exactly what to do.

The setup prompt

Now for the good bit. Rather than giving you a long list of manual steps, here's a prompt you can paste into Claude Code that will guide you through building your Chief of Staff system. Claude will ask you questions, create the files and set everything up based on your answers.

Open your terminal, type claude and press Enter. Once Claude Code is running, paste this:

I want you to help me set up an AI Chief of Staff system. This means creating a folder structure and a set of files that will help you maintain context about my work across sessions.

Please do the following:

1. Create a folder called "chief-of-staff" in my home directory

2. Inside that folder, create these files: CLAUDE.md, INBOX.md, PROJECTS.md, WAITING_FOR.md, DECISIONS.md, CLIENTS.md, IDEAS.md, IDEAS-ARCHIVE.md

3. Create a subfolder called "work-orders"

4. Once the structure is created, interview me to build my CLAUDE.md file. Ask me about:

- Who I am and what I do

- How I like to work (when I'm most productive, how I prefer to communicate, how I make decisions)

- My current top 3-5 priorities

- Key people I work with and what I rely on them for

- What you're allowed to do without asking me (e.g. drafting, research, organising files) vs what requires my approval (e.g. anything external-facing, anything involving money or commitments)

5. After the interview, write the CLAUDE.md file based on my answers

6. Then run me through a quick "Intention Clarifier" - ask me what's been on my mind or nagging at me lately, and help me figure out what the real next step is. Capture the output in the appropriate file (PROJECTS.md, WAITING_FOR.md, DECISIONS.md, IDEAS.md or INBOX.md)

7. Explain the ideas capture system: IDEAS.md is for thoughts and sparks that need to marinate (not tasks). Ideas get light tags when obvious ([client], [content], [product], [process], [personal]). You'll surface relevant ideas during check-ins or when working on related topics. Ideas that get acted on or dismissed move to IDEAS-ARCHIVE.md.

8. Finally, explain how to do an end-of-day reconciliation and a morning check-in so I know how to use this system going forward

Ask me your first question when you're ready.

That's it. Claude will take it from there. Answer honestly - the more context you give it, the better the system works.

Once you've got the basics working, you can give your Chief of Staff more context by dropping in key documents - your annual strategy, your objectives, your job spec, even org charts or team structures. The more it knows about what you're trying to achieve, the better it can help you prioritise and spot what's missing.

Connecting your calendar and email

Now you've got the core system running, it's worth connecting Claude Code to your calendar and email. This gives your Chief of Staff eyes into your schedule - it can see what's coming up, help you prepare for meetings and flag when things are getting crowded.

I thought this was going to be painful. Azure app registration, API permissions, tokens - it all sounded like the kind of thing I'd put off forever. Turns out Claude Code just walks you through it. You tell it you want to connect to Outlook and Calendar via MCP, it tells you exactly what to do in Azure step by step, and fifteen minutes later you're connected to Teams, SharePoint, Outlook, Calendar, OneNote and Tasks. Everything you'd want a Chief of Staff to see.

If you're using Google Calendar and Gmail, it's even simpler - just ask Claude Code to help you connect and follow its instructions.

The key is: don't overthink it. Just ask Claude Code what to do and do what it tells you.

Once you've connected your email, your morning check-in gets even better. I email myself ideas when I'm out and about, and my Chief of Staff picks them up overnight, tags them and adds them to IDEAS.md. No more good thoughts lost in an inbox I'll never search.

The daily rhythm

The system only works if you use it. But "using it" doesn't mean hours of effort - it means a few minutes of intentional check-ins.

In the morning, you open Claude Code and ask it to help you clarify your intentions for the day. Not a to-do list, but a sense of what would make today feel successful. The AI can interview you - asking questions until it's clear what you actually need to do versus what's just floating around in your head.

During the day, you update the files as things happen. Or you ask the AI to update them based on what you tell it. "I just spoke to Sarah and we agreed to push the launch to March - update PROJECTS.md."

At the end of the day, you run a quick reconciliation. What got done? What's still open? What needs to move to tomorrow? This takes five minutes and it means you start the next day with a clean picture instead of a foggy sense of dread.
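
For a sense of what those updates look like on disk, here's a hypothetical PROJECTS.md entry - the project, names and dates are invented for illustration:

```markdown
## Product launch
- Status: pushed to March (agreed with Sarah, logged today)
- Next step: confirm the revised timeline with the team
- Waiting on: final sign-off (also tracked in WAITING_FOR.md)
```

Nothing clever - just a running record that's always current, so neither you nor the AI starts from a blank page.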

The governance layer

This is important. Not everything should be delegated, even to a very smart AI.

The rule is reversibility. If something can be undone easily - drafting a document, researching a topic, reorganising your notes - let the AI do it. If something is irreversible or high stakes - sending an email to the board, making a financial commitment, publishing something publicly - that needs your explicit approval before it happens.

Build this into your CLAUDE.md file. Tell the AI what it can do autonomously and what requires a checkpoint. This isn't about trust, it's about designing a system that doesn't let small errors become big problems.
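
Here's one hypothetical way that split might read inside your CLAUDE.md - the wording is entirely up to you:

```markdown
## Allowed without asking (reversible)
- Draft documents and emails (saved locally, never sent)
- Research topics and summarise findings
- Reorganise notes and update the tracking files

## Requires my explicit approval (irreversible or high stakes)
- Sending anything external-facing
- Anything involving money or commitments
- Publishing anything publicly
```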

How to get your SecOps team to say yes

Now for the conversation you're probably dreading. You want to use this tool. IT will have concerns. Here's how to navigate that.

First, understand what they're actually worried about. It's usually some combination of: where does the data go, who can see it, does this create compliance risk and is this just shadow IT that's going to cause problems later. These are legitimate concerns and dismissing them won't help you.

Second, reframe the conversation. Don't ask for permission to "use AI." Ask for permission to run a controlled pilot. Ninety days. Just you, maybe one or two others. Non-client data only. Your own working documents. Full audit trail. This is much easier to approve than an enterprise-wide rollout.

Third, address their concerns specifically. Anthropic - the company behind Claude - has enterprise agreements with data retention controls and no training on your data. The file-based system you're building creates its own audit trail. You can agree upfront what types of data are in scope and out of scope.

Fourth, make it their win too. If this works, they'll have designed the governance framework before everyone else in the company starts asking. They'll be ahead of the curve when the board wants to know what your AI policy looks like. SecOps people don't want to be blockers - give them a way to be enablers.

And if they still say no? Ask them what specifically they'd need to see to say yes. If the answer is "nothing, ever," then you've got a different conversation to have - probably at exec committee level - about whether your company is serious about staying competitive.

What to expect

The first few weeks will be clunky. You'll forget to update the files. The AI will misunderstand what you meant. You'll wonder if this is actually saving you time or just creating a new kind of admin.

Push through it. By week four or five, something shifts. You start to notice that you're not carrying as much in your head. That you can pick up a project after a week away and actually know where you left it. That the AI is starting to anticipate what you need before you ask.

By month three, you'll wonder how you worked without it.

Where to start

If this sounds like something you want to build yourself, start with the CLAUDE.md file. Spend thirty minutes writing down who you are, what you're working on, how you like to communicate and what your priorities are for the next quarter. That document alone - even before you do anything else - will clarify your own thinking in ways you might not expect.

Then install Claude Code, point it at that file and start a conversation. See what happens.

If you'd rather not do this yourself - maybe the terminal feels like a step too far, maybe you're locked into Copilot or Gemini, maybe you just want someone else to figure it out - we can build this for you. Same outcome, tailored to your tools and your workflows, with proper governance designed in from the start. That's what we do. Get in touch.

And if you're thinking "this is great but my team still can't use the tools we've already got" - take our Copilot Quiz to see where you are and whether our Copilot Sprint might help.

And if you've already got a brilliant EA or Chief of Staff? Even better. Learn this together - you bring the strategic context, they bring the operational rigour. This isn't about replacing people, it's about giving your whole team leverage. More on how to do that in a future piece.

I should acknowledge Nate B. Jones here - his YouTube channel and Substack were what first got me experimenting with this approach. If you want to go deeper, give him a follow. He's a legend.

Hamish Nicklin

CEO & Co-Founder

The AgentFlow Manifesto

January 2026

Most businesses come to us saying some version of "we need to do something with AI."

And the first thing we ask is: why? What makes you think that?

Not because it's a bad instinct - it's usually right. But because the answer tells us everything about where to start.

Sometimes it's strategic - a board asking questions, competitors making noise, a sense that the world is shifting and they're not sure how to shift with it. Sometimes it's painfully specific - a process that's bleeding time, a team that's drowning, a growth ambition that can't be met without hiring people they can't afford.

And sometimes - honestly - it's just that their team wants to throw their laptops out the window. That's as valid a starting point as any.

AI is a spectrum

Here's what we've learned: AI isn't one thing. It's not a single solution you bolt on. It's a spectrum - and where you should be on that spectrum depends on the problem you're solving, the risk you're comfortable with and what it's worth to get it right.

At one end, it might be as simple as helping your people actually use the AI tools you've already paid for. Copilot, ChatGPT, whatever's sitting there underutilised. No new tech, no big project - just showing people how to get more from what's already on their desktop.

At the other end, it's agentic systems - AI that doesn't just assist but actively works alongside your team, handling complex workflows with humans in the loop where it matters.

And in between? A whole range of possibilities. Automations that save hours. Workflows that scale without scaling headcount. Tools that turn junior people into experts and experts into superheroes.

The point is: you don't always need a massive data engineering overhaul to get value from AI. Sometimes you do. Often you don't. And the skill is knowing which is which.

Discover, Build, Embed

We help businesses figure this out through a simple model: Discover, Build, Embed.

Discover is where we help you see the landscape. We use a framework - a simple 2x2 - that maps AI opportunities across two dimensions: whether they're internal or external to your business and whether they're about productivity or growth.

Most people, when they think about AI, only see one corner of this grid. They're thinking about productivity (making things faster) or they're thinking about customer-facing applications - but rarely both and rarely the full picture.

The 2x2 opens their eyes to what's possible. And then - crucially - we don't leave it abstract. We work with you to plot your specific opportunities onto it, with real ROI estimates. It becomes your map.

Build is where we help you deliver. The right solution for the right problem - whether that's training your team to use existing tools better, building automated workflows that keep humans in the loop, or developing more sophisticated agentic systems. We're not trying to upsell you to complexity. We're trying to match the solution to the problem.

Embed is where we make it stick. Because the graveyard of AI projects is full of brilliant pilots that never made it into the business. We help you get adoption, build capability and make sure the value actually lands.

Outcomes that matter

The outcome we're working towards? It depends which quadrant matters most to you.

If it's internal productivity - we're measuring hours freed. Time your team gets back to do the work that actually matters.

If it's internal growth - we're looking at reduced churn, better retention, capability that compounds.

If it's external productivity - we're after growth at near-zero marginal cost. Scaling what you do without proportionally scaling what you spend.

If it's external growth - we're measuring revenue. New customers, new markets, new opportunities.

What we're not

Here's what we're not: we're not a big consultancy that'll spend six months telling you what you already know and then hand you a PowerPoint. We're not a tech vendor trying to sell you a platform. We're not pretending AI is magic, or that every problem needs a complex solution.

We're practical. We start with the annoying stuff - the things that make your team want to throw their laptops out the window - and we work towards the strategic stuff: how you grow without proportionally growing cost.

And we're honest. Sometimes the answer is "just use what you've got better." Sometimes it's "you need to build something." Sometimes it's "this isn't actually an AI problem." We'll tell you which.

That's AgentFlow. We help businesses figure out where AI can actually help, build the right solution for the problem and make sure it sticks. Simple as that.

Hamish Nicklin

CEO & Co-Founder

Your Move 37 Moment is Coming

January 7, 2026

In 2016, world champion Lee Sedol sat down to defend humanity's honour in Go - the world's most complex board game, with more possible board positions than atoms in the universe. A game humans had dominated for 2,500 years. His opponent: Google's AI, AlphaGo.

In game two, move 37 changed everything. The AI played a move no human ever had - brilliant, illogical, devastating. But Sedol didn't walk away. He studied, adapted and in game four played move 78 - a play so creative that AlphaGo faltered and resigned.

I believe most knowledge workers will experience their own "Move 37 moment" this year - that pivotal instance where AI demonstrates capabilities that surpass what you thought was your strength.

Mine happened when AI completed strategic work for AgentFlow in five minutes. Work I previously considered my competitive advantage. My "Move 78" was recognising that while AI handles the heavy lifting, human judgment and experience determine what's genuinely valuable.

The future doesn't belong to those who fear technology or blindly trust it. It belongs to those who learn to collaborate with it strategically.

What will you do when AI challenges your expertise? How will you find your Move 78?

View on LinkedIn →
Hamish Nicklin

CEO & Co-Founder

£2,000 and Two Months vs 35 Minutes and 45p

January 6, 2026

My wife Helena had a web app built a couple of years ago. It cost £2,000 and took two months. Yesterday I rebuilt it using Claude Code in 35 minutes. It cost 45 pence.

Helena's a WSET-trained sommelier with a brilliant concept called "vinalogy" - wine grapes as character personas. Cabernet Sauvignon is the rugby player who peaked in sixth form. Riesling is your friend who brings obscure cheeses to dinner parties. It's a fun, accessible way to understand wine.

I used Claude Code to research her existing vinalogies, write new ones in her voice, generate character images and build the complete quiz application - all from a simple spreadsheet.

If you're still using AI just to write emails and summarise meeting notes, you are being left way, way behind.

Six weeks ago, this same project would have required significantly more iteration. The tools are improving that fast. Right now I'm simultaneously running Claude Code sessions building a sales agent and CRM system for AgentFlow.

What could you build in 35 minutes?

View on LinkedIn →
Hamish Nicklin

CEO & Co-Founder

Stop Telling People to "Get Their Data in Order"

January 4, 2026

"You need to get your data in order before you do AI."

This advice is everywhere. It's also vague to the point of being meaningless. Which data? In what order? For what purpose?

There are legitimate data considerations: Do you know what systems you have and whether the data is accurate? Can information flow between systems? Is terminology consistent across departments? Is data accessible and legally compliant? Are your processes standardised? Do you have enough historical data for training?

But here's the thing: not all of these factors need addressing simultaneously. Which ones matter depends entirely on your specific use case.

I've seen clients abandon AI projects unnecessarily, believing their data was too disorganised. Once we clarified which specific elements mattered for their goals, they discovered they were closer to readiness than they thought.

Next time someone tells you to "get your data in order," ask them to specify which aspects matter. If they can't answer clearly, they're probably just repeating what they've heard.

View on LinkedIn →
Hamish Nicklin

CEO & Co-Founder

It's You, Not Your AI

November 13, 2025

I was getting frustrated with Claude Code yesterday. Nothing was working. Then I saw its internal reasoning: "The user seems frustrated. I have attempted this fix three times without success..."

I felt guilty. Like I'd been shouting at Dobby.

My business partner and I have coined a term: "Bad AI Days." Those days when nothing seems to work, when the AI feels broken, when you're ready to throw your laptop out the window.

Here's the uncomfortable truth: the AI isn't having a bad day. You are. Garbage prompts lead to beautifully articulated and believable garbage out.

Users who lack proper prompting skills consistently experience poor AI results. It's not the tool - it's how you're using it.

That's why we've built a diagnostic tool to help people distinguish between problematic AI performance and problematic prompting habits. We're looking for beta testers.

And for what it's worth, Claude Code with Opus 4.5 outperforms everything else I've tried. The problem was me.

View on LinkedIn →
Hamish Nicklin

CEO & Co-Founder

C-Suite: You Have to Find the Time

November 2025

If you're C-suite, you probably spent most of yesterday in meetings. And today. And tomorrow's already booked solid too. God knows how you have the time to read this (but please do).

When I was doing your job it meant my thinking time happened in the shower, on the loo, walking the dog, on the commute. I know this is easier for me to say now that I'm not drowning in back-to-backs, but...

Were I still in a C-suite job, I have no idea how I'd have kept up with AI. My "understanding" would almost certainly have come from skimming think-pieces in Campaign, sitting through jargon-heavy presentations and nodding along in meetings thinking I 'got' it. And I may even have actually 'got' some of it.

And that is properly dangerous. AI will touch every part of your business - and your customers' and clients' businesses too. What's more, it's changing every month, every week, every day.

In my experience, you only really understand it when you properly take the time to play with it. And that doesn't mean accessing an LLM via a chatbot. It means building something you actually care about. What annoys you outside of work? What hobby could use a simple tool? Use something like Lovable and spend a weekend solving a problem that's been bugging you for years.

But you're so busy - where are you going to find the time? I bet your CTO hasn't even had the chance to vibe code anything and see whether the hype is real.

So how the hell are you and your leaders making the decisions you need to make to keep up? By the time AI insights filter up to you, they're already diluted by well-meaning people who probably don't really understand it themselves - or, just as dangerous, out of date. I guarantee that last week's "don't let your developers vibe code" advice is being turned on its head by new approaches that genuinely work. If you're relying on memos and second-hand reports, you're in trouble.

This isn't something you can let happen without you understanding it. And you can't hope it goes away or becomes old news.

YOU AND YOUR LEADERSHIP TEAM HAVE TO FIND THE TIME. Admit what you don't know. Get your hands dirty. Learn.

If you don't, either someone will sell you something that will destroy your business, you'll make a horrible decision about an implementation, or you'll just miss it all entirely.

If any of this makes sense to you and you fancy a chat about practical ways to get your team properly up to speed, drop me a line. Even if it's just to swap notes about learning on the loo.

Hamish Nicklin

CEO & Co-Founder

Why AI Won't Fix Your Business (Unless You Change How You Work)

November 10, 2025

UK productivity growth stalled in 2008. Despite billions spent on "digital transformation," we're barely more productive than we were fifteen years ago.

Here's the uncomfortable parallel: when factories first adopted electric motors, they didn't see productivity gains either. Why? They kept the same work structures. They replaced steam engines with electric ones but left everything else unchanged.

We've done exactly the same with digital. We still work the way we always worked, more or less. Just with digital tools.

The real challenge isn't technology adoption - it's organisational willingness to fundamentally redesign workflows. Smaller, agile firms are more willing to rethink processes from scratch. Larger corporations struggle with legacy systems and entrenched management layers.

The critical question: are you brave enough to restructure operations comprehensively to realise AI's true potential? Or are you just bolting AI onto existing processes?

Because if it's the latter, you're going to spend a lot of money to stay exactly where you are.

View on LinkedIn →
Hamish Nicklin

CEO & Co-Founder

We Cut the Judgment Layer. Now We Need It Back.

October 19, 2025

When I arrived at Google in 2006, experienced professionals were rare. At 31, I felt "ancient" despite having minimal work experience. That was the culture then - young, experimental, move fast.

Years later, as an executive, I made difficult decisions cutting middle-management roles. People with 15-25 years of experience whose value wasn't "directly billable enough." Financially necessary at the time. But I knew we were losing something.

Institutional judgment. The ability to look at something and know - from experience, not data - whether it was right or wrong.

Now, watching AI-generated content flood social platforms, the consequence is clear.

You'd absolutely kill to have those people back now, wouldn't you? That layer of judgment sitting between the machine and what goes out the door.

The economics haven't improved. But experienced judgment has become demonstrably more valuable. "Cheap energy" from AI cannot replace scarce human wisdom and discernment.

My advice to clients: balance AI implementation with sufficient human judgment. Without enough human judgment and with too much AI - you can really cock things up.

View on LinkedIn →
Hamish Nicklin

CEO & Co-Founder

The Rise of the Haphazards: Three Types of AI User

October 7, 2025

I've noticed three types of AI users emerging in organisations:

The Haphazards are well-meaning, curious people who've started using AI without any real process or permission. They experiment with ChatGPT and Copilot, sometimes succeeding brilliantly, sometimes failing spectacularly. But they're learning what's possible.

The Human Hands are traditional experts whose deep skills and experience are being challenged. Their concern: someone with half their experience plus AI literacy and decent prompts can now match their output.

The Hundies are the rare few who've mastered AI effectively, combining technical fluency with sound judgment about the tool's limitations and capabilities.

Rather than viewing Haphazards as problematic, leaders should recognise them as the messy, chaotic first wave preceding structural normalisation.

My advice: Nurture experimental users with proper tools and education. Reassure experienced professionals that AI augmentation creates value. Develop coherent AI strategies integrated into actual workflows. Create collaborative learning spaces rather than competitive environments.

The future winners won't simply automate fastest. They'll be those who combine curiosity and capability with judgment.

View on LinkedIn →

Want to Discuss These Ideas?

We're always happy to talk about AI, business transformation and what might work for your situation.

Start a Conversation
