When Not to Use AI: 5 Tasks I Now Do Manually on Purpose

Updated: April 8, 2026
8 min read
I use AI every day. ChatGPT and Claude are open in my browser before my coffee finishes brewing. I've written about how I use them daily and which tools matter most. I'm not anti-AI. I'm not even AI-cautious.

But over the last six months, I've started pulling specific tasks back from AI—on purpose, with full awareness that the model could do them faster. Not because I'm being precious about "craft." Because I noticed that outsourcing them was costing me something I needed to keep.

Here are the five tasks I now refuse to delegate, and the principle that connects them.

A laptop closed on a desk next to a paper notebook and pen

1. The First Draft of Anything I'll Sign My Name To

If my name is going on it—an article, a proposal, a strategy memo, an important email—the first draft is mine. Not a model's. Not even a model's draft that I edit "heavily."

The reason isn't quality. AI first drafts are often technically fine. The reason is what happens to me when I skip the draft.

Writing a first draft is how I find out what I actually think. The act of converting fuzzy mental shapes into specific sentences forces decisions I would otherwise dodge: Do I really believe this? What's the real argument? What's the example? When AI writes the first draft, those decisions get made by something that doesn't have my taste, my context, or my reputation on the line. I'm now an editor of a stranger's opinions instead of an author of mine.

The output may look similar. The thinking is completely different. And the second one is the one I'm actually being paid for.

I use AI heavily after the draft exists—to pressure-test arguments, to spot weak transitions, to suggest cuts. But the first pass is a tax I pay willingly. It's where I figure out what I have to say. Taste only develops if you keep writing the first draft.

2. Decisions That Will Still Matter in Six Months

I ask AI for input on small decisions all the time—what to title an article, which framework to evaluate, how to phrase a tricky email. Those are tactical choices where speed beats deliberation.

But for any decision I'll still be living with in six months—career moves, relationships, money over a few thousand euros, what to take on or turn down—I keep the deliberation manual. I write it out by hand in a decision journal. I list the options, the costs, the things I'm afraid of, the data I'm missing. Slowly. With pauses.

AI can synthesize my situation in thirty seconds and give me a balanced answer. The problem is that I never sat with the tradeoffs. I never felt the pull of one option over the other. I got an answer without doing the thinking, which means I didn't update my judgment—I just borrowed someone else's. The next time I face a similar decision, I'll have to ask the model again. The skill never compounds.

Slow, hand-written deliberation on important decisions is how decision-making becomes a skill instead of a query.

3. Reading Things I Want to Actually Understand

AI summarization is a miracle for low-stakes reading. Industry news, long press releases, articles I'm sampling rather than studying—"give me the gist" works fine.

But for anything I want to genuinely understand—a book that matters, a paper I'll cite, a piece of writing in my field—I read it. The whole thing. Without summary.

Here's what summaries take from you: the texture. The argument's order. The specific sentence that lands. The aside that turns out to be the real thesis. The places the author hesitated and revised. A summary is the conclusion without the journey, and you can't operate on a topic from conclusions alone—you need the supporting structure that summaries flatten.

This is the same principle behind active recall over passive review. Real understanding requires the friction of working through the material yourself. AI summaries feel productive—you "covered" the article in 90 seconds—but you didn't learn anything that will still be there next week. You just got a feeling of having read.

The books on my list of seven only changed me because I read them slowly, not because I summarized them efficiently.

4. Quick Math and Mental Arithmetic

This sounds petty, but it's not. I no longer ask AI (or even a calculator, for small things) for arithmetic that fits in my head.

What's 18% of 240? How many days until June 14? What's the monthly cost if a year is €1,800? My brain can do all of these in under ten seconds. Asking AI takes longer, breaks my flow, and—the actual point—lets a small mental skill atrophy a little more each time.
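
(For reference, the shortcuts are simple: 10% of 240 is 24, so 20% is 48; knock off 2%—that's 4.8—and you get 43.2. And €1,800 a year is €1,800 ÷ 12, or €150 a month.)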

The compound effect is the issue. Every time you outsource a bit of mental arithmetic, your numerical intuition gets a little dimmer. After a year of that, you don't trust your own answer anymore. Pricing decisions, budget reviews, sanity-checking a forecast: all of these get worse when your gut for numbers is dead.

I keep a small set of "never delegate" mental tasks: percentages, basic time math, rough order-of-magnitude estimates, and "does this number look right?" gut checks. They take seconds, they keep the muscle alive, and they pay back in moments where the number you need isn't worth opening another app for.

5. Difficult Conversations and Personal Messages

This is the one I've changed my mind on most strongly. A year ago I was happy to ask AI to draft hard messages—the apology email, the boundary-setting note to a client, the difficult message to a family member. The drafts were polished. The recipients accepted them.

The recipients accepted a stranger's words from me, and I never had to feel what I was actually saying.

That last part is what I missed. The friction of drafting a hard message yourself—deleting three versions, sitting with how it sounds, wondering if you're being unfair—is part of what makes the message accurate. You write yourself into clarity. AI hands you a clean draft built from the average of how people handle situations like yours, which is exactly why it lands as faintly hollow even when it's grammatically perfect.

For hard messages now: I write them myself. I might paste them into Claude after, asking only "is anything in here unkind I didn't notice?" That's a reasonable use—a sanity check from outside. But the words are mine, including the awkward ones, because the awkwardness was the actual content.

The Pattern: Skills That Compound vs. Tasks That Don't

Look at the five tasks. They're not random.

  • First drafts → builds writing and thinking
  • Important decisions → builds judgment
  • Real reading → builds understanding and memory
  • Mental math → builds numerical intuition
  • Hard conversations → builds emotional clarity

Every one is a skill that compounds. Every one gets weaker the more you outsource it. And every one is a skill where the output is less valuable than the process of producing it.

That's the principle. AI is brilliant at tasks where you only want the output: research synthesis, formatting, code boilerplate, translating, scheduling, summarizing things you don't care about. It's a bad bargain for tasks where the output is a byproduct of becoming the kind of person who can produce that output.

This is what people miss when they ask "will AI replace X?" The replacement question is the wrong frame. The real question is which tasks you'd lose something specific by handing over—not productivity, but yourself. AI doesn't replace your job; it replaces how you think—but only if you let it.

How to Decide Which Tasks Are Yours

For anyone trying to draw the line, the test I use:

  1. Does the skill compound? If doing this task makes me better at it (and at related tasks), it's a candidate to keep.
  2. Is the process load-bearing? Some tasks have value only because you went through them—deliberation, drafting, reading. Outsourcing skips the part that mattered.
  3. Will I need this skill in five years? If yes, exercise it. If no, delegate freely.
  4. Would the output be worse if I didn't care about it? If a task only succeeds because you are the one doing it—your taste, your context, your reputation—an LLM can't substitute, only impersonate.

Most tasks fail this test, and AI rightly takes them. A handful pass, and those are the ones worth defending.

The Quiet Trade

Using AI for everything feels like a free upgrade. It isn't. There's a meter running, and the cost shows up in skills that quietly stop working—your draft prose gets blander, your decisions get more borrowed, your reading gets shallower, your gut for numbers goes dull, your messages start to sound like everyone else's.

You don't notice any single instance. You notice it eighteen months in, when you sit down to write something important and realize you've forgotten how.

The five tasks above are mine. Yours might be different—maybe coding without autocomplete for the parts you want to actually understand, maybe sketching by hand before reaching for Figma AI, maybe planning a week without asking the model to structure it for you. The list isn't the point. The principle is.

Use AI for almost everything. But pick the few things that build the person you're trying to become, and do those yourself. On purpose. Even when the model could do them faster.

Especially then.


Written by

MindTrellis

Helping you build better habits, sharper focus, and a growth mindset through practical, actionable guides.
