Is it Foolish to Focus on What AI Still Can't Do?
Aaron on how we should think about letting AI into our writing lives
I sometimes joke that I have a tendency to fall in love with the people I help with their books. I become taken with their research, end up seeing everything around me through their unique lens, and often the collaboration grows into a friendship. A few years ago, I worked with an Australian lawyer and academic named Anthea Roberts on the early stages of a book idea, and this process of professional and intellectual enamorment felt more accelerated than usual. This was partly because we had a fun rapport on Zoom, and also because we had a lot of interests in common—one of which was the mental mechanics behind creative thinking. She talked about writing a future book on metacognition and decision-making that she planned on calling Dragonfly Thinking. She hasn’t written that book, at least not yet. Instead, in 2023 Anthea co-founded an AI startup with that name.
Anthea, her co-founder Miranda Forsyth (also a wonderful former client), and their team are doing all sorts of fascinating things, and I hope to write more about their company in the future, since their focus is on using AI to enhance human decision-making, and ultimately this Substack is about how to make tricky decisions related to the complex impact of AI on our writing lives. But today I want to focus on something that Anthea wrote earlier in the year that has been buzzing annoyingly in the back of my mind ever since, like a fly that you can’t manage to swat or shoo out a window. She publishes an occasional newsletter about AI, and the edition with the line that got in my head was titled “Riding the 100-Foot Wave: Adaptability in the Age of AI.”
Like the Aussie that she is, Anthea employed surfing as a metaphor for adapting one’s career to the AI revolution. (As a surfer myself, albeit a mediocre one, I felt the newsletter was intended specifically for me.) She wrote about a Microsoft engineer named Brian Krabach who, instead of letting AI’s coding abilities relegate him to obsolescence, learned to harness them to work at a higher level and do new things he never could have before. As she described it, his thinking changed from “What have I lost?” to “What can I do now that I couldn’t before?”
Here’s how she summed up Krabach’s experience:
This shift—from seeing AI as a threat to treating it as a powerful collaborator—exemplifies the resilient mindset needed to thrive in our changing landscape. It’s about developing comfort with uncertainty, the ability to read shifting conditions, and the courage to change one’s approach in response.
And a bit further down came the part that’s been on my mind:
Those thriving in this period of technological change aren’t just maintaining relevance—they’re finding ways to expand their impact and creativity in unprecedented ways. They’re asking not “What tasks can I still do that AI can’t?” but rather “What can we accomplish together that neither of us could achieve alone?”
To be clear, when I say that this passage irked me, I mean it in a positive sense: it challenged me in a way that wasn’t easy for me to dismiss. Why? Because a lot of the thinking Lauren and I have been doing (publicly on Substack, and also in private) related to AI’s impact on writing, collaboration, and books has been precisely about what tasks we can still do that AI can’t. Sure, we’re thinking a bit about what we can achieve with AI as a helper, but our main focus has been where we can maintain our relevance in the process of book creation. We’re not preaching dynamic adaptation to create new opportunities. We’re preaching slow-thinking intentionality and the friction of human-centered ethics. And we’re owning that we like the way things have always been when it comes to reading and writing. Which feels a bit like paddling into a massive wave.
Obviously, Anthea is coming at these questions from a very different perspective than literary book-fetishists like us, whose lens is the historical tradition of the written word (though Anthea reads tons of books). She works with governments, companies, and other institutions, where ignoring the potential of AI would be a self-destructively myopic move. But she did make me wonder if holding on to ways of doing things that are slipping away could be hazardously shortsighted for people like us, who are balancing our love of books with our need to make a living for many years to come.
Are Lauren and I, and people who think similarly to us, willfully stubborn idealists blinding ourselves to a creatively enhanced future? Are we the equivalent of Kodak refusing to embrace digital photography and sinking the company? Or are we doing something valuable in trying to deliberately map out boundaries in a warp-speed moment of unprecedented change? And what do I even mean by “valuable”? Economic value, artistic value, ethical value, or something else?
Ultimately, it comes down to each person’s goals and priorities, which means there is no easy general-purpose answer. But I’d argue that a good place to begin thinking through these questions is to carve the issue up into three areas: Money, Likes/Dislikes, and Identity.
Dollar Dollar Bills Y’all
If your goal as a writer is to make as much money as possible—which it probably isn’t, since you, erm, decided to become a writer—then I think Anthea’s call to harness AI to accomplish things you never could have before makes perfect sense. (The same goes if you’ve lost your writing-related job recently and are trying to figure out how to reinvent yourself without completely changing fields.) You could reorient your career around increased speed and scale of writing output, and if you produce content that others need, well, you might be able to make good money. People are already trying one version of this by posting new AI-generated novels daily on Amazon, but there are surely less mercenary and slop-driven ways of fusing your writing career with AI and embracing its potential as a “powerful collaborator.” You could, say, run newsletters for different companies or individuals, interviewing people about the topics you cover and then using Claude to generate first drafts of posts that you then improve. Or you could even launch your own AI startup based on your unique skills (maybe more on this in a future post).
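To make that concrete, here’s what the newsletter workflow might look like at the code level. This is a minimal sketch, assuming you use Anthropic’s Python SDK; the model name, the interview notes, and the prompt are all invented placeholders, not recommendations.

```python
# A minimal sketch of the "interview -> Claude first draft -> human revision" workflow.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set in the
# environment. The model name, notes, and prompt below are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

interview_notes = """
- Guest: hypothetical founder of a kitchen-gadget company
- Key points: supply-chain delays, a new product line, hiring struggles
"""

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whichever model you prefer
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Draft a 500-word newsletter post based on these interview notes:\n"
            f"{interview_notes}\n"
            "Keep the tone conversational. This is a first draft that a human will revise."
        ),
    }],
)

print(message.content[0].text)  # the draft you would then rewrite and fact-check
```

The point of the sketch isn’t the code, which any chatbot could produce for you; it’s that the human work shifts to the interviewing before this step and the improving after it.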
I suspect, however, that few people reading this Substack are looking to become AI startup founders or wholly fuse their writing careers with AI toward a maximization of profit. Instead, I’d wager that most writers who haven’t taken a Peak Ick stance on AI are experimenting with the technology either to shave time off tasks that otherwise eat into their writing time or to enlist it as an intern of sorts to assist them on writing work they’re not truly invested in, like penning corporate website copy. This usage doesn’t have the inspiring ring of Anthea’s “What can we accomplish together that neither of us could achieve alone?” but it is adaptive and subtly resilient: rather than focusing on what AI can’t do that you still can (but for how long?), you’re recognizing where its capabilities match or exceed yours, and doing things more efficiently.
If this approach leads to a bit more money—which could mean regained time you devote to your own writing practice—this seems like a decent outcome. But do you actually like this style of work?
Can You Fall in Like with AI?
I’m not the first person to say this, but I worry that one version of the economy of the future for writers and other creatives is just checking AI-generated content for mistakes or things that can be improved. AI gets a book on, say, managing your anxiety in the age of AI 80% of the way there, and then a human polishes the final 20% to get it across the finish line. The same might go for AI illustrations or AI music. In this dystopian scenario, AI is no longer the intern. We are.
Would you enjoy spending your days playing cleanup crew to a statistical model? Maybe you wouldn’t mind; honestly, it’s probably better than a lot of writing-related jobs in the pre-AI job market. My bigger point is that, however you fuse your career with AI, either by choice or out of necessity, you should try to do it in ways that you don’t hate, and maybe even enjoy, if that’s a possibility. So you may want to start identifying the uses of AI that you like, or at least can tolerate. And also identify the ones you don’t like and want to keep professional distance from.
When Lauren and I started experimenting with AI tools (for example, seeing how good GPT was at critiquing a book proposal), this was primarily out of a sense of survival. We wanted to understand what they could do to see how they might threaten our livelihood. We’ve also tried to understand how they could expedite or enhance our work, although as I mentioned above, we’ve leaned more toward using them to understand what we can still do better than AI, even if Anthea might warn us that we’re choosing to play our violins on the Titanic career-wise. Yet a byproduct of this process, at least for me, has also become discovering where I enjoy using AI.
Admittedly, there are few areas where I don’t get bored or run into my Ick Line or feel I could spend my time better elsewhere. That said, for investigation (research) on topic areas where I’m out of my depth, I do enjoy AI. I find that doing a Deep Research inquiry on something I’m curious about but don’t have the time or know-how to immerse myself in is genuinely engaging and very helpful, and even a little exciting as I wait for the output. The same goes for galaxy-brain ideas I’d have previously disregarded as impossible reveries, which I can now examine more seriously. For example, I produce podcasts for some clients, and I’ve been playing with ChatGPT’s coding capabilities to explore developing a predictive algorithm to better anticipate what engages listeners. This goes beyond investigation; I’m using generative AI (in a domain that isn’t, it should be noted, writing). I never would have gone down a rabbit hole like that before AI because it would have been totally impracticable. Meanwhile, there are other uses of AI I wouldn’t say I actively enjoy but definitely appreciate because they save me time, like searchable transcriptions (preservation) of Zoom calls.
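To give a flavor of that rabbit hole: below is a toy sketch of the kind of listener-engagement model I mean, assuming you’d exported per-episode stats to a CSV. Every column name and feature here is invented for illustration; a real version would need real data and a lot more care.

```python
# A toy sketch of predicting podcast listener engagement from episode features.
# The CSV file and all column names are hypothetical; this shows the shape of
# the idea, not a production model.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

episodes = pd.read_csv("episode_stats.csv")  # hypothetical export from a podcast host

features = episodes[["duration_min", "num_guests", "weekday_released"]]
target = episodes["avg_completion_rate"]  # share of an episode the average listener finishes

X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, random_state=42
)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out episodes:", model.score(X_test, y_test))

# Which features correlate with engagement, under this (very naive) model?
for name, coef in zip(features.columns, model.coef_):
    print(f"{name}: {coef:+.3f}")
```

The striking part isn’t the model, which is deliberately simple; it’s that a non-programmer can now get ChatGPT to scaffold something like this in an afternoon.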
That being said, this is all on the very career-y side of my career. When it comes to my own personal writing, which is all about feeling my mind play on the page, I wouldn’t enjoy having AI involved. (I’m aware we’ve already said things to this effect in other posts, so apologies for repeating myself.) This makes me think that the uniquely bifurcated nature of most creative careers complicates Anthea’s question. As writers, we often have one side of our career that is about money and another that is about self-expression. On the money side, we might actually like responding to Anthea’s call to make AI a collaborator, whereas on the self-expression side we probably won’t want to.
This brings us to the question of identity, which is already complex to begin with for most writers, and could get even peskier in the age of AI.
Is Being a Writer Just an Ongoing Identity Crisis?
Most of us writers have an overlapping set of writerly identities that we’ve improvised into a unicycle juggling act that we call our career.
To make ends meet, you might teach, work in communications, or produce content of different sorts, but you might also write fiction, poetry, or nonfiction. Even though most of us probably aren’t living our dream career—if I told my 22-year-old self that, instead of writing one Great American Novel after another, I spend a lot of my time helping other people write their books, and that I really enjoy this, he would disown me—we’ve hopefully grown comfortable with the built-in competition between our co-existing identities, even if the tension between them can occasionally feel torturous. But what would happen to this delicate interplay of identities if you incorporated AI into your career to the degree that Anthea suggests? Perhaps nothing at all, depending on who you are, but over time, for better or worse, it would likely confuse the elements that have constituted at least part of your writerly identity until now. And this might destabilize your sense of self and purpose.
I have several friends who teach undergraduate writing, and a few are really struggling with what AI has done to their jobs. Part of this is seeing how it’s affecting student effort and learning, but for some another part is knowing that they pretty much have no choice but to bring AI into their teaching practice in some way, and they are understandably down about that. If they become a person who teaches writing with AI, then who are they? Not themselves, is how it feels.
This is partly a question of like/dislike (i.e., hating AI), but I think it’s also about identity. I imagine medical professionals affected by AI, like surgeons and radiologists, might be feeling something similar. Even just writing this Substack about the impact of AI on the writing life—something I didn’t think much about a year ago—fucks with my identity a bit. Like, “When did I become a guy who spends so much time thinking and writing about this strange shit?” Lauren and I felt like we had to better understand AI out of necessity, and even though we’re making the choice not to adapt our careers the way Brian Krabach did (whatever that would look like), it’s forced me to reckon with an emerging new layer in my identity. What would that emotional process feel like if I went full AI to the extent that Anthea suggests? Perhaps it wouldn’t be much different from when I leaned into helping other people write their books and not just writing my own, which itself was a complex shift in my early 30s.
We frequently don’t have full control over our professional identities. AI will clearly exacerbate this.
The two writers I’m aware of who seem to have cracked the tripartite money/like/identity challenge are the screenwriter David Goyer and the journalist and media-startup founder Bradley Hope. I’ll let you look into what these two are doing on your own, but maybe we’ll cover them more in the future. And please share in the comments any people you know of who are merging their writing careers with AI in interesting ways.
Jobs vs. Tasks
It’s worth noting that the question of whether or not to embrace AI adaptation isn’t all or nothing. It’s a matter of degree. Anthea herself used the word “task,” not “job,” and as
recently wrote, a job is a collection of tasks, and AI will do some of these better than humans but never all of them. Each of us will just have to figure out how our financial needs/goals, preferences, and identities fit into this brave new world. Hopefully my heuristic will help you think about AI’s role in the tasks that make up your job or jobs now and in the future, which in turn become the thing you’ll call your career.

In our next post, we’ll be sharing a conversation I had with Anthea about all of this.
Sort of kinda maybe but not really adapting,
Aaron

Gaining insight into current limitations isn’t foolish if you’re using LLMs to help with your writing. You want to understand the “psychology” of your helper in order to make the best use of its strengths. And I have a related question:
In https://www.ian-leslie.com/p/why-are-llms-fixated-on-the-number, the author says that LLMs usually answer “7” when asked for a random number. He explains a possible reason why and then jumps to a much broader claim:
"LLMs often exaggerate patterns that are common, salient, or meme-like, flattening diversity and negating subtlety. Ask it to write verse in the style of Shakespeare and it will ladle on the ‘thous’ and ‘thees’ because those are the salient markers of “Shakespeare” in the data. LLMs are drawn to the loudest cues; to the topline and the cliché."
My question is whether the "cliché" aspect is something you've noticed as a writer in the 2025-era chatbot experiments you described in previous posts.
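For what it’s worth, the “7” claim is easy to test yourself. Here’s a minimal sketch, assuming you have the openai package installed and an API key set; the model name and sample size are arbitrary choices.

```python
# A minimal sketch for testing the "LLMs usually say 7" claim.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name and sample size are arbitrary choices.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
tally = Counter()

for _ in range(50):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # swap in whichever model you want to test
        messages=[{
            "role": "user",
            "content": "Pick a random number between 1 and 10. Reply with only the number.",
        }],
        temperature=1.0,  # default sampling; lower values concentrate answers even more
    )
    tally[resp.choices[0].message.content.strip()] += 1

# If the claim holds, "7" should dominate this distribution.
for number, count in tally.most_common():
    print(number, count)
```

If “7” wins by a landslide, that would at least make the narrow version of the claim easy to believe.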