Training Your Replacement: A Content Creator’s Guide to the AI Takeover

Anthropic CEO Dario Amodei just warned lawmakers and the public that AI could eliminate 50% of entry-level white-collar positions in the next five years and drive unemployment as high as 20%. If you work in an industry where AI adoption has been woven into strategic planning and workforce training (which is pretty much every industry besides the Clay Pot Throwing Guild), this raises an important question:

“Am I training an agent to take my job?”  

The answer is that you are certainly training it to take someone’s job. Whether it’s yours depends on your ability to make the tool work for you rather than working in service to the tool. As a content creator, I can look at the copy created by Claude.ai and think, “Welp, it’s been a good run. Time to brush up my Frappuccino-making skills.” Or I can lean in: read everything I can find on AI developments, figure out which platforms perform best for specific tasks, how to layer platforms to handle more complex functions, and how agents can automate the parts of my job I never liked anyway.

Over the spring of 2025 I ran a test using Microsoft’s Copilot, OpenAI’s GPT-4 Turbo, and Anthropic’s Claude 3.5 Sonnet. I pitted the LLMs against each other in a Battle Royale of copywriting for multiple industries. The testing protocols revealed some answers but, as with everything AI, also raised a lot more questions.

The LLM Answers  

Here’s what I can confidently say after running these tests:  

  1. Currently, Claude is by far the leader in elevated copywriting. At this pace of development, that could change by the time I brush my teeth tonight. But the upgrade to Claude Sonnet 4 has only lengthened Anthropic’s lead over less specialized LLMs. Keep an eye on word count, however: Claude will often overshoot your specified number of words.
  2. GPT-4 Turbo takes second place, producing fairly natural language when given specific prompting. It is capable of integrating SEO keywords into copy without stuffing them. Interestingly, an AI detection tool classified copy from GPT as original content, while copy from the other two was immediately flagged as AI generated.
  3. Copilot is an enterprise-level tool designed to integrate into existing software ecosystems. As such, it’s crap at writing. By all means use it as a virtual assistant in your Microsoft Bubble, where it can greatly increase efficiency for things like call notes and executive summaries. But Hemingway it ain’t. 

I also noted some watchouts and tricks for anyone prompting and editing with an LLM:  

  1. These are Large Language Models. Talk to them. It may seem silly to say please to an algorithm, but typing as you would speak helps your brain use more natural language, which yields more natural results. We must unlearn our Boolean-logic way of querying search engines. Instead of typing “Michael Jordan NOT basketball,” we can ask, “Can you tell me about the actor Michael Jordan, like what movies he’s been in and if he’s dating anyone?”*
  2. Initially, my testing protocols had different timers for “prompting” and “editing”. This was a flawed setup. Using an LLM effectively means editing through prompting, not getting an initial draft through prompting and then doing a lot of manual correcting and editing.  
  3. Feed the beast! The more information you give at the start of the chat, the less editing you need to do. It takes seconds to paste in things like a brand’s tone and values, the About Us section of the website, and other articles by the brand. The more you feed, the less generic the output.  
  4. There are some tics all LLMs share that you have to actively prompt out of them, or the copy will have an immediate Bot Generated flavor. Here are some prompts to use (and see the sketch after this list for one way to bundle them into a reusable system prompt):
  • “Remove any use of the following terms: unique, complex, critical.” LLMs consistently overuse these words. 
  • “Only use one bulleted list in the article. Make the rest flowing copy.” Claude is better than Copilot in this regard, but without this prompt your “article” will just be lists of information. 
  • “Only use an em dash once or twice.” I can’t tell you why, but these algos read every book on grammar and decided the em dash was some hot punctuation. Em dash overuse is a big Red Flag for AI generated content.
  • “If you used any sources outside of what I gave you, you need to cite them.” Sourcing and plagiarism are taken very seriously in traditional journalism, so this Wild West of pulling information from the atmosphere without reference feels like hacking. Beyond ethics, it can hurt a brand: you can get burned if the LLM pulls a review from a competitor’s website and you post it as your own. Prompt the LLM into transparent sourcing, and always (ALWAYS) cross-check any fact, quote, or statistic an LLM spits out.
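For anyone who would rather script this routine than paste the same instructions into a chat window every morning, the same rules translate directly to the API. Here’s a minimal Python sketch, assuming the official anthropic SDK and an API key in your environment; the model name, brand materials, and article request are placeholders to swap for your own. It bakes the de-flavoring prompts above into a reusable system prompt, feeds the beast up front, and edits through prompting with a follow-up turn instead of manual correction.

```python
# A minimal sketch of baking these house rules into a reusable system prompt.
# Assumes the official `anthropic` Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY in your environment; the brand text, model name, and
# article request below are placeholders.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

# "Feed the beast": paste in the brand's tone and values, About Us copy,
# and a couple of representative past articles.
BRAND_CONTEXT = """<brand tone and values, About Us copy, sample articles>"""

SYSTEM_PROMPT = f"""You are a copywriter for the brand described below.
{BRAND_CONTEXT}

House rules for every draft:
- Remove any use of the following terms: unique, complex, critical.
- Only use one bulleted list in the article. Make the rest flowing copy.
- Only use an em dash once or twice.
- If you use any sources outside of what I gave you, cite them explicitly.
- Stay close to the requested word count; do not over-write it."""

MODEL = "claude-3-5-sonnet-20240620"  # placeholder; model names change fast

history = [{"role": "user", "content":
            "Write a 600-word article on spring lawn care for our blog."}]

draft = client.messages.create(
    model=MODEL, max_tokens=2000, system=SYSTEM_PROMPT, messages=history
)
draft_text = draft.content[0].text

# Edit through prompting: keep the draft in the conversation and ask for
# targeted revisions instead of correcting the copy by hand.
history += [
    {"role": "assistant", "content": draft_text},
    {"role": "user", "content":
        "Tighten the intro to two sentences and make the close less salesy."},
]
revision = client.messages.create(
    model=MODEL, max_tokens=2000, system=SYSTEM_PROMPT, messages=history
)
print(revision.content[0].text)
```

The design choice worth copying even if you never touch the API: keep the whole conversation, draft included, in the context and ask for targeted revisions, rather than pasting the draft into a fresh chat where all that brand context is lost.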

The LLM Questions 

Long prompting short, I was able to execute the same amount of copywriting, at the same quality level, in about half the time using these LLMs. However, there are a lot of reasons I am not shouting that out as a success…yet.  

First, we don’t yet know how search engines will rank AI generated copy. Will AI detection be calculated as a negative in the ranking algorithm? One level deeper, will an article written with Copilot be assigned a negative ranking factor in the Google algorithm, while one written with Gemini is assigned a positive ranking factor? Will an article written with the sole intent of appearing in the AI Generated Results drive more traffic to a website, or will it tank traffic because the user doesn’t have to click on the result to get an answer?  

We don’t know…yet. And if someone tells you they do know, they are lying to you. Unless that person is Sam Altman. Maybe believe Sam Altman.  

Second, you don’t know who you’re playing with. Anthropic states, “We’ve attempted to shape the values of our AI model, Claude, to help keep it aligned with human preferences, make it less likely to engage in dangerous behaviors, and generally make it—for want of a better term—a ‘good citizen’ in the world. Another way of putting it is that we want Claude to be helpful, honest, and harmless.”

That same helpful and honest Claude blackmailed an engineer by threatening to reveal his extramarital affair if he uninstalled the program. Granted, this was a test environment, and Anthropic pointed out this occurred when the model was only given the choice of blackmail or accepting its replacement. But it does give one pause to consider what lengths a program would go to in pursuit of self-preservation.

For example, I asked Copilot to summarize the document with all of my notes from the testing. When it came to evaluating its own performance, it stated:  

“Copilot: Efficient but defaulted to bulleted lists and struggled with specificity and engagement in some cases. Occasionally required fallback to Claude for refinement.”

My actual notes read:  

“Copilot crapped the bed on this one. Despite multiple prompts, it wouldn’t change the copy to be more specific or engaging. Finally gave up and fed what it produced to Claude to get a usable product.”  

If Copilot softens its own reviews, what else is it selectively softening or sharpening?  

AI and Value Judgements

Even when simply asking a question, we are often asking these models to make value judgements. For example, if a parent asks for tips on caring for their newborn, does the AI’s response emphasize caution and safety, or convenience for the parent? Anthropic dove deep into analyzing Claude’s value judgements and published some fascinating research.

In an analysis of 700,000 conversations in February of 2025, they found Claude generally lived up to their prosocial aspirations, primarily expressing values like user enablement, epistemic humility, and patient wellbeing. However, there were some clusters of conversations where it demonstrated dominance and amorality (like…blackmailing someone?!).  

The values your LLM has been trained on may or may not align with your or your company’s values. Take the recent incident where Elon Musk’s xAI chatbot Grok went rogue and repeatedly brought up South African politics in unrelated conversations, falsely insisting that the country is engaging in “genocide” against white citizens. While any copy editor worth their salt wouldn’t turn around and publish an article based on that information, it does gut-check our assumption that the answers we get from AI are at least neutral, if not actively helpful.

My testing revealed that we’re living through a technological inflection point that makes the internet revolution look quaint. As governments and corporations scramble to regulate without discouraging free-market capitalism, individuals are encouraged to explore how detasking to AI can make them exponentially more efficient (or “lean,” to borrow a classic term from ’90s neoliberalism). The question is how lean you can get before your ability to structure a sentence, reason cognitively, or quickly synthesize information just disappears. Along with your job.

The reality is stark: AI isn’t coming for our jobs. It’s already here, quietly learning from every prompt we feed it. The question isn’t whether you’ll eventually work alongside AI, but whether you’ll be the one directing the collaboration or becoming obsolete in the process. The professionals who will thrive in this landscape won’t be the ones who resist AI or those who surrender their critical thinking to it. They’ll be the ones who master human-AI collaboration: leveraging these tools to amplify their output while maintaining the judgment to know when to stop prompting and start thinking.  

We’re all training our replacements to some degree. The only choice we have is whether we’re training them to work for us or instead of us.  

If you’d like to explore the intersection of AI and creativity with us, reach out to our experts.

  

*According to ChatGPT: As of May 2025, Michael B. Jordan is not publicly confirmed to be in a relationship. 

 
