Did You Know You Can Automate Your Newsletter with AI? Here’s How We Did It.
Which Model Mastered Our Newsletter Game? We’ll show you the models and methods we used, and which one came out on top.
If you’ve ever managed a newsletter, you know the grind. Sourcing articles, writing summaries, crafting subject lines: it’s a lot. We used to have a standard operating procedure (SOP) that was a million steps long. Now, our process is just one word: “Go.”

How did we do it? We built custom AI agents (a custom GPT, a Gemini Gem, and an equivalent setup in Claude) to automate the entire process. But as anyone working with AI knows, it’s not a “set it and forget it” situation. These tools are constantly changing, and what worked last week might not work today.
That’s why we regularly test our AI agents across different platforms (ChatGPT, Claude, and Gemini) to see which one performs best. We recently ran one of these experiments, and the results were surprising. We’re sharing our findings to show you how different AI models handle the same task and why continuous testing is so important.
The Great AI Newsletter Experiment
Our goal: generate a complete, five-article newsletter. The AI needed to find recent, relevant articles from specific industries, write engaging summaries, craft catchy subject lines, and provide accurate links. We gave our custom GPT on ChatGPT, our Gem on Gemini, and our Claude setup the exact same set of instructions. Then, we just typed “Go.”
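For context, here’s a condensed sketch of what a shared instruction set like that can look like. The wording below is illustrative, not our actual prompt:

```python
# Illustrative only: a condensed stand-in for the kind of SOP-derived
# instructions we load into each agent. Swap in your own industries,
# brand-voice notes, and banned phrases.
NEWSLETTER_INSTRUCTIONS = """
You write our weekly newsletter. When the user says "Go":
1. Find five articles published today in <our industries>.
2. Write a 2-3 sentence summary of each in our brand voice.
3. Include the direct URL to each source article (never a search page).
4. Draft a subject line and preview text.
5. Close with a call to action. Avoid cliches like "game-changer".
"""
```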
Here’s a look at how each platform performed.
Claude: A Trip Back in Time
We started with Claude, testing its different models, including the most powerful one, Opus 4.1. We had high hopes. The output for the subject line and preview text was actually pretty good. The call to action at the end was solid, too.

There was just one massive problem: the articles were old.
One of the articles it found dated all the way back to December 2024, while others were weeks or even months old. For a newsletter that relies on timely content, this was a dealbreaker. No matter how well-written the summaries were, we couldn’t send our subscribers outdated news.
Despite a few promising elements in its writing style, the inability to source current articles meant Claude was out of the running for this specific task.
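If you run this kind of agent through an API instead of the chat window, stale sourcing is cheap to catch automatically. A minimal sketch, assuming you’ve asked the model to return an ISO-8601 publication date for each article (that format is our assumption, not a guarantee):

```python
from datetime import datetime, timedelta, timezone

def is_fresh(published_iso: str, max_age_days: int = 2) -> bool:
    """Reject articles that are stale, or dated in the future (a telltale
    sign of a hallucinated source)."""
    published = datetime.fromisoformat(published_iso)
    if published.tzinfo is None:
        published = published.replace(tzinfo=timezone.utc)
    now = datetime.now(timezone.utc)
    if published > now:
        return False  # future-dated "news" is almost certainly made up
    return now - published <= timedelta(days=max_age_days)

# A December 2024 piece fails any reasonable freshness window today.
print(is_fresh("2024-12-15T00:00:00+00:00"))  # False
```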
Gemini: Fast but Flawed
Next up was Gemini. Right away, we noticed it was significantly faster than the other models, especially when using the 1.5 Flash model. It surfaced a different set of articles and some decent subject lines. The initial results looked promising.
However, Gemini had its own set of issues. The first time we ran it, the AI failed to provide any links to the articles it summarized. This is a recurring problem we’ve seen with Gemini: sometimes it provides direct links, and other times it just gives you a search query.

When we ran the test again using the 1.5 Pro model, which is designed for more complex tasks, it finally gave us links. But they weren’t direct links to the articles. Instead, they led to a search results page. That’s an extra step we don’t want our readers to take.
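This failure mode is also easy to screen for programmatically. A rough sketch (the search-host list is illustrative, and some sites reject HEAD requests, so treat it as a starting point):

```python
from urllib.parse import urlparse

import requests  # third-party: pip install requests

# Hosts that signal a search-results page rather than an article.
# This list is illustrative; extend it for your own sources.
SEARCH_HOSTS = {"www.google.com", "google.com", "www.bing.com", "duckduckgo.com"}

def is_direct_article_link(url: str, timeout: float = 10.0) -> bool:
    parsed = urlparse(url)
    if parsed.netloc.lower() in SEARCH_HOSTS or "/search" in parsed.path:
        return False  # a search page, not the article itself
    try:
        # Some sites reject HEAD; fall back to GET if you see 405s.
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False

print(is_direct_article_link("https://www.google.com/search?q=ai+news"))  # False
```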
While Gemini showed potential with its speed and content generation, its inconsistency with linking made it unreliable for our newsletter workflow.
ChatGPT: The Clear Winner (This Time)
Finally, we tested our custom agent on ChatGPT, using the GPT-5 model with its “thinking” mode activated. This setting allows the model to take more time to generate a more thorough and accurate response. It took about eight minutes to run, which was longer than the others, but the results were well worth the wait.
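If you’re scripting this rather than clicking through the chat interface, the closest analogue we know of is the reasoning-effort setting in the openai SDK’s Responses API. A minimal sketch (the model ID is a placeholder, and parameter support varies by model):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Higher reasoning effort trades speed for thoroughness, which is
# roughly what the "thinking" toggle does in the ChatGPT interface.
resp = client.responses.create(
    model="gpt-5",  # placeholder: use whatever model ID is current
    reasoning={"effort": "high"},
    input="<your newsletter instructions>\n\nGo",
)
print(resp.output_text)
```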
The output was exactly what we were looking for.
- Current and Relevant Articles: It found five articles that were published that same day.
- Accurate Links: Every link led directly to the source article. No extra steps, no search pages.
- Great Copy: The subject lines were engaging, the blog post title was catchy, and the overall tone matched our brand voice. It even avoided the cheesy, overused AI phrases we specifically instructed it to avoid, like “game-changer.”
Interestingly, when another team member ran a separate test on their own custom GPT (using our same instructions), it pulled some of the same top stories. This consistency across different accounts showed that ChatGPT was reliably identifying the most important news of the day.
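If you want to quantify that kind of agreement between runs, a quick way is to normalize the headlines and intersect the two sets. A toy sketch with hypothetical headlines:

```python
def story_overlap(run_a: list[str], run_b: list[str]) -> set[str]:
    """Intersect two runs' headlines after light normalization."""
    norm = lambda title: " ".join(title.lower().split())
    return {norm(t) for t in run_a} & {norm(t) for t in run_b}

# Hypothetical headlines from two separate accounts' runs:
a = ["Fed Holds Rates Steady", "New AI Model Ships", "Retail Sales Dip"]
b = ["fed holds rates steady", "Chip Earnings Beat", "New AI Model Ships"]
print(story_overlap(a, b))  # {'fed holds rates steady', 'new ai model ships'}
```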
Based on this experiment, ChatGPT was the undisputed winner for our newsletter creation task.
Why You Can’t “Set and Forget” AI
This experiment highlights a crucial lesson about working with AI: it’s not always about your instructions. We used the exact same prompt for each model, yet we received wildly different results.
What works perfectly one day might not work the next. A few weeks ago, Gemini was our go-to because it was consistently delivering great results with accurate links. Today, it’s struggling. This isn’t because the AI is “breaking”; it’s because these models are constantly being updated.
This constant evolution can be frustrating for those who want steady, predictable results. But for those of us who enjoy experimenting, it keeps things interesting. You can’t get too attached to one tool. The key is to have multiple options and test them regularly to see which one is performing best at that moment.
Your Next Steps
Automating tasks like a newsletter is no longer an unrealistic dream; it’s a practical reality that can save you countless hours. Our simple “Go” command is the result of testing, tweaking, and understanding the strengths and weaknesses of different AI tools.
If you’re looking to integrate AI into your own workflows, start by experimenting.
- Pick a Repetitive Task: Identify a task you do regularly that could be automated.
- Build a Simple Prompt: Write clear, step-by-step instructions for the AI.
- Test Across Platforms: Run your prompt on different AI models to compare the results, as in the sketch below.
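To make that last step concrete, here’s a minimal cross-platform harness sketch. The three SDKs are real (pip install openai anthropic google-generativeai), but the model IDs are placeholders; check each provider’s docs for current names:

```python
import os

import anthropic
import google.generativeai as genai
from openai import OpenAI

PROMPT = "<your step-by-step instructions>\n\nGo"

def run_chatgpt() -> str:
    client = OpenAI()  # reads OPENAI_API_KEY
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model ID
        messages=[{"role": "user", "content": PROMPT}],
    )
    return resp.choices[0].message.content

def run_claude() -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    resp = client.messages.create(
        model="claude-opus-4-1",  # placeholder model ID
        max_tokens=2048,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return resp.content[0].text

def run_gemini() -> str:
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-pro")  # placeholder model ID
    return model.generate_content(PROMPT).text

for name, fn in [("ChatGPT", run_chatgpt), ("Claude", run_claude), ("Gemini", run_gemini)]:
    print(f"--- {name} ---\n{fn()[:400]}\n")
```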
You might be surprised by what you find. And if you need a hand building your own custom AI agents, you know who to call.

