Six Journalists, Six Different Lines. Where Do You Draw Yours?
Two articles, the same day, about the same taboo. What they reveal about the future of editorial work.
WSJ and WIRED both just published pieces about journalists using AI to write their stories. Same day. Same taboo cracking open.
I’ve been watching this unfold from a specific vantage point: I help newsrooms work with AI, and I’m building a product in this space. So when I read these pieces, I wasn’t surprised by the confession. I was surprised by the architecture.
The AI Red Line?
A year ago, admitting you used AI to write was career poison in most newsrooms. Now it’s a feature story in the Wall Street Journal. And the profiles read like a spectrum. Each journalist landed in a different place on the question of what AI should and shouldn’t do.
Nick Lichtenberg at Fortune prompts AI with headlines and source documents, then rewrites, fact-checks, and adds original reporting. 600+ stories since July. His editor says “more than 50% is Nick.” He also writes features entirely on his own, manages a team of six, and edits others’ work. AI accelerates his output, but he’s doing real editorial work on every piece.
Alex Heath, now independent on Substack, uses Claude as what longtime journalists would call his “rewrite desk.” He dictates ideas through Wispr Flow, an AI agent drafts, and he goes back and forth for up to 30 minutes refining. He still writes parts himself. “I never did this because I liked being a writer,” Heath says. “I like reporting, learning new things, having an edge.”
Jasmine Sun tells Claude it “should never write a sentence for her.” She uses it strictly as an editor - one she’s instructed to focus on developing her voice and taste, and never to be sycophantic. She says Claude makes her work more, not less. “With a human editor, they’re calling you on your bullshit.”
Casey Newton isn’t using AI to write Platformer today, but he’s rethinking his approach. “I actually need to shift the balance,” he says. “I need to do less news analysis and more original reporting.” His logic: if AI is getting good at analysis, his value needs to be in information others can’t get.
Kevin Roose built a team of Claude agents to edit his book - a “Master Editor” with sub-agents for fact-checking, style matching, and feedback. He’s still working with human editors too. And he’s clear-eyed about it: “I am not under some romantic illusion that I possess a special, irreplaceable perspective. But what I am is a person.”
Taylor Lorenz uses AI for business tasks - SEO, data - but won’t touch it for writing or editing. “I am a journalist because I like to help people understand the world and bring light to different issues. I don’t want the AI to do that.”
The two articles:
WIRED: Meet the Tech Reporters Using AI to Help Write and Edit Their Stories by Maxwell Zeff
WSJ: An AI Upheaval Is Coming for Media. This Journalist Is Already All In. by Isabella Simonetti
What they’re actually building
What’s striking: each of these journalists has independently rebuilt a piece of a traditional newsroom.
Think about what a newsroom used to provide. A wire desk that scanned incoming information and flagged what mattered. A rewrite desk that turned raw reports into publishable stories. Copy editors who caught errors and tightened prose. Fact-checkers who verified claims. An assignment editor who decided what to cover and why.
Now map the AI workflows in these articles onto those roles:
Lichtenberg uses Perplexity and NotebookLM as his wire desk and rewrite desk. He feeds them source documents, gets drafts back, then does the editorial work himself.
Heath has Claude Cowork as his rewrite desk, connected to Gmail, Calendar, Granola transcripts, and Notion. He’s built a custom skill with “10 commandments” for writing in his style.
Sun has Claude as her copy editor and developmental editor - one that won’t write for her, only challenge her.
Roose built an entire editorial team: Master Editor, fact-checker, style-matcher, feedback agents.
Newton is experimenting with Sun’s AI editor approach, recreating it with his own articles as the style guide.
They’re all assembling the same thing: editorial infrastructure. They’re just doing it with duct tape - a Perplexity prompt here, a Claude skill there, a Notion integration somewhere else.
The automate / co-pilot / never AI framework
In workshops or conversations about this, I like to ask people to map their tasks into three buckets:
Automate: AI handles it end-to-end. You review the output. Routine monitoring, data aggregation, formatting, first-pass summaries.
Co-pilot: You and AI work together. AI drafts, you rewrite. AI suggests, you decide. AI fact-checks, you verify. The human stays in the loop, but the loop is tighter and faster.
Never AI: Tasks where the human IS the value. Source relationships. Editorial judgment on sensitive stories. Voice. The decision to publish or kill a story. Ethics calls.
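To make the exercise concrete, here’s a toy sketch of the audit I ask people to do - every task name and bucket assignment below is illustrative, not prescriptive:

```python
# The three buckets from the framework above.
BUCKETS = {"automate", "co-pilot", "never-ai"}

# A hypothetical journalist's task map. Your lines will land differently.
tasks = {
    "first-pass summaries": "automate",
    "data aggregation": "automate",
    "drafting from source docs": "co-pilot",
    "fact-check pass": "co-pilot",
    "source relationships": "never-ai",
    "publish/kill decision": "never-ai",
}

def audit(task_map):
    """Count tasks per bucket, and fail loudly on anything unclassified."""
    counts = {bucket: 0 for bucket in BUCKETS}
    for task, bucket in task_map.items():
        if bucket not in BUCKETS:
            raise ValueError(f"Unclassified task: {task!r}")
        counts[bucket] += 1
    return counts

print(audit(tasks))
```

The point isn’t the code. It’s that every task has to land somewhere, and you have to be able to say why it landed there.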
What’s striking about these six journalists is that they each drew those lines differently, and they can all articulate why. Lichtenberg has more in the “automate” bucket. Sun has almost everything in “co-pilot” or “never AI.” Lorenz keeps writing firmly in “never AI.”
None of them is wrong. The framework isn’t prescriptive. It’s a forcing function: you have to decide, for each task, what your role is. And that only works if you understand the technology well enough to know what it can actually do, and if you can articulate what you uniquely bring.
This isn't just a journalism question. Every newsletter writer, podcaster, and YouTuber faces the same three buckets.
More broadly, a recent HBR/KPMG study found the same pattern at scale: 1.4 million prompts, 2,500 employees. The top 5% don’t use AI more. They use it differently. They treat AI as a reasoning partner and delegate complex tasks, not just simple ones. They know what to hand off and what to protect.
From DIY to infrastructure
If you’re a regular reader of this Substack, you know I’m obsessed with how to build systems that go beyond prompt tweaking or one-off experiments. You might be wondering how to try this yourself.
If you want to start building your own system, I wrote a step-by-step guide to setting up your first AI assistant workspace. It’s the foundation for exactly this kind of editorial infrastructure.
At Mizal, we’ve evolved this into what we call the “newsroom in a box.” Not a chatbot that writes articles - we’ve all seen how that goes. Real editorial AI infrastructure that gives everyone the power a team of agents can bring: systems that handle or assist the wire desk, fact-checking, style enforcement, and source monitoring, so people can focus on the work only they can do.
I’ve been genuinely surprised by the results. It’s not perfect, and it can’t do everything from day one, but the shift from one-off conversations to autonomous workflows opens up radically new possibilities (if you want in on what we’re building, grab a spot).
We’re not building this because I think AI should replace journalists or content creation. We’re building it because:
The industry’s biggest problem is resource scarcity.
I’ve watched enough journalists and creators either cobble together their own version with five different tools and a prayer, or struggle with products built for technical people.
If you’re already building AI workflows, drop what’s working (and what isn’t) in the comments. I genuinely want to know.


