Large language models (LLMs) and other AI tools can act as accelerators for streamlining research, enhancing narrative and refining material content while preserving the human elements of voice, judgment and storytelling that define meaningful journalism.
Despite the advantages these tools offer, our writers always base their stories on firsthand research and carefully review every AI-assisted output for accuracy, tone and originality before publication.
Blockworks discloses AI involvement in generated content when appropriate, particularly when its contribution is material. Blockworks publishes no AI-generated content without human review.
Phases of publication
We can break our editorial process into four distinct phases where AI may play a supporting role. Each phase carries its own opportunities and guardrails for AI assistance.
Planning and research
Writers begin with a key source or other primary material. They are expected to investigate all source content directly and gain a firsthand understanding of the context and implications therein.
Once foundational research is complete, writers may input relevant background information into an LLM to establish a working knowledge base to assist in the iteration of a given piece. This is an opportunity to pair AI efficiency with human discernment. LLMs are especially useful here for quickly parsing dense materials.
From there, writers have the option to use AI to engage with their research conversationally and construct a story outline. This might include exploring narrative order, flagging gaps in explanation, or working through logic and flow. This interactive method allows us to rapidly test our understanding, refine angles and explore story directions through fluid dialogue with the accumulated material.
At Blockworks, we expect writers and editors to lead with human intuition, identifying what’s relevant and recognizing where either source material or AI output may carry baked-in biases.
Drafting
Once a writer has established their outline, all first drafts are fully human-authored unless otherwise specified. Writers may use AI to check phrasing against their pre-loaded research context, helping confirm factual accuracy or improve structural clarity. They can also query the AI for alternative sentence constructions, but always with the intent to sharpen their own human-written material.
AI can assist with refinement, but it does not write for us.
Our writers are the originators of every story’s language, perspective, and rhythm. AI contributions are always clearly tagged in internal planning documents to maintain transparency and editorial accountability.
Editing
Form follows function. An author’s style and voice are essential, but accuracy and reader understanding come first. That’s why we encourage editors to identify key pockets of important information within a piece and treat those moments as opportunities to maximize information density and clarity.
LLMs can help rephrase or condense segments to improve reader digestion without flattening the broader narrative.
We may use AI to iterate variant copy in specific parts of a story where comprehension is most critical — typically moments that front-load new information, offer crucial context, or tie together a narrative thread. AI suggestions may help editors tighten language, clarify logic, smooth transitions, or flag logical and factual inconsistencies.
The idea is to accelerate the mechanical side of editing so that editors can invest more attention in preserving the author’s voice and narrative flow across the rest of the story; the majority of each piece remains grounded in the writer’s original voice.
Every article receives a final human review to ensure that each piece reflects Blockworks’ standards of journalistic integrity.
Publication
Once an editor has finalized a piece, AI can support the editorial team in adapting it for broader distribution.
We may use AI tooling to iterate headline and subline options swiftly, generate SEO summaries, or draft supporting assets like social copy, newsletter blurbs, or repackaged summaries for syndication.
AI may accelerate the tactical work of packaging a story, but editors remain responsible for approving all outputs to ensure that the tone, framing, and facts remain consistent with the work of the original author.
We may also use AI tools to translate long-form pieces into alternate formats like audio scripts, infographic copy, or social video captions, helping editorial team members adapt content for multiple audiences with limited turnaround.
Additional uses
Blockworks permits the use of AI tools outside the core writing process for tasks like interview preparation, source summarization, technical comprehension, and fact retrieval, so long as all outputs are grounded in firsthand research and verified against primary sources. Tools used across our team commonly include products such as ChatGPT, Grammarly, and Otter AI.
Policy updates and future use cases
Blockworks reserves the right to update this policy over time to reflect its editorial standards, technical capabilities, and ethical considerations. Any future updates will be approved by the team’s Managing Editor, with input from the editorial team to ensure that changes reflect real-world newsroom needs and shared values.
Blockworks encourages team members to experiment with emerging tools that may assist their reporting, analysis, or storytelling. However, they must disclose these proposed use cases to leadership first to ensure alignment with the principles outlined in this policy.
What AI may not be used for
Blockworks enforces clear boundaries on where and how editorial staff may not use AI:
- Bypassing primary research
Writers may not use AI to write about material they have not personally reviewed. All facts, quotes, and source material must be validated directly by writers and editors from trusted, original sources.
- Generating full-length stories or opinions
Writers may not use LLMs to write entire articles, columns, or opinion pieces. Writers must lead the narrative, structure, and tone of every piece.
- Fabricating sources, quotes or data
AI-generated facts, statistics, quotes, or citations are not acceptable. Writers and editors must independently confirm even the most plausible-sounding material.
- Writing on sensitive or high-stakes topics
Writers may not use AI to draft or summarize stories involving legal, political, or reputational risk without elevated human oversight. AI tools should never be used in a way that could expose private or privileged information, including off-the-record remarks, confidential sources, internal documents, or embargoed material.
- Evading attribution or editorial oversight
Writers must not use AI to create content and pass it off as entirely human-written. Significant AI input must be disclosed to editors. Hidden or unacknowledged use of AI is prohibited.
- Bypassing editorial review
No AI-generated content is exempt from human review. There are no AI-only pipelines. Every sentence published by Blockworks must be read and approved by a human editor.