Build an AI Content Pipeline with n8n: From RSS to Blog Post
One of our most-used internal workflows at Vorlux AI is the Content Pipeline — an n8n workflow that monitors RSS feeds, summarizes articles with a local LLM, generates blog post drafts, and queues them for review. Here’s how to build your own.

What You’ll Build
Before diving into the setup, here is exactly what the finished pipeline produces every day, without any manual effort:
- Automated monitoring of 3-5 RSS feeds, polling every 30 minutes for fresh AI industry news
- Smart filtering that discards irrelevant articles using keyword matching, so only on-topic content enters your pipeline
- AI-generated summaries of each article in 3 concise bullet points, written for a business audience
- Complete blog post drafts of 300+ words with practical takeaways, ready for human review and light editing
- SEO metadata including suggested title, tags, and meta description for each draft
- Organized review queue in Google Sheets or Notion, with status tracking (Draft / Reviewed / Published)
- Full audit trail showing the original source URL for every generated piece, ensuring proper attribution
In practice, this pipeline generates 5-15 draft blog posts per week depending on how many feeds you monitor and how broad your keywords are. A single human reviewer can process the entire weekly batch in under 30 minutes, turning raw AI output into publish-ready content. That replaces approximately 8-12 hours of manual research, reading, summarizing, and writing per week.
```mermaid
flowchart LR
    RSS["📡 RSS Feeds"] --> FILTER["🔍 Keyword<br/>Filter"]
    FILTER --> OLLAMA["🤖 Ollama<br/>Summarize"]
    OLLAMA --> FORMAT["📝 Format<br/>Draft"]
    FORMAT --> SHEETS["📊 Google Sheets<br/>Review Queue"]
    SHEETS --> PUBLISH["🚀 Publish"]
    style RSS fill:#F5A623,color:#0B1628
    style OLLAMA fill:#059669,color:#FAFAFA
    style PUBLISH fill:#059669,color:#FAFAFA
```
Prerequisites
- n8n installed (self-hosted or cloud)
- Ollama running locally with `llama3.1:8b` or similar
- RSS feed URLs for your target sources
Step 1: RSS Feed Trigger
Create an RSS Feed Trigger node. Add your sources:
- https://news.ycombinator.com/rss (Hacker News)
- https://techcrunch.com/category/artificial-intelligence/feed/ (TechCrunch AI)
- https://feeds.feedburner.com/TheHackersNews (Security news)
Set the polling interval to every 30 minutes.
Step 2: Keyword Filter
Add an IF node to filter articles. Check if the title or description contains your target keywords:
```
{{$json.title.toLowerCase().includes('ai') ||
  $json.title.toLowerCase().includes('llm') ||
  $json.description.toLowerCase().includes('edge computing')}}
```
This ensures you only process relevant articles, saving compute time.
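If your keyword list grows beyond a handful of terms, the same check is easier to maintain in a Code node than in a nested IF expression. A minimal sketch (the keyword list and the `isRelevant` helper are illustrative, not part of the workflow above):

```javascript
// Keywords to match against title and description (illustrative list).
const KEYWORDS = ['ai', 'llm', 'edge computing'];

// Returns true if any keyword appears in the item's title or description.
function isRelevant(item) {
  const haystack = `${item.title ?? ''} ${item.description ?? ''}`.toLowerCase();
  return KEYWORDS.some((kw) => haystack.includes(kw));
}

// In an n8n Code node you would filter the incoming items, e.g.:
// return items.filter((item) => isRelevant(item.json));
```

Adding or removing keywords is then a one-line change, and the same list can be reused by later nodes (for example, as suggested tags).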
Step 3: AI Summarization with Ollama
Add an HTTP Request node pointing to your local Ollama instance:
- URL: `http://localhost:11434/api/generate`
- Method: POST
- Body:

```json
{
  "model": "llama3.1:8b",
  "prompt": "Summarize this article in 3 bullet points for a business audience:\n\nTitle: {{$json.title}}\n\nContent: {{$json.description}}",
  "stream": false
}
```
Since Ollama runs locally, each request costs EUR 0; latency depends on your hardware, so expect a few seconds per summary on a typical consumer machine rather than milliseconds.
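You can also assemble the request body in a Code node before the HTTP Request node, which keeps the prompt template in one place. A sketch (the field names follow Ollama's `/api/generate` schema; the helper names and template wording are just examples):

```javascript
// Builds the JSON body for Ollama's /api/generate endpoint.
function buildSummaryRequest(article) {
  return {
    model: 'llama3.1:8b',
    prompt: [
      'Summarize this article in 3 bullet points for a business audience:',
      '',
      `Title: ${article.title}`,
      '',
      `Content: ${article.description}`,
    ].join('\n'),
    stream: false, // return one JSON object instead of a token stream
  };
}

// Ollama's non-streaming reply looks like { model, response, ... };
// the generated text lives in the `response` field.
function extractSummary(ollamaBody) {
  return ollamaBody.response.trim();
}
```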
Step 4: Blog Post Generation
Add another Ollama call to generate a full blog post draft:
```json
{
  "model": "llama3.1:8b",
  "prompt": "Write a 300-word blog post about this topic for a Spanish SME audience interested in AI deployment. Include practical takeaways.\n\nTopic: {{$json.title}}\nSummary: {{$node['Ollama Summary'].json.response}}",
  "stream": false
}
```
Step 5: Save for Review
Add a Google Sheets or Notion node to save the output:
- Title
- Original URL
- AI Summary (3 bullets)
- Draft blog post
- Suggested tags
- Status: “Draft”
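Before the Google Sheets node, a small Code node can shape each item into the review-queue row described above. A sketch (the column names and the `toReviewRow` helper are assumptions for illustration; match them to your actual sheet):

```javascript
// Maps one processed article onto the review-queue columns.
function toReviewRow(article, summary, draft, tags) {
  return {
    Title: article.title,
    'Original URL': article.link, // preserves the audit trail / attribution
    'AI Summary': summary,
    'Draft Post': draft,
    'Suggested Tags': tags.join(', '),
    Status: 'Draft', // reviewers flip this to Reviewed / Published
  };
}
```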
Download This Workflow
We’ve built a production-ready version of this pipeline that includes error handling, deduplication, and multi-language support.
Download the n8n workflow JSON — import directly into your n8n instance.
Running Costs
| Component | Monthly Cost |
|---|---|
| n8n (self-hosted) | EUR 0 |
| Ollama + Llama 3.1 8B | EUR 0 (local) |
| Hardware (Mac Mini M4) | EUR 5/mo electricity |
| Total | EUR 5/mo |
Compare this to using GPT-4o API for the same workflow: approximately EUR 50-200/month depending on volume.
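To get a rough sense of where a cloud figure like that comes from, you can estimate cost from token volume. A back-of-the-envelope sketch (article counts, token counts, and the per-token price are illustrative assumptions, not current GPT-4o pricing):

```javascript
// Rough monthly cloud-API cost estimate for the same pipeline.
// All numbers passed in are illustrative assumptions.
function estimateMonthlyCost({
  articlesPerDay,
  tokensPerArticle,      // prompt + completion, summary + draft combined
  pricePerMillionTokens, // blended EUR price per 1M tokens
}) {
  const tokensPerMonth = articlesPerDay * 30 * tokensPerArticle;
  return (tokensPerMonth / 1_000_000) * pricePerMillionTokens;
}

// Example: 20 articles/day at ~3,000 tokens each is 1.8M tokens/month;
// at EUR 5 per 1M tokens that is EUR 9/month, and heavier prompts or
// pricier models push the total into the EUR 50-200 range.
```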
Scaling and Quality Assurance
Once your pipeline is running, consider these enhancements to maintain content quality at scale:
- AI Evaluations: n8n’s built-in AI Evaluations feature lets you run test datasets through your workflow and measure output quality scores automatically — essential for catching quality drift before it reaches production.
- Multi-model routing: Use a Code node to route tasks to different models based on complexity. Simple summaries go to Phi-4 (fast), research-heavy content goes to Llama 3.3 70B (deep reasoning).
- Human-in-the-loop: Add an approval step via email or Slack before the publish node. Automation handles 90% of the work; a human validates the final 10%.
- Scheduled runs: Use n8n’s built-in Cron trigger instead of webhooks for daily or weekly content batches. This is more reliable than webhook-based triggers for content pipelines.
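The multi-model routing idea above can live in a single Code node that picks a model per item. A sketch (the length threshold is an arbitrary starting point, and the model tags assume the Ollama library names for Phi-4 and Llama 3.3 70B; tune both for your hardware):

```javascript
// Picks a model based on a crude complexity signal: input length.
function pickModel(article) {
  const text = `${article.title ?? ''} ${article.description ?? ''}`;
  // Short items go to the fast small model; long, research-heavy ones
  // go to the larger model for deeper reasoning.
  return text.length < 1500 ? 'phi4' : 'llama3.3:70b';
}
```

The chosen model name can then feed the `model` field of the Ollama request body via an expression, so one HTTP Request node serves both paths.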
Related reading
- Build a Self-Improving AI Agent with n8n: The Learning Loop Workflow
- n8n Workflow Automation: Complete Guide for AI-Powered Businesses
Related Resources
- 230 Downloadable Workflows — browse our full n8n + ComfyUI library
- ROI Calculator — compare cloud vs local AI costs
- Hardware Catalog — 13 devices for running n8n + Ollama locally
- Edge AI vs Cloud Cost Analysis — detailed cost breakdown
Need help implementing AI workflows in your business? Schedule a free consultation to design a pipeline tailored to your needs.