Illustrated Storytelling with AI on Mac Mini

In an era where AI tools are increasingly cloud-dependent, there's something empowering about running everything on your own hardware: no subscriptions, no data-privacy concerns, and full control over the process. Today, I'm sharing how I built an automated system to create illustrated short stories, complete with narrative text, dialogue, and custom images, using only local software and hardware. The backbone is n8n, an open-source workflow automation tool, combined with local AI models: LLMs like Gemma and Qwen served by Ollama for text generation, and the Flux diffusion model for images.

This setup runs on my home server, triggered automatically at night to generate a fresh sci-fi storybook. It outputs a polished HTML file with 7 illustrated panels. Best of all? It's 100% offline after the initial setup. Let's dive into how it works, walking through the n8n workflow I use node by node.

Why Go Local for AI Storytelling?

Before we get technical, here's the motivation:

  • Privacy and Cost: No sending data to remote servers; everything stays on your machine.
  • Customization: Tailor models and prompts without API limits.
  • Sustainability: Use efficient local models to avoid the energy footprint of cloud AI.
  • Fun Factor: Automate creative output—wake up to a new story every day!

Tools involved:

  • n8n: Orchestrates the workflow.
  • Ollama: Runs local LLMs like Gemma and Qwen for text generation.
  • Flux Model: The local image-generation model "x/flux2-klein:9b", served through a local API endpoint.
  • Local Storage: Files are written directly to disk.

If you're new to this, install n8n (self-hosted) and Ollama, then pull all the models listed in this post.

Hardware-wise, the Mac Mini M4 Pro is well suited to these AI tasks: its powerful integrated GPU handles image generation efficiently, while the CPU handles text generation smoothly.

Workflow Summary: How the Magic Happens

The workflow, titled "Generate Storybook," is a mostly linear automation with a few branches: it starts with a trigger, generates content step by step, and ends with output and tracking. It's designed for 7-panel stories.



1. Triggering the Workflow

  • Schedule Trigger: Fires automatically at 11:00 PM daily. This ensures a "story of the day" without manual intervention.
  • Manual Trigger: For testing—click "Execute Workflow" in n8n to run it on demand.
  • Optional Custom Story Input: If manual, you can set a predefined story prompt (e.g., "First Human colony on Mars discovers hidden caves...").

2. Generating the Story Title

  • SciFi Title Randomizer (Ollama Node): Uses the Qwen 0.8B model to generate one random sci-fi short story title. Prompt: "give me ONE random title for a science fiction short story."
  • Example output: Something like "Echoes of the Void" or whatever the model dreams up.
  • Settings: Temperature 1 for creativity, and keep_alive: 0m so the model is unloaded immediately after the call to free memory.
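
Under the hood, this node amounts to a single POST to Ollama's /api/generate endpoint. Here's a sketch of the request body; the exact Qwen model tag isn't shown in the workflow, so "qwen" below is a placeholder:

```json
{
  "model": "qwen",
  "prompt": "give me ONE random title for a science fiction short story",
  "stream": false,
  "keep_alive": "0m",
  "options": { "temperature": 1 }
}
```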

3. Creating Panel Descriptions and Dialogues

  • Generate Descriptions (Agent Node with Ollama): Takes the title as input and uses Gemma 12B to create a 7-panel script.
  • System Prompt: Instructs the AI to output valid JSON only: a title and an array of 7 panels, each with a panel number, a visual description, and a line of dialogue.
  • Rules: Always 7 panels; no extra text.
  • JSON Parser (Code Node): Cleans the AI output (LLMs often wrap their JSON in markdown fences or add stray commentary) and parses it into structured panel data.
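
The parser step is the kind of defensive cleanup every LLM-to-JSON pipeline needs. A minimal sketch of what such a Code node might look like (an assumed implementation, not the exact node from my workflow):

```javascript
// Sketch of a "JSON Parser" Code node: strip markdown fences the model
// sometimes wraps around its JSON, parse it, and enforce the 7-panel rule.
function parseScript(raw) {
  // Remove ```json ... ``` fences, if present, then parse
  const cleaned = raw.replace(/```(?:json)?/g, "").trim();
  const data = JSON.parse(cleaned);
  if (!Array.isArray(data.panels) || data.panels.length !== 7) {
    throw new Error(
      `expected 7 panels, got ${Array.isArray(data.panels) ? data.panels.length : "none"}`
    );
  }
  return data;
}
```

Throwing on a malformed script makes n8n mark the run as failed instead of silently producing a broken storybook.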

4. Enhancing the Narrative

  • Generate Flowing Story (Agent Node with Ollama): Uses Gemma 12B again to rewrite the script into flowing prose.
  • Input: The 7-panel descriptions and dialogues.
  • System Prompt: Rewrite as exactly 7 paragraphs (one per panel), each 6 sentences long, in a "storybook style" with emotion and transitions. Output: JSON array of 7 strings.
  • Temperature: 0.7 for balanced creativity.
  • Merge Outputs (Merge Node): Combines the parsed script with the flowing story paragraphs.
  • Attach Story to Panels (Code Node): Attaches each paragraph to its corresponding panel. Falls back to the original description if parsing fails. Logs intermediate results for debugging.
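
The attach step is a simple zip of the two arrays. A sketch under assumed field names (panel objects with a description, paragraphs as an array of strings):

```javascript
// Sketch of an "Attach Story to Panels" Code node: pair each flowing-prose
// paragraph with its panel; if a paragraph is missing (e.g. the prose step
// returned fewer than 7 strings), fall back to the panel's original description.
function attachStory(panels, paragraphs) {
  return panels.map((panel, i) => ({
    ...panel,
    story: (Array.isArray(paragraphs) && paragraphs[i]) || panel.description,
  }));
}
```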

5. Preparing for Image Generation

  • Split Out Panels (SplitOut Node): Breaks the 7 panels into individual items for parallel processing.
  • Generate Seed for Panels (Code Node): Creates a single random seed (0 to 2^32-1) for the entire story and attaches it to every panel. This ensures consistent style across images (e.g., same artistic theme).
  • Generate Prompt for Images (Set Node): Builds image prompts: "Consistent theme throughout: Photo realistic of [panel description]. No text, no speech bubbles... purely visual image."
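
The shared-seed trick is worth spelling out: one seed for the whole story, copied onto every panel. A sketch of what that Code node could look like (field names are assumptions; the seed range matches the 0 to 2^32-1 described above):

```javascript
// Sketch of a "Generate Seed for Panels" Code node: draw one random
// 32-bit seed and attach the SAME value to every panel, so all 7 images
// are generated with a consistent artistic style.
function seedPanels(panels) {
  const seed = Math.floor(Math.random() * 2 ** 32); // 0 .. 2^32-1
  return panels.map((panel) => ({ ...panel, seed }));
}
```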

6. Generating Illustrations Locally

  • HTTP Request Node: Sends a POST to the local Ollama API (http://host.docker.internal:11434/api/generate), which serves the image model.
  • Payload: JSON with model ("x/flux2-klein:9b"), prompt, width/height (512x768), steps (10 for quick gen), and the shared seed.
  • Timeout: 1 hour (images can take time on local hardware).
  • Streaming is disabled, so the full response arrives in one piece.
  • base64 Converter (Code Node): Extracts base64-encoded images from the response (handles various formats), cleans them, and prepares binary data for each panel (e.g., panel-01.png).
  • Write Images to Disk (ReadWriteFile Node): Saves PNG files locally (e.g., /files/panel-01.png).
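
Image APIs are inconsistent about where they put the payload, so the converter probes a few likely fields. A sketch with assumed response-field names (image, images, data) and the panel-NN.png naming from above:

```javascript
// Sketch of a "base64 Converter" Code node: find the base64 payload
// wherever the server put it, strip any data-URL prefix and whitespace,
// and name the binary after the panel number (panel-01.png, ...).
function extractImage(response, panelNumber) {
  const b64 =
    response.image ||
    (Array.isArray(response.images) && response.images[0]) ||
    response.data;
  if (!b64) throw new Error("no image data in response");
  const cleaned = b64
    .replace(/^data:image\/\w+;base64,/, "")
    .replace(/\s+/g, "");
  return {
    fileName: `panel-${String(panelNumber).padStart(2, "0")}.png`,
    buffer: Buffer.from(cleaned, "base64"),
  };
}
```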

7. Assembling the Storybook

  • Restore Story Data (Set Node): Reattaches story paragraphs, dialogues, and title to the image data.
  • Generate HTML (Code Node): Builds a beautiful HTML page:
  • Styles: Vintage storybook aesthetic (Georgia font, parchment background, bordered images).
  • Layout: Each panel as a "page" div with image (base64-embedded) and text column (story paragraph + optional dialogue in a styled box).
  • Randomly alternates image left/right for visual interest.
  • Outputs as binary data with a safe filename (e.g., Purr_fect_Pursuit.html).
  • Write HTML to Disk (ReadWriteFile Node): Saves the HTML locally (e.g., /files/index.html).
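
To make the layout step concrete, here's a sketch of the per-panel markup (class names and field names are my assumptions, not the exact node): a base64-embedded image and a text column, with the image randomly placed left or right.

```javascript
// Sketch of the per-panel markup from a "Generate HTML" Code node:
// base64-embedded image plus a text column (story paragraph + optional
// dialogue box), with the image side chosen at random for visual interest.
function renderPanel(panel, imageB64) {
  const img = `<div class="image"><img src="data:image/png;base64,${imageB64}" alt="Panel ${panel.number}"></div>`;
  const dialogue = panel.dialogue
    ? `<div class="dialogue">${panel.dialogue}</div>`
    : "";
  const text = `<div class="text"><p>${panel.story}</p>${dialogue}</div>`;
  const imageLeft = Math.random() < 0.5; // alternate sides randomly
  return `<div class="page">${imageLeft ? img + text : text + img}</div>`;
}
```

Embedding the images as base64 keeps the storybook a single self-contained HTML file, so it can be moved or shared without dragging the PNGs along.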

8. Storybook Examples

The following short stories were written and illustrated 100% by LOCAL AI on my Mac Mini M4 Pro using the workflow described above:


This page contains affiliate links — purchases made through them may earn a commission at no extra cost to you.