How I Built GanderAI in a Weekend — Multi-Agent AI, Security-First, and Zero Busywork
March 15, 2026. I built GanderAI — a full-stack portfolio site with security hardening, real-time bi-directional chat, AI-generated blog content, and an interactive learning platform — in a couple of hours for the first working draft, and refined it over a weekend.
Not by working nights. Not by cutting corners. By using four AI platforms as a delivery team, each doing what it does best.
This is the full story of what was built, how the AI contributed, and what I learned about the real state of multi-agent development in 2026.
The Speed That Made Me Reconsider Everything
The first deployable version of GanderAI — homepage, about page, project showcases, blog system, contact form — took roughly two hours to go from empty directory to live on Vercel.
That number is not an exaggeration. It is the central insight of this entire case study.
Two years ago, a comparable site would have taken me a working week. The design system alone — colour tokens, responsive layouts, typography, dark mode — would have consumed a full day. With AI, that became a 20-minute conversation with Claude about positioning and visual hierarchy, followed by Antigravity implementing the entire Tailwind design system across all components.
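The post does not show the token layer itself, so here is a minimal sketch of what a Tailwind design-token setup of this kind typically looks like. Every name and value below is illustrative, not the actual GanderAI config:

```typescript
// tailwind.config.ts — illustrative token layer (all names/values are assumptions)
const config = {
  darkMode: "class", // dark mode toggled via a class on <html>
  content: ["./app/**/*.{ts,tsx}", "./components/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: {
        // colour tokens referenced by name in components instead of raw hex values
        surface: "#0b1020",
        accent: { DEFAULT: "#7c3aed", soft: "#a78bfa" },
      },
      borderRadius: { card: "1rem" },
    },
  },
};

export default config;
```

The point of tokens like these is that a 20-minute conversation about visual hierarchy translates into a handful of named values, and the agent can then apply them consistently across every component.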
The remaining time was not coding. It was deciding what the site should say, how the navigation should flow, and which projects deserved emphasis. The actual implementation — the files, the routes, the components — was handled by agents.
The Multi-Agent Workflow (Not a Gimmick)
Too much AI discourse still treats this as a novelty. "I used ChatGPT to write my website" is not interesting. What is interesting is orchestrating different AI systems for different cognitive tasks, the same way you would run a delivery team.
Here is how the four platforms contributed:
Claude Opus 4.6 — Strategy and Content Shaping
Claude was the strategic thinker. Before writing a single line of code, I used Claude to work through:
- Positioning: How should a 35-year enterprise architecture career translate into a modern portfolio? What is the proposition?
- Information architecture: What pages exist, what goes on each one, how does the navigation reinforce credibility?
- Content: The About page copy, the project descriptions, the blog voice. All drafted collaboratively with Claude, refined by me, and then handed to Antigravity for implementation.
Claude is not a coding tool. Trying to use it as one misses the point. It is a thinking partner — and that is the most leveraged use of AI in any delivery cycle.
Google Antigravity — The Agentic Builder
Antigravity was the engine room. Powered by Gemini 3 Pro, it operates as a full IDE with multi-agent swarms that can browse files, run terminal commands, control browsers, and verify their own work.
It handled:
- Multi-file implementation: Entire page routes created end-to-end — layout, components, content, metadata, SEO
- Design system build: The full Tailwind design system — glassmorphism cards, gradient tokens, responsive breakpoints, micro-animations
- Codebase refactoring: Restructuring shared content models, updating imports, resolving build failures across files
- Security implementation: The entire contact API security layer (more on this below)
- Feature builds: The Odyssey Kit interactive learning platform, the chat widget, the PDF export system
The difference between Antigravity and a simple autocomplete assistant is that it plans before executing. It creates implementation plans, proposes changes grouped by component, and verifies its own output. That is materially different from prompt-and-paste development.
OpenAI — Targeted Coding Assistance
OpenAI was the second opinion. When I needed to isolate a specific problem — a TypeScript type error, a Framer Motion animation quirk, a Next.js App Router edge case — I would bring it to OpenAI for a focused, fast answer.
This is a practical pattern: one tool drives coordinated change across the codebase, another handles targeted debugging passes. You would not ask your architect to fix a CSS bug. You would not ask your CSS developer to redesign the data model.
Google Stitch — Design Direction
Stitch handled visual exploration. What should the portfolio feel like? How should capability cards render? What layout communicates enterprise credibility while still feeling modern?
The output from Stitch — mockups, component layouts, design direction — fed directly into Antigravity's implementation. This pipeline (Stitch → Antigravity) is faster than any design-to-dev handoff I have seen in two decades of enterprise delivery.
Security Was Not an Afterthought
Too many portfolio sites treat security as optional. Mine was hardened from day one, and I enforced this as a principle throughout the build.
The contact API alone has seven layers of protection:
- Origin/CORS enforcement — Blocks requests missing both `origin` and `referer` headers (curl/Postman scripts). Validates that the requesting hostname matches the deployed site.
- Distributed rate limiting — Uses Upstash Redis for per-IP rate limiting (5 requests/hour) that persists across serverless cold starts. Falls back to in-memory rate limiting when Redis is unavailable.
- Honeypot field — A hidden form field that catches automated bots. If filled, the request is silently discarded (returns success to avoid tipping off the bot).
- Cloudflare Turnstile — Invisible CAPTCHA verification on every submission. No user friction, but blocks automated abuse.
- Strict input validation — Message length capped at 2,000 characters. Email and name fields validated for type and length. No injection vectors.
- Multi-channel dispatch — Messages are routed simultaneously to Email (via Resend), Discord (webhook), and Telegram (bot API). If one channel fails, the others still deliver. I get notified on my phone within seconds.
- Server-side only — No API keys or secrets are ever exposed to the client. All sensitive operations happen server-side in a Next.js API Route.
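To make the layers above concrete, here is a hedged sketch of three of them (honeypot, strict validation, origin enforcement) as pure helper functions. The 2,000-character cap and the "missing both headers" rule come from the post; the function names, the field name `website`, and the 100-character name limit are my assumptions, and the real 216-line route will differ:

```typescript
// Illustrative helpers for three of the contact API's protection layers.
// Names and limits other than the 2,000-char message cap are assumptions.

interface ContactPayload {
  name: string;
  email: string;
  message: string;
  website?: string; // honeypot: real users never see or fill this hidden field
}

const MAX_MESSAGE_LENGTH = 2000;

// Honeypot: a filled hidden field marks the request as an automated bot.
function isHoneypotTripped(payload: ContactPayload): boolean {
  return typeof payload.website === "string" && payload.website.length > 0;
}

// Strict input validation: type checks, length caps, minimal email shape check.
// Returns null when the payload passes every check, else a reason string.
function validatePayload(payload: ContactPayload): string | null {
  if (!payload.name || payload.name.length > 100) return "invalid name";
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(payload.email)) return "invalid email";
  if (!payload.message || payload.message.length > MAX_MESSAGE_LENGTH)
    return "invalid message";
  return null;
}

// Origin enforcement: reject requests missing both Origin and Referer,
// and requests whose hostname does not match the deployed site.
function isAllowedOrigin(
  origin: string | null,
  referer: string | null,
  allowedHost: string
): boolean {
  const header = origin ?? referer;
  if (!header) return false; // curl/Postman-style scripts send neither header
  try {
    return new URL(header).hostname === allowedHost;
  } catch {
    return false; // malformed header value
  }
}
```

A route handler would run these checks in order and, per the honeypot rule above, return a fake success response when the hidden field is filled.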
This is not paranoia. This is how production systems should work. The AI implemented all of it — I described the security requirements and Antigravity built the full 216-line API route with every protection layer in place.
The Chat Widget — Real-Time Bi-Directional Communication
The site includes a floating chat widget (bottom-right corner) that enables genuine two-way conversation between visitors and me — not just a contact form, but a live chat experience.
When a visitor opens the widget, a conversational intro flow collects their name and email naturally before forwarding. Their message is then dispatched simultaneously to:
- Telegram via bot API — delivered to my phone instantly. When I swipe-reply in Telegram, my response routes back to the correct visitor's chat widget within seconds, using session-threaded `[ref:sessionId]` tags to match replies to the right conversation.
- Email via Resend (with full conversation context formatted in HTML)
- Discord via webhook (embedded with name, email, and session fields)
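The session-threading step can be sketched as a small parser. The `[ref:sessionId]` tag format is from the post; the function name and the exact ID character set are my assumptions:

```typescript
// Extract the session ID from the message an admin swipe-replied to in
// Telegram, so the response can be routed back to the correct visitor's
// chat widget. Assumes the quoted text carries a "[ref:<sessionId>]" tag.
function extractSessionId(quotedText: string): string | null {
  const match = quotedText.match(/\[ref:([A-Za-z0-9_-]+)\]/);
  return match ? match[1] : null;
}
```

If no tag is found, the inbound webhook handler can simply drop the reply rather than guess at a conversation.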
The backend is powered by Upstash Redis for persistent session storage. Each visitor gets a unique session ID stored in localStorage, with messages persisted in Redis (48-hour TTL). The widget polls for new admin replies every 4 seconds while open, and shows a notification badge when a reply arrives while the widget is closed.
Security is built into every layer: origin/CORS enforcement, per-IP rate limiting (distributed via Redis with in-memory fallback), honeypot spam trapping, Cloudflare Turnstile invisible CAPTCHA, strict input validation, and Telegram webhook secret validation on the inbound reply path.
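The in-memory fallback mentioned above can be sketched as a fixed-window counter. The 5-requests-per-hour figure is from the post; everything else here (names, the sliding-window-by-filtering approach) is an assumption, and the production path uses Upstash Redis instead:

```typescript
// Illustrative in-memory fallback rate limiter, keyed per IP.
// Mirrors the described policy of 5 requests per hour; the Redis-backed
// path is what actually persists across serverless cold starts.
const WINDOW_MS = 60 * 60 * 1000;
const LIMIT = 5;

const hits = new Map<string, number[]>();

function isRateLimited(ip: string, now: number = Date.now()): boolean {
  // Keep only timestamps still inside the window.
  const recent = (hits.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= LIMIT) {
    hits.set(ip, recent);
    return true; // over the limit: reject this request
  }
  recent.push(now);
  hits.set(ip, recent);
  return false;
}
```

A real serverless deployment cannot rely on this map surviving between invocations, which is exactly why the Redis-backed limiter is the primary path and this is only the fallback.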
The entire system — the conversational intro flow, the session-aware API, the Redis-backed message store, the Telegram webhook handler, and the polling mechanism — was built by Antigravity in a single session.
LLM-Powered Auto-Blogging
The site includes a GitHub Actions workflow that runs every Sunday evening. It:
- Pulls the last week of git commits
- Sends them to the Gemini API with a carefully crafted prompt
- Generates a full MDX blog post summarising the engineering work
- Creates a pull request for review
The key word is review. The posts are not published automatically. They land as PRs with the automated-content label, where I can review, edit, and approve them before they go live.
This ensures the blog stays current without me having to write a weekly update from scratch. The AI drafts, I refine. That is the right division of labour.
The workflow uses Gemini 1.5 Pro as the language model, with a system prompt that focuses on AI engineering, agentic systems, and performance improvements — fitting the site's technical voice.
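The shape of that weekly step can be sketched as two small pure functions: one that folds the week's commit subjects into a prompt, and one that emits MDX frontmatter for the draft. All names and the frontmatter fields are illustrative assumptions; the real workflow and prompt differ:

```typescript
// Illustrative core of the weekly auto-blog step (names are assumptions).

// Fold a week of commit subjects into the prompt sent to the model.
function buildBlogPrompt(commits: string[], weekOf: string): string {
  const log = commits.map((c) => `- ${c}`).join("\n");
  return [
    `Summarise the engineering work for the week of ${weekOf}`,
    "as an MDX blog post focused on AI engineering and agentic systems.",
    "Commits:",
    log,
  ].join("\n");
}

// Frontmatter for the generated draft; the label mirrors the PR label
// used for review so automated posts are easy to spot.
function mdxFrontmatter(title: string, date: string): string {
  return `---\ntitle: "${title}"\ndate: ${date}\nlabel: automated-content\n---`;
}
```

The surrounding GitHub Actions job would run `git log`, call the model with this prompt, write the MDX file, and open the pull request for human review.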
The Neon AI Odyssey Kit — From Lead Magnet to Learning Platform
The latest addition is the most ambitious: a full interactive learning platform at /odyssey that replaced the original generic AI starter kit.
It includes:
- An 8-week roadmap with 18 numbered steps across three expandable phases (Foundations → Playground → Portfolio), each with specific instructions, terminal commands, and tool recommendations
- A resource vault with six curated links — the Python Tutorial, OpenAI Cookbook, Google AI Studio, Hugging Face, LangChain, Fast.ai — each with context on when to use them
- An ethics compass covering bias awareness, data privacy, transparency, and when not to use AI
- A 7-day ignition challenge with expandable daily guides. Day 3 includes a complete working Python chatbot using the Gemini API. Day 6 includes a weather-to-outfit-recommendation script that combines a public API with an LLM.
- Three mini-tutorials (Sentiment Analyser, Pet Photo Classifier, Playlist Roast Bot) with step-by-step build guides, difficulty ratings, and time estimates
- An interactive passport widget with 12 clickable milestones, 3D tilt effects (Framer Motion), ink-splash animations, and localStorage persistence
- Server-side PDF export via `@react-pdf/renderer` — a downloadable A4 document with cosmic dark styling
The entire Odyssey Kit was built in a single evening. Again: the effort was in the content decisions and quality review, not in the implementation. Antigravity built the six components, the API route, the PDF generation, and the homepage integration.
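The passport widget's localStorage persistence can be sketched as a tiny state module. The 12-milestone count is from the post; the state shape and function names are my assumptions:

```typescript
// Illustrative milestone state for the passport widget (names are assumptions).
// Milestone indices run from 0 to 11, matching the 12 clickable milestones.
type PassportState = { completed: number[] };

// Toggle one milestone on or off, returning a new state object.
function toggleMilestone(state: PassportState, index: number): PassportState {
  const completed = state.completed.includes(index)
    ? state.completed.filter((i) => i !== index)
    : [...state.completed, index].sort((a, b) => a - b);
  return { completed };
}

// Serialise for persistence; in the browser this string would be written
// to localStorage under some widget-specific key.
function serialisePassport(state: PassportState): string {
  return JSON.stringify(state);
}
```

Keeping the state immutable like this plays well with React re-renders and makes the localStorage write a single side effect.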
The Tech Stack
For those who want specifics:
- Framework: Next.js 15 (App Router, server components, server-side API routes)
- Styling: Tailwind CSS with custom design tokens, glassmorphism, and gradient system
- Animations: Framer Motion (3D tilt, scroll-triggered reveals, micro-interactions)
- PDF: `@react-pdf/renderer` (server-side generation)
- Chat backend: Resend (email), Discord (webhook), Telegram (bot), Upstash Redis (rate limiting), Cloudflare Turnstile (CAPTCHA)
- Auto-blog: GitHub Actions + Gemini API + `peter-evans/create-pull-request`
- Hosting: Vercel (edge deployment, serverless functions)
- Source: GitHub (private repo)
What I Actually Learned
1. The bottleneck has moved. It is no longer typing or implementation. It is decision-making: what should exist, what should it say, who is it for. AI handles the "how" faster than any human team. The value is in the "what" and "why."
2. Multi-agent beats single-agent every time. Claude for thinking, Antigravity for building, OpenAI for debugging, Stitch for design. Each platform has a cognitive sweet spot. Using one tool for everything produces mediocre results.
3. Security must be a first principle, not a follow-up. If I had not enforced security from the start, the chat widget would have shipped without rate limiting, without CAPTCHA, and without origin validation. AI will build what you ask for — if you do not ask for security, you will not get it.
4. "Good enough" is the enemy of useful. The first version of the Odyssey Kit had beautiful animations and zero substance. No actual code snippets, no real tutorials, no working links. It looked impressive in a screenshot and taught nothing. I caught it, rewrote all the content, and now it has actual step-by-step guides with working Python code. AI is fast at generating polish. It is your job to ensure there is substance underneath.
5. The review step is non-negotiable. Auto-generated blog posts create PRs, not live content. Code changes get build-verified before merge. Every AI output gets human review. This is not a workflow optimisation; it is a quality gate.
The Broader Point
This site is not impressive because it uses AI. Every developer's portfolio uses AI in 2026.
It is useful as a case study because it demonstrates how to use AI well: with structure, with multiple agents, with security built in, with quality gates, and with human judgement directing every decision.
The tools are available to everyone. The difference is in the orchestration.
Want to see the result? You are looking at it.
Interested in applying this workflow to your team? Let's talk →