
How I Used AI to Build Ginbok.com — From Idea to Production in My Spare Time

By Ginbok · 12 min read

Ginbok.com started as a side project I spun up in my spare time — a personal blog powered by Optimizely CMS 12 on the backend and Next.js 14 on the frontend. What made this build different from my previous projects wasn't the tech stack — it was how heavily I leaned on AI throughout the entire development process. This post is an honest account of what that actually looked like.

The Stack at a Glance

  • Backend: Optimizely CMS 12 (ASP.NET Core 6, C#)
  • Frontend: Next.js 14 with App Router, deployed on Vercel
  • AI Layer: Google Gemini API for content generation and enrichment
  • Integration: ginbok-mcp — a custom MCP server letting AI agents talk directly to the CMS
  • DevOps: Azure DevOps for repos, pipelines, and work items
[Figure: ginbok.com system architecture — Cursor + Claude (AI dev tools) → ginbok-mcp (stdio transport, PAT auth; publish-post, list-posts, get-post, update-post, create-post) → Optimizely CMS 12 (ASP.NET Core 6, C#, IIS, SQL Server; BlogGenerator, EN/VI branches, McpApiController, IndexNow + Google Indexing API) → REST API → Next.js 14 on Vercel (App Router, i18n, SSR, SEO). Gemini API handles content enrichment, SEO, and VI localisation; Azure Repos + Pipelines handle push → build → deploy.]

Building the Next.js Frontend with AI

The frontend was where AI saved me the most time. I used Cursor (with Claude 3.5 Sonnet and later Gemini) as my primary editor, with AI autocomplete enabled for essentially everything.

Specific things AI handled well:

  • Component scaffolding: Described a broadsheet-style blog card grid, got working React skeletons in seconds.
  • CSS module generation: I don't use Tailwind — AI generated well-structured vanilla CSS modules I then tuned.
  • i18n routing: Setting up [locale] path structure in App Router with next-intl middleware — AI mapped the correct file structure on the first attempt.
  • SEO metadata: A reusable generateMetadata() helper with canonical URLs, hreflang alternates, and OG tags — one prompt, 20 minutes of review.
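The metadata helper mentioned above can be sketched roughly like this — an illustrative TypeScript version, not the actual ginbok.com code (the locale list, `PostMeta` shape, and `buildPostMetadata` name are assumptions; the real helper plugs into Next.js `generateMetadata()`):

```typescript
// Illustrative sketch of a reusable metadata builder for a bilingual
// Next.js App Router blog. Names and shapes are assumptions, not the
// actual ginbok.com implementation.
const LOCALES = ["en", "vi"] as const;
type Locale = (typeof LOCALES)[number];

interface PostMeta {
  title: string;
  description: string;
  slug: string;
}

// Build canonical URL + hreflang alternates + OG tags for one post.
function buildPostMetadata(
  post: PostMeta,
  locale: Locale,
  baseUrl = "https://ginbok.com"
) {
  const canonical = `${baseUrl}/${locale}/blog/${post.slug}`;
  // One hreflang alternate per supported locale.
  const languages = Object.fromEntries(
    LOCALES.map((l) => [l, `${baseUrl}/${l}/blog/${post.slug}`])
  );
  return {
    title: post.title,
    description: post.description,
    alternates: { canonical, languages },
    openGraph: {
      title: post.title,
      description: post.description,
      url: canonical,
      type: "article",
    },
  };
}
```

A page's `generateMetadata()` would then just look up the post and return this object.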

Where I had to step in: performance. AI-generated components often caused Core Web Vitals issues — layout shift from unoptimised images, unnecessary client-side state. The review step caught all of these.

Optimizely CMS 12 Backend

Optimizely is where generic AI knowledge starts to break down. The platform has its own content repository patterns and language branch APIs that LLMs don't have deep training data on.

Where AI struggled

Page type definitions required constant correction — AI misused XhtmlString vs string, and suggested IContentRepository method overloads that don't exist. I always verify against the actual SDK.

Where AI genuinely helped

The logic layers. Service classes, LINQ queries, slug generation, and tag normalisation helpers are all pure C# — and AI handles those well. Here's the actual GetUniqueSlug() from McpApiController.cs — AI drafted this, I reviewed:

private string GetUniqueSlug(string title, ContentReference parentLink, LanguageSelector language)
{
    var baseSlug = Slugify(title);
    if (string.IsNullOrEmpty(baseSlug)) baseSlug = "post";

    var existingSlugs = _contentRepository
        .GetChildren<BlogDetailPage>(parentLink, language)
        .Select(p => p.URLSegment)
        .Where(s => !string.IsNullOrEmpty(s))
        .ToHashSet(StringComparer.OrdinalIgnoreCase);

    if (!existingSlugs.Contains(baseSlug))
        return baseSlug;

    var counter = 2;
    while (existingSlugs.Contains($"{baseSlug}-{counter}"))
        counter++;

    return $"{baseSlug}-{counter}";
}
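The helper above calls a `Slugify()` that isn't shown. The typical shape of such a function is roughly this — a TypeScript sketch for illustration only, not the actual C# method from McpApiController:

```typescript
// Illustrative slugify: lowercase, strip diacritics, collapse everything
// that isn't a-z/0-9 into hyphens, trim edge hyphens. An assumption about
// the real Slugify(), not its actual implementation.
function slugify(title: string): string {
  return title
    .normalize("NFD")                  // split accented chars into base + combining mark
    .replace(/[\u0300-\u036f]/g, "")   // drop the combining marks
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")       // collapse runs of other chars to a single "-"
    .replace(/^-+|-+$/g, "");          // trim leading/trailing hyphens
}
```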

BlogMaintenanceJob — One Job to Rule Them All

I originally had two separate scheduled jobs — BlogAuditAndEnrichmentJob and BlogSeoEnrichmentJob — that handled different concerns independently. Over time the overlap grew: both touched the same posts, both made Gemini API calls, and running them separately meant double the rate-limit waits and duplicate CMS saves on the same content.

I consolidated them into a single BlogMaintenanceJob that runs four phases in one pass:

  • Phase 1 — Localize + Enrich (single pass per post): For each BlogDetailPage, the job checks whether the VI language branch exists. If not, it calls Gemini to localise the English body into Vietnamese and creates the branch. In the same pass, it checks for missing SEO fields (MetaTitle, MetaDescription, MetaKeywords, Category, Tags) and fills them with a single GenerateSeoAsync() call — avoiding a second AI round-trip on the same post.
  • Phase 2 — Cleanup sweep: Iterates all posts and normalises Category/Tags formatting on both EN and VI branches using HashtagHelper.
  • Phase 3 — Taxonomy sanitization: Walks the CategoryRepository under "Blog Categories" and "Blog Tags" roots, renames dirty items in place, and deletes duplicates where a clean-named version already exists.
  • Phase 4 — Author normalisation: Ensures every post on every language branch has Author = "Ginbok" — a detail that was inconsistent before.

The orchestration logic was a genuine collaboration with AI. I designed the phase structure and the data contract (what each phase reads and writes), and AI implemented the per-phase loops, the WaitForRateLimit() helper (8-second spacing between Gemini calls), and the HTML + CSV report builder that saves a timestamped file to CMS Global Assets after every run.
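The core idea behind a `WaitForRateLimit()`-style helper is just minimum spacing between calls. Here's an illustrative TypeScript sketch of that spacing logic (the real helper is C# inside the scheduled job; the class name and injectable clock are assumptions):

```typescript
// Illustrative minimum-gap call spacer. The 8000 ms gap matches the
// article's 8-second Gemini spacing; everything else is an assumption.
class CallSpacer {
  private lastCallAt = 0;

  // Inject a clock so the logic is testable without real sleeping.
  constructor(
    private minGapMs = 8000,
    private now: () => number = Date.now
  ) {}

  // How long the caller must wait before the next call is allowed.
  requiredDelayMs(): number {
    const elapsed = this.now() - this.lastCallAt;
    return Math.max(0, this.minGapMs - elapsed);
  }

  // Record that a call was just made.
  markCall(): void {
    this.lastCallAt = this.now();
  }
}
```

The job would call `requiredDelayMs()`, sleep that long, then `markCall()` before each Gemini request.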

The consolidation reduced average job runtime by roughly 40% — fewer total AI calls, fewer CMS saves, and one HTML report instead of two.

The MCP Layer — An Experiment That Worked

The most experimental part was building ginbok-mcp — a TypeScript MCP server that exposes the CMS as callable tools. This lets me say "publish a blog post about X" in a chat interface and have it create bilingual EN/VI content and push it to Optimizely — no CMS admin UI needed.

Here's how a tool definition looks in the MCP server — AI drafted the schema, I designed the API contract:

export const publishPostSchema = z.object({
    TitleEn: z.string().describe("English title. Required."),
    TitleVi: z.string().optional().describe(
        "Vietnamese title. RECOMMENDED — if omitted, the VI branch won't be created."
    ),
    BodyEn: z.string().describe("English HTML body. Required."),
    BodyVi: z.string().optional(),
    Category: z.string().optional(),
    Tags: z.string().optional(),
    Author: z.string().optional().describe("Defaults to 'MCP Agent'"),
});

export async function publishPost(input: PublishPostInput): Promise<string> {
    const response = await apiPost("/api/mcp/posts/publish", input);
    return response.data?.success
        ? `✅ Published: ${input.TitleEn} (ID: ${response.data.postId})`
        : `❌ Failed: ${response.error}`;
}

The HTTP client used PAT authentication — reading GINBOK_PAT from environment variables and attaching it as a Bearer token on every request. Simple, but it keeps the MCP server stateless and composable.
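That pattern can be sketched like this — an illustrative implementation of an `apiPost`-style helper, not the actual ginbok-mcp code (the base URL and error handling are assumptions):

```typescript
// Illustrative PAT-authenticated client: read GINBOK_PAT from the
// environment and attach it as a Bearer token on every request.
function buildAuthHeaders(
  env: Record<string, string | undefined> = process.env
): Record<string, string> {
  const pat = env.GINBOK_PAT;
  if (!pat) throw new Error("GINBOK_PAT is not set");
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${pat}`,
  };
}

// Stateless POST helper — no sessions, no cookies, just the token.
async function apiPost(path: string, body: unknown, baseUrl = "https://ginbok.com") {
  const res = await fetch(`${baseUrl}${path}`, {
    method: "POST",
    headers: buildAuthHeaders(),
    body: JSON.stringify(body),
  });
  return res.json();
}
```

Keeping the token out of the code and the client free of state is what makes the MCP server composable: any process with the env var set can drive the CMS.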

MCP Tools That Changed My Dev Workflow

Beyond the custom ginbok-mcp, I integrated two third-party MCP servers into my daily development workflow — and both made a noticeable difference to how efficiently I could work with AI.

DBHub MCP — AI That Understands Your Database

One of the biggest friction points when using AI for backend work is that the AI has no idea what your database actually looks like. You end up pasting schema snippets into the chat, describing table relationships, or getting suggestions that don't match your real data model.

I solved this by connecting DBHub MCP to the ginbok.com SQL Server instance. DBHub exposes the database as a set of MCP tools — the AI can query table structures, inspect column types, run exploratory SELECTs, and read actual data samples directly from the chat interface.

The practical impact:

  • Debugging became faster. Instead of copy-pasting a stack trace and manually explaining the schema, I just say "check the DB and debug this error" — the AI reads the schema itself and gives a grounded answer.
  • AI-generated queries were actually correct. When AI can see the real column names and types, the generated SQL matches the actual schema on the first attempt.
  • Onboarding context is instant. At the start of a session, instead of re-explaining the data model, I let the AI read it directly.

Azure DevOps MCP — Closing the Loop Between Chat and Work Items

The second workflow improvement was connecting Azure DevOps MCP to my Cursor workspace. This lets the AI create, read, and update work items directly from the chat interface — no tab switching to the DevOps portal.

How I actually use it:

  • Work item creation from conversation. When I identify a bug or feature during a coding session, I describe it to the AI and it creates a properly structured work item — title, description, acceptance criteria, area path, tags — without me leaving the editor.
  • Clear measurement. Every feature and bug fix gets a work item. Development history is traceable: I can look back at Azure Boards and see what was built, when, and why.
  • AI curriculum planning. I've used Azure DevOps MCP to map out learning curricula as structured epics and issues — breaking down AI agent learning paths into deliverables tied to ginbok.com content goals, with the AI creating the full epic → feature → task hierarchy in one session.

The Combined Effect

DBHub MCP and Azure DevOps MCP address two different failure modes of AI-assisted development:

  • DBHub solves the context gap — the AI not knowing your system well enough to give accurate answers.
  • Azure DevOps MCP solves the traceability gap — work happening in AI chat sessions that never gets formally logged.

Together, they make the AI feel less like an isolated chat tool and more like a development environment that actually knows your project.

The Real Workflow: Idea → Spec → AI Draft → Review → Deploy

  1. Idea: Identify a feature or problem (e.g. "posts need canonical URLs")
  2. Spec: Write a short description of behaviour, edge cases, files involved
  3. AI Draft: Prompt Cursor/Claude/Gemini. First pass is usually 70–80% correct
  4. Review: Read every line. Fix the 20–30% that's wrong — type safety, platform-specific patterns, edge cases
  5. Deploy: Push to Azure Repos → pipeline → IIS/Vercel

The spec step is non-negotiable. The quality of AI output is directly proportional to the quality of the prompt. Vague prompt → rewrite. Precise prompt → review only.

AI Suggestions I Took vs. Decisions I Made Myself

AI suggestions I kept

  • Using next-intl for i18n routing in Next.js 14 App Router
  • Sequential slug suffixes (-2, -3) over timestamp suffixes — cleaner URLs
  • HashSet-based duplicate slug detection (shown above)
  • IndexNow + Google Indexing API on publish, with retry mechanism
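The IndexNow-on-publish idea above can be sketched as a payload builder plus a simple retry loop. The payload fields follow the public IndexNow protocol; the retry policy and function names here are illustrative, not the actual ginbok.com code:

```typescript
// Illustrative IndexNow submission with exponential-backoff retry.
interface IndexNowPayload {
  host: string;
  key: string;      // the site's IndexNow verification key
  urlList: string[];
}

function buildIndexNowPayload(
  urls: string[],
  key: string,
  host = "ginbok.com"
): IndexNowPayload {
  return { host, key, urlList: urls };
}

async function submitWithRetry(payload: IndexNowPayload, attempts = 3): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch("https://api.indexnow.org/indexnow", {
        method: "POST",
        headers: { "Content-Type": "application/json; charset=utf-8" },
        body: JSON.stringify(payload),
      });
      if (res.ok) return true;
    } catch {
      // network error — fall through and retry
    }
    await new Promise((r) => setTimeout(r, 1000 * 2 ** i)); // 1 s, 2 s, 4 s…
  }
  return false;
}
```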

Decisions I made myself

  • Optimizely as headless CMS — AI suggested Contentful or Sanity. I chose Optimizely to deepen professional expertise.
  • MCP over plain REST for AI integration — AI didn't suggest this; I researched MCP independently.
  • No Tailwind CSS — AI always defaults to Tailwind. I override this every time.
  • Consolidating two jobs into one BlogMaintenanceJob — AI didn't flag the redundancy; I noticed the overlap and designed the four-phase structure myself.
  • 8-second rate limit per Gemini call — based on observed API quota behaviour, not something AI could know.

The Honest Results

Time saved: ~40–50%

Biggest savings: repetitive structural code (controllers, DTOs, utility helpers), CSS scaffolding, documentation. Smallest savings: Optimizely-specific patterns and anything requiring production judgment.

Where it slowed me down

AI hallucinations on platform-specific APIs. It confidently suggested methods that don't exist in the Episerver SDK. The lesson: trust AI for general C#/.NET patterns, but verify everything against real platform docs when working with specialist frameworks.

What surprised me

The MCP workflow. Being able to say "write and publish a blog post about X" in a chat and have it appear on ginbok.com — bilingual, SEO metadata included, properly tagged — is smoother than I expected going in. And connecting DBHub + Azure DevOps MCP on top of that made the whole system feel genuinely integrated rather than a collection of isolated tools.

Closing Thoughts

AI didn't build ginbok.com — I did. But it meaningfully changed how I worked on it. The best mental model: a very fast, somewhat unreliable junior developer. It drafts quickly, follows patterns well, excels at boilerplate. But you're the one who understands the full system, the constraints, the platform quirks. The review step is non-negotiable.

If you're thinking about incorporating AI into a side project: start where you're most confident you can review the output. Use it as an accelerator on territory you know, not a replacement for knowledge you don't have yet. And look seriously at MCP servers — connecting AI to your actual database and project management tools changes the quality of assistance you get far more than switching between models does.

What's Next — Architecture Roadmap

The current setup works well, but there are several architectural directions I'm planning to evolve toward. The biggest shift is moving to a monorepo — consolidating the CMS backend, Next.js frontend, ginbok-mcp, BFF API, and future apps into a single repository managed by Azure Repos with unified pipelines.

The most significant additions planned:

  • BFF API — a dedicated Backend for Frontend layer (ASP.NET Core minimal API) sitting between Optimizely CMS and all frontend apps. It handles aggregation, response shaping, caching, and rate-limiting, so each app gets exactly the data shape it needs without duplicating CMS query logic.
  • ginbok-mcp as a remote HTTP + SSE server — moving from a local stdio transport to a proper hosted MCP server, accessible by any AI client over the network. This makes ginbok-mcp a first-class service in the architecture rather than a local dev tool.
  • App B — a second frontend application sitting alongside Next.js, both consuming the BFF API from the same monorepo.
  • Notification Layer — an event-driven distribution system triggered on post publish via Azure Functions, pushing new content to email newsletters and social channels automatically.
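The BFF's "response shaping" role can be sketched as a mapping from raw CMS content to a per-app DTO. This is a TypeScript illustration of the idea only — the planned BFF is an ASP.NET Core minimal API, and the field names below are assumptions, not Optimizely's real content API shape:

```typescript
// Illustrative BFF response shaping: the frontend's card grid gets only
// the fields it needs, with CMS-specific structure flattened away.
interface CmsPost {
  contentLink: { id: number };
  name: string;
  metaTitle?: string;
  metaDescription?: string;
  mainBody?: string;
  tags?: string; // comma-separated in the CMS
}

interface BlogCardDto {
  id: number;
  title: string;
  summary: string;
  tags: string[];
}

function shapeForCardGrid(post: CmsPost): BlogCardDto {
  return {
    id: post.contentLink.id,
    title: post.metaTitle ?? post.name,       // prefer SEO title, fall back to name
    summary: post.metaDescription ?? "",
    tags: (post.tags ?? "")
      .split(",")
      .map((t) => t.trim())
      .filter(Boolean),
  };
}
```

Each frontend app would get its own shaping function, so CMS query logic lives in one place instead of being duplicated per app.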

Here's what the target architecture looks like:

[Figure: target architecture roadmap — a monorepo in Azure Repos + Pipelines (push → build → deploy all services). Cursor + Claude use ginbok-mcp, now a remote HTTP + SSE server (publish · list · get · update), which calls the core system: Optimizely CMS 12 (ASP.NET Core 6, C#, IIS, SQL Server; BlogGenerator, EN/VI, McpApiController) with Gemini API enrichment and IndexNow + Google Indexing on publish. A BFF API (ASP.NET Core minimal API — aggregate, transform, cache, rate-limit, single entry point) serves shaped responses to Next.js 14 (ginbok.com, EN/VI) and a future App B. A notification layer (webhook on publish via Azure Function) pushes to email (newsletter via Resend) and social (LinkedIn, X).]

The goal is a system where every piece — AI tooling, content pipeline, frontend apps, and distribution — operates as a coherent whole rather than a collection of loosely connected parts. The monorepo structure enforces this: one repo, one pipeline, one place to reason about the entire system.

Tags: AI · Next.js · Optimizely CMS · MCP · Side Project · Developer Workflow · Gemini · Azure DevOps · DBHub · Cursor · Claude