How We Ship AI Products: The Virtual Minds Build Stack

Seven consumer AI apps. Over one million users. A small platform team. Building AI products at this pace requires a deliberate engineering stack — one optimized for shipping speed, model flexibility, and unit economics that work at scale.
The constraint that shapes everything
Every architectural decision at Virtual Minds starts with one question: does this scale across products? Our seven apps span interior design, landscape design, automotive photography, professional headshots, chart analysis, video generation, and conversation analysis. They all live on the same underlying platform — what we call Cortex.
Cortex is not a framework you can `npm install`. It is the shared discipline of building each product to feed back into a single intelligence platform that gets smarter with every shipped feature, every user interaction, every model call.
The stack, layer by layer
1. Application layer — Next.js 15 + React 19
Every public-facing surface is Next.js with the App Router. Server Components by default, client components only where interactivity demands it. We standardize on TypeScript strict mode, Tailwind CSS 4, and a shared design system that lets a new app spin up in days, not weeks.
This blog post itself is rendered through that same stack — content stored in Firestore, fetched on the server, revalidated every 60 seconds.
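As a rough sketch of that pattern (with hypothetical collection, path, and component names, not our actual code), a Next.js 15 Server Component page can read Firestore through the Admin SDK and opt into 60-second revalidation:

```ts
// app/blog/[slug]/page.tsx (hypothetical path and names)
import { initializeApp, getApps, applicationDefault } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";
import { notFound } from "next/navigation";

// Re-render this page at most once every 60 seconds (ISR).
export const revalidate = 60;

// Initialize the Admin SDK once per server process.
if (getApps().length === 0) {
  initializeApp({ credential: applicationDefault() });
}

export default async function BlogPostPage({
  params,
}: {
  params: Promise<{ slug: string }>; // params is async in Next.js 15
}) {
  const { slug } = await params;

  // Server Component: this Firestore read runs on the server only,
  // so the client bundle never ships database code.
  const snap = await getFirestore().collection("posts").doc(slug).get();
  if (!snap.exists) notFound();

  const post = snap.data()!;
  return (
    <article>
      <h1>{post.title}</h1>
      <div>{post.body}</div>
    </article>
  );
}
```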
2. AI orchestration — Claude as the default brain
Claude (currently Claude Opus 4 and Sonnet 4) handles the majority of our reasoning workloads: prompt engineering, content generation, agentic workflows, and structured output extraction. For image generation we use a mix of providers depending on the use case. For chart analysis and computer vision we use specialized vendor models.
The key insight: do not lock into one model. Our integration layer abstracts the provider, so when a better model launches we can swap it in without rewriting the product.
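An abstraction like that can be as small as one interface plus thin vendor adapters. The sketch below uses hypothetical names (`ModelProvider`, `complete`, the registry) against Anthropic's public Messages API; it is a minimal illustration of the pattern, not our actual integration code.

```ts
// Hypothetical provider-abstraction sketch; names are illustrative.
interface CompletionRequest {
  system?: string;
  prompt: string;
  maxTokens: number;
}

interface ModelProvider {
  id: string;
  complete(req: CompletionRequest): Promise<string>;
}

// Each vendor gets a thin adapter; product code only ever sees ModelProvider.
class AnthropicProvider implements ModelProvider {
  id = "anthropic/claude-sonnet-4";

  async complete(req: CompletionRequest): Promise<string> {
    const res = await fetch("https://api.anthropic.com/v1/messages", {
      method: "POST",
      headers: {
        "x-api-key": process.env.ANTHROPIC_API_KEY!,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
      },
      body: JSON.stringify({
        model: "claude-sonnet-4-20250514",
        max_tokens: req.maxTokens,
        system: req.system,
        messages: [{ role: "user", content: req.prompt }],
      }),
    });
    const data = await res.json();
    return data.content[0].text; // first content block of the response
  }
}

// Swapping the default model is a one-line registry change,
// not a product rewrite.
const providers: Record<string, ModelProvider> = {
  default: new AnthropicProvider(),
};

export function complete(req: CompletionRequest, key = "default") {
  return providers[key].complete(req);
}
```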
3. Data layer — Firestore + Cloud Functions
All product data — user accounts, content libraries, app metrics, customer support tickets — lives in Firestore. We use Cloud Functions for server-side compute, including model orchestration, webhook handling, and async pipelines. This gives us global low-latency reads at consumer scale without operating database clusters.
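To make the async-pipeline piece concrete, here is a minimal sketch of a Firestore-triggered function using the firebase-functions v2 SDK. The `renderJobs` collection and its fields are hypothetical stand-ins, not our production schema.

```ts
// functions/src/index.ts: a Firestore-triggered async pipeline.
// The "renderJobs" collection and its fields are hypothetical.
import { initializeApp } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";
import { onDocumentCreated } from "firebase-functions/v2/firestore";

initializeApp();

// When a product writes a job document, this function picks it up
// server-side: run the model call, store the result, flip the status.
// Clients just listen to the document; no polling loops.
export const processRenderJob = onDocumentCreated(
  "renderJobs/{jobId}",
  async (event) => {
    const job = event.data?.data();
    if (!job || job.status !== "pending") return;

    // ...model orchestration would happen here...

    await getFirestore()
      .doc(`renderJobs/${event.params.jobId}`)
      .update({ status: "done", completedAt: Date.now() });
  }
);
```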
4. Distribution — iOS, Android, web, extensions, API
Each product ships on as many channels as make economic sense. Room AI lives on iOS and the web, with Android on the way. Reshot AI ships on iOS, Android, the web, and a public API. Adding a channel is always a unit-economics call: does the channel pay for itself within 12 months?
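The payback check itself is simple arithmetic. A back-of-the-envelope version, with every name and number made up for illustration:

```ts
// Back-of-the-envelope channel payback check; all inputs are hypothetical.
interface ChannelEstimate {
  buildCost: number;          // one-time engineering cost, USD
  monthlyFixedCost: number;   // store fees, infra, maintenance, USD/month
  monthlyGrossMargin: number; // revenue minus variable costs, USD/month
}

// Months until cumulative margin covers the build, or null if it never does.
function paybackMonths(c: ChannelEstimate): number | null {
  const net = c.monthlyGrossMargin - c.monthlyFixedCost;
  return net > 0 ? Math.ceil(c.buildCost / net) : null;
}

// A channel clears the bar only if payback lands within 12 months.
const androidPort: ChannelEstimate = {
  buildCost: 60_000,
  monthlyFixedCost: 1_000,
  monthlyGrossMargin: 9_000,
};
console.log(paybackMonths(androidPort)); // 8 -> ship it
```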
The shared services every product gets for free
Building seven apps with a small team only works because every new product inherits:
- Authentication — Firebase Auth wired into Apple, Google, and email/password (see the sketch after this list)
- Subscriptions — RevenueCat-based subscription management with cross-platform receipt validation
- Analytics — Unified event pipeline feeding our Cortex dashboard
- Customer support — Shared help center infrastructure and inbound ticket routing
- Push notifications — FCM-based delivery with per-product topic management
- Content management — Shared blog, help articles, and announcement system
The implicit message to every product team: do not build any of these yourself. Ship the AI feature, not the auth screen.
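To give a feel for what inheriting these services looks like from a product team's seat, here is a minimal sketch of the authentication piece using the standard Firebase web SDK. The module path and function names are hypothetical; the point is that a new app gets sign-in by calling into a shared layer rather than building its own.

```ts
// shared/auth.ts (hypothetical module): sign-in helpers every app inherits.
import { initializeApp } from "firebase/app";
import {
  getAuth,
  signInWithPopup,
  GoogleAuthProvider,
  OAuthProvider,
} from "firebase/auth";

const app = initializeApp({
  /* per-product Firebase config injected at build time */
});
const auth = getAuth(app);

// Google sign-in is one call against the shared auth instance.
export async function signInWithGoogle() {
  const { user } = await signInWithPopup(auth, new GoogleAuthProvider());
  return user;
}

// On the web, Sign in with Apple goes through the generic OAuth provider.
export async function signInWithApple() {
  const { user } = await signInWithPopup(auth, new OAuthProvider("apple.com"));
  return user;
}
```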
What we are building toward
The endgame is Cortex as a public AI operating system — a platform where the cross-product learning, the shared intelligence, and the unit economics of seven shipped products combine into something a single-product company cannot match. Each app we ship today is both a real consumer product and a piece of training data for what comes next.
If you are building an AI application company in 2026, the question is not "which model do I use?" It is "which compound platform am I building toward?"
