Engineering Case Study
The Matter & Gas Platform
A serverless AI workflow platform that evaluates product concepts and transitions qualified founders into structured project workspaces where software products can be designed and built.
Last updated: March 2026
Context
Problem
Many founders begin building software products before understanding the true technical implications of their ideas.
Common blind spots include:
- Engineering complexity
- Infrastructure requirements
- Operational risks
- Realistic development timelines
- Where AI can and cannot provide leverage
As a development firm, Matter & Gas frequently encountered founders arriving with partially formed ideas and no reliable way to evaluate feasibility before committing to development work.
The goal of this system was to create an infrastructure layer capable of:
- Evaluating proposed software product concepts using structured LLM reasoning
- Producing feasibility analyses in real time
- Converting promising concepts into authenticated project workspaces where products can be designed and built
This required bridging anonymous AI interaction with persistent project infrastructure, while maintaining operational oversight and reproducible AI outputs.
Architecture
The platform is implemented as a serverless AWS architecture designed to support product discovery, project collaboration, and controlled onboarding into development engagements.
User → Next.js Frontend (Diagnostic Interface, Project Workspaces, Admin System) → Lambda Application Layer → DynamoDB Data Layer
The system is intentionally designed around managed services to minimize operational overhead while allowing the platform to scale automatically with usage.
Frontend
The frontend is a Next.js / React application responsible for:
- The public AI diagnostic interface
- Progressive rendering of analysis results
- Authenticated project workspaces (/lobby)
- Internal administrative tooling
AI responses are streamed to the browser using Server-Sent Events (SSE). This allows analysis results to render progressively as they are generated, giving users immediate feedback during longer model executions.
The streaming protocol supports:
- Cancellation handling
- Structured error events
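The client side of this protocol can be sketched as follows. This is an illustrative TypeScript sketch, not the platform's actual code: frame parsing follows the standard `text/event-stream` format, but the event names used in the test and the cancellation signal shape are assumptions.

```typescript
type SseEvent = { event: string; data: string };

// Parse one SSE frame ("event: x\ndata: y") into a structured event.
function parseSseFrame(frame: string): SseEvent {
  let event = "message";
  const dataLines: string[] = [];
  for (const line of frame.split("\n")) {
    if (line.startsWith("event:")) event = line.slice(6).trim();
    else if (line.startsWith("data:")) dataLines.push(line.slice(5).trim());
  }
  return { event, data: dataLines.join("\n") };
}

// Consume a stream of text chunks (e.g. decoded from a fetch response
// body), emitting structured events as complete frames arrive. Frames
// are delimited by a blank line, so partial chunks are buffered until
// a full frame is available -- this is what enables progressive
// rendering. A simple aborted flag models cancellation.
async function consumeStream(
  chunks: AsyncIterable<string>,
  onEvent: (e: SseEvent) => void,
  signal?: { aborted: boolean }
): Promise<void> {
  let buffer = "";
  for await (const chunk of chunks) {
    if (signal?.aborted) return; // cancellation: stop rendering mid-run
    buffer += chunk;
    let sep: number;
    while ((sep = buffer.indexOf("\n\n")) !== -1) {
      onEvent(parseSseFrame(buffer.slice(0, sep)));
      buffer = buffer.slice(sep + 2);
    }
  }
}
```

Structured error events fit naturally here: an `event: error` frame carries a JSON payload the client can render in place of the remaining sections.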
Application Layer
The backend consists of 20 AWS Lambda functions plus a Cognito authentication trigger.
These functions power the platform's core capabilities:
- AI workflow execution
- Project workspace lifecycle
- Messaging infrastructure
- Scheduling and booking operations
- Administrative system operations
- Analytics aggregation jobs
Supporting infrastructure includes:
- Cognito authentication trigger (PreSignUp)
- EventBridge scheduled jobs
- DynamoDB Streams for event-driven operations
- SES transactional email delivery
A DynamoDB Stream on the Booking table triggers the bookingNotification Lambda, which sends confirmation and cancellation emails when booking records change. This allows operational side effects to be handled asynchronously without complicating booking writes.
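The core of that handler is a mapping from a stream record to an email decision. The sketch below isolates that mapping as a pure function; the field names and email shape are assumptions, and the real Lambda would read DynamoDB stream images and send through SES via the AWS SDK rather than return a payload.

```typescript
type BookingImage = {
  email: string;
  slotId: string;
  status: "CONFIRMED" | "CANCELLED";
};

type EmailRequest = { to: string; subject: string } | null;

// Decide which transactional email (if any) a stream record warrants.
// eventName mirrors the DynamoDB Streams event types (INSERT/MODIFY).
function emailForChange(eventName: string, newImage?: BookingImage): EmailRequest {
  if (!newImage) return null; // e.g. REMOVE events carry no new image
  if (eventName === "INSERT" && newImage.status === "CONFIRMED") {
    return { to: newImage.email, subject: `Booking confirmed: ${newImage.slotId}` };
  }
  if (eventName === "MODIFY" && newImage.status === "CANCELLED") {
    return { to: newImage.email, subject: `Booking cancelled: ${newImage.slotId}` };
  }
  return null; // other changes produce no side effect
}
```

Keeping the decision pure makes the side-effecting part of the handler trivial to test, which matters for a function that fires on every table write.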
Data Layer
Application state is stored in DynamoDB using a deliberately denormalized data model optimized for query patterns and UI performance.
The system currently defines 11 primary application models, which support several subsystems within the platform.
Project System
Projects serve as the central object within the system. Each project stores:
- Project metadata
- AI analysis results (analysisJson)
- Collaboration context
Projects transition users from idea exploration into structured product workspaces where development planning and communication occur.
Messaging System
Messaging uses a denormalized structure optimized for inbox performance. Key models include:
- ConversationMessage
- ConversationReadState
- ProjectConversationSummary
Maintaining conversation summaries avoids scanning large message collections when rendering inbox views.
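The denormalized write pattern can be sketched like this: each new message also refreshes the per-project summary, so inbox views read a single small item instead of scanning messages. The shapes below are illustrative assumptions, not the platform's actual models.

```typescript
type Message = {
  projectId: string;
  messageId: string;
  senderId: string;
  body: string;
  sentAt: string;
};

type ConversationSummary = {
  projectId: string;
  lastMessagePreview: string;
  lastSenderId: string;
  lastActivityAt: string;
  messageCount: number;
};

// On each message write, derive the updated summary item. In the real
// system both writes would land in DynamoDB; here we show only the
// derivation, which is the part that keeps inbox reads O(1).
function applyMessage(summary: ConversationSummary | null, msg: Message): ConversationSummary {
  return {
    projectId: msg.projectId,
    lastMessagePreview: msg.body.slice(0, 80), // truncate for the inbox row
    lastSenderId: msg.senderId,
    lastActivityAt: msg.sentAt,
    messageCount: (summary?.messageCount ?? 0) + 1,
  };
}
```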
Onboarding System
The platform includes a lightweight onboarding pipeline designed to transition founders from diagnostic users into active project participants. This pipeline is supported by:
- LeadIntake
- ContactProfile
- ContactSubmission
These models allow the administrative system to track engagement and maintain communication history with founders.
Scheduling System
Scheduling is implemented through:
- AdminScheduleSettings
- BlockedSlot
- Booking
Administrators configure availability windows and booking capacity through the administrative interface. Booking operations rely on DynamoDB conditional writes to guarantee slot exclusivity.
Infrastructure
Infrastructure is deployed using Amplify Gen 2 (CDK-backed) and includes:
- AWS Lambda for application logic
- DynamoDB for application state
- Cognito authentication
- AppSync real-time messaging subscriptions
- EventBridge scheduled jobs
- SES transactional email
- CloudFront + WAF edge protection
- S3 artifact storage
AI Execution
Workflow Engine
AI analysis is executed through a configurable workflow engine.
Workflows are registered through a configuration registry (analysisConfig.ts) that defines the structure of each analysis pipeline.
Each workflow specifies:
- Analysis sections
- Schema validators
- Stage execution dependencies
- Prompt templates
- Section validation rules
This allows new analysis workflows to be introduced without modifying the core orchestration logic.
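A registry in this spirit might look like the sketch below. The types, workflow names, and section contents are assumptions for illustration, not the contents of the actual `analysisConfig.ts`; the point is that the orchestrator only consumes the registry shape.

```typescript
type Section = {
  id: string;
  promptTemplate: string;
  dependsOn: string[]; // stage execution dependencies
  validate: (output: unknown) => boolean; // section validation rule
};

type Workflow = { id: string; sections: Section[] };

// Hypothetical registry: adding a workflow means adding an entry,
// not touching orchestration code.
const registry: Record<string, Workflow> = {
  feasibility: {
    id: "feasibility",
    sections: [
      {
        id: "summary",
        promptTemplate: "Summarize the concept: {{idea}}",
        dependsOn: [],
        validate: (o) => typeof o === "string",
      },
      {
        id: "risks",
        promptTemplate: "List risks given {{summary}}",
        dependsOn: ["summary"],
        validate: (o) => Array.isArray(o),
      },
    ],
  },
};

// The orchestrator orders sections so dependencies run first
// (no cycle detection in this sketch).
function executionOrder(wf: Workflow): string[] {
  const ordered: string[] = [];
  const visit = (id: string): void => {
    if (ordered.includes(id)) return;
    const s = wf.sections.find((x) => x.id === id)!;
    s.dependsOn.forEach(visit);
    ordered.push(id);
  };
  wf.sections.forEach((s) => visit(s.id));
  return ordered;
}
```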
Two workflows currently exist; both run through the same orchestration engine while defining their own section structures and prompts.
Operational Reality
System in Production
The Matter & Gas platform is a live production system deployed on AWS serverless infrastructure.
The platform supports:
- Anonymous AI diagnostic runs with real-time streaming analysis
- Authenticated project workspaces for product development planning
- Real-time messaging between founders and the development team
- Scheduling and engagement management
- Internal administrative tooling for operational visibility
AI execution is handled through Lambda-based orchestration with real-time streaming. Safeguards exist for:
- Cost control
- Concurrency management
- Idempotent execution
- Reproducible AI outputs
The system is designed to tolerate partial failures while maintaining consistent project state and auditability of AI-generated results.
Trade-offs
Engineering Decisions
Several architectural decisions shaped the system. Each involved trade-offs between complexity, reliability, and user experience.
Real-Time AI Streaming (SSE)
AI analysis responses are streamed to the client using Server-Sent Events rather than background job polling.
Benefits:
- Progressive rendering of analysis output
- Improved user feedback during long model runs
- Simplified client/server synchronization
The streaming protocol supports cancellation and structured error signaling.
Passwordless Authentication
Authentication uses Cognito EMAIL_OTP passwordless login.
Benefits:
- Reduced signup friction
- Simpler account recovery
- No password management overhead
Signup is gated through a Cognito PreSignUp trigger which controls account creation.
Demo-to-Authenticated Transition
Users can run the AI diagnostic anonymously. Results are cached locally in the browser.
After signup, the workspace converts the cached analysis into a persistent project record.
This allows founders to explore ideas before committing to creating an account.
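The cache-then-promote flow can be sketched as below. The cache key, the storage interface, and the synchronous `createProject` callback are assumptions (the browser's `localStorage` satisfies the interface, and the real promotion call would be an async API request).

```typescript
type KV = {
  getItem(k: string): string | null;
  setItem(k: string, v: string): void;
  removeItem(k: string): void;
};

const CACHE_KEY = "demo-analysis"; // hypothetical key name

// During the anonymous demo, cache the analysis locally.
function cacheDemoResult(storage: KV, analysisJson: string): void {
  storage.setItem(CACHE_KEY, analysisJson);
}

// After signup, convert the cached analysis into a persistent project
// record, then clear the local copy so it is not promoted twice.
function promoteToProject(
  storage: KV,
  createProject: (analysisJson: string) => string
): string | null {
  const cached = storage.getItem(CACHE_KEY);
  if (cached === null) return null; // nothing to promote
  const projectId = createProject(cached);
  storage.removeItem(CACHE_KEY);
  return projectId;
}
```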
Real-Time Messaging Architecture
Messaging uses AppSync subscriptions for real-time updates. The system includes:
- Subscription buffering during fetch cycles
- ULID-based message identifiers
- Client-side deduplication
Conversation read state is tracked per user and per project to support accurate unread detection.
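The deduplication step can be sketched as a pure merge: a message may arrive via both the fetch cycle and a buffered subscription, so the client keeps the first copy of each identifier. This sketch also leans on a real ULID property — they sort lexicographically by creation time — though the message shape here is an assumption.

```typescript
type InboxMessage = { id: string; body: string }; // id is a ULID

// Merge newly arrived messages into the local list, dropping any whose
// ULID is already present, then restore chronological order (ULIDs
// sort lexicographically by creation time).
function mergeMessages(existing: InboxMessage[], incoming: InboxMessage[]): InboxMessage[] {
  const seen = new Set(existing.map((m) => m.id));
  const merged = [...existing];
  for (const m of incoming) {
    if (!seen.has(m.id)) {
      seen.add(m.id);
      merged.push(m);
    }
  }
  return merged.sort((a, b) => (a.id < b.id ? -1 : 1));
}
```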
Race-Safe Scheduling
Booking operations rely on DynamoDB conditional writes:
attribute_not_exists(slotId)

This guarantees slot exclusivity without requiring transactions.
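The write can be sketched as the parameter object passed to DynamoDB's PutItem; the table and attribute names are assumptions, and the real code would send this through the AWS SDK. `attribute_not_exists` in a `ConditionExpression` is DynamoDB's standard mechanism for this pattern.

```typescript
// Build a race-safe booking write: if two users submit the same slot
// concurrently, DynamoDB rejects the second write with a conditional
// check failure, so exactly one caller wins.
function buildBookingPut(slotId: string, userId: string) {
  return {
    TableName: "Booking", // assumed table name
    Item: {
      slotId: { S: slotId },
      userId: { S: userId },
    },
    // The put fails if an item with this slotId already exists.
    ConditionExpression: "attribute_not_exists(slotId)",
  };
}
```

The losing caller catches the conditional check failure and surfaces a "slot taken" message rather than retrying.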
Prompt Versioning
AI prompts are versioned using an ACTIVE pointer system. This allows:
- Reproducible outputs
- Safe prompt iteration
- Controlled system evolution without redeploying infrastructure
Token estimation utilities support cost analysis of inference workloads.
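An ACTIVE-pointer store can be sketched as immutable prompt versions plus one mutable pointer per prompt: runs record the exact version they used (reproducibility), while iteration just moves the pointer (no redeploy). The store shape and function names below are assumptions.

```typescript
type PromptStore = {
  versions: Map<string, string>; // "promptId#tag" -> immutable template
  active: Map<string, string>;   // promptId -> currently active tag
};

// Publish a new immutable version; existing versions are never edited,
// so past runs remain reproducible.
function publishVersion(store: PromptStore, promptId: string, tag: string, template: string): void {
  store.versions.set(`${promptId}#${tag}`, template);
}

// Move the ACTIVE pointer. Only the pointer changes -- no redeploy.
function activate(store: PromptStore, promptId: string, tag: string): void {
  if (!store.versions.has(`${promptId}#${tag}`)) throw new Error(`unknown version ${tag}`);
  store.active.set(promptId, tag);
}

// Resolve the active version at run time; the returned tag is stored
// alongside the AI output for auditability.
function resolveActive(store: PromptStore, promptId: string): { tag: string; template: string } {
  const tag = store.active.get(promptId);
  if (!tag) throw new Error(`no active version for ${promptId}`);
  return { tag, template: store.versions.get(`${promptId}#${tag}`)! };
}
```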
Behind the scenes
Operational Infrastructure
Scheduled Jobs
EventBridge scheduled tasks perform operational maintenance, including:
- Booking reminder emails (every 30 minutes)
- Analytics aggregation jobs (hourly)
Analytics Pipeline
Administrative analytics are powered by scheduled aggregation jobs and DynamoDB analytics models. AppSync query resolvers serve:
- Summary metrics
- Funnel analysis
- Time-series metrics
- Categorical breakdowns
Abuse Protection
The diagnostic endpoint includes IP-based rate limiting implemented through shared Lambda utilities and edge protection.
The platform is protected by:
- CloudFront
- AWS WAF
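The rate-limiting idea can be sketched as a fixed-window counter per IP. This in-process version is illustrative only: since Lambda instances don't share memory, the platform's shared utilities would back the counter with a store such as DynamoDB (with a TTL attribute expiring old windows), and the limits shown are assumptions.

```typescript
// Create a fixed-window limiter: at most maxPerWindow requests per IP
// within each windowMs-long window. Time is passed in explicitly so
// the logic is deterministic and testable.
function makeLimiter(maxPerWindow: number, windowMs: number) {
  const counts = new Map<string, { windowStart: number; n: number }>();
  return function allow(ip: string, now: number): boolean {
    const entry = counts.get(ip);
    if (!entry || now - entry.windowStart >= windowMs) {
      counts.set(ip, { windowStart: now, n: 1 }); // new window for this IP
      return true;
    }
    entry.n += 1;
    return entry.n <= maxPerWindow; // reject once the window is exhausted
  };
}
```

WAF and CloudFront then absorb volumetric abuse at the edge before it ever reaches the Lambda-level counter.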
Administrative System
The platform includes an internal administrative interface consisting of nine operational pages, including:
- Member management and per-user detail views
- CRM-style contact notes
- Project and booking history
- Inbox and conversation management
- Scheduling configuration
- Analytics dashboards
- AI run audit logs
- Diagnostic state monitoring
Administrative operations include cascade member deletion affecting multiple DynamoDB tables and Cognito user records.
Retrospective
Lessons Learned
If you're building a new software product and want to see how modern serverless AI systems can support product discovery and development, Matter & Gas builds platforms like this for founders bringing new software to market.