Design Partner Program · Phase 1

Building Intelligence Together

We're launching the Intelligence module with a select group of Design Partners and Subject Matter Experts. Together, we'll make Synapse the second brain every founder deserves.

Intelligence · Analytics · Actions · Konnect · Build
The Vision
Intelligence that becomes your co-founder

The Intelligence module discovers, structures, enriches, and refines everything about your venture. A living system that compounds intelligence with every interaction.

Feedback → Product Pipeline
How Feedback Becomes Shipped Improvements
  • Design partners onboarded: 5 founders test features using real venture data
  • Bug reports: Screen, console, network, metadata
  • Test sessions: NPS, think-aloud, page tasks
  • Testiment AI ingests & structures results: Sessions tagged, categorized, ref-coded automatically
  • SME reviews raw data: Watches recordings, reads transcripts, reviews logs
  • Improvement Requirement Document (IRD) generated by SME: Issue, hypothesis, priority, ref code, sprint target
  • Product team reviews & approves: Impact vs effort scored, sprint assigned
  • Agents build the fix: AI implements, SME retests with original DP
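The "impact vs effort" gate in the product-review step can be pictured as a simple ratio ranking. This is a hypothetical sketch for illustration only; the actual scoring rubric the product team uses is not specified in this document, and the `prioritise` helper and field names are invented:

```python
# Hypothetical impact-vs-effort triage for submitted IRDs.
# The real rubric used in PO review is not defined in this program doc.

def prioritise(irds: list[dict]) -> list[dict]:
    """Rank IRDs by impact/effort ratio, highest first."""
    return sorted(irds, key=lambda ird: ird["impact"] / ird["effort"], reverse=True)

queue = prioritise([
    {"ref": "P1-IN-F3-A4", "impact": 8, "effort": 2},   # quick win
    {"ref": "P1-IN-F1-A2", "impact": 9, "effort": 9},   # big but costly
])
print(queue[0]["ref"])  # → P1-IN-F3-A4 (ratio 4.0 beats 1.0)
```

In practice a team might weight in sprint capacity or deadlines, but a ratio like this is the usual first cut for "impact vs effort scored, sprint assigned".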
Design Partners
Our Founding Partners

  • 10Demo · Validation
  • Articos · Ideation
  • PureVPN · Growth
  • Paio · Traction
  • Zamanat · Validation
Who's Involved
Two Distinct Roles, One Mission

Design Partners and Subject Matter Experts work in tandem — but never in the same session. This ensures unbiased feedback and expert-grade improvement documents.

🎯

Design Partners

Real founders using Synapse with their own ventures. They represent the authentic user voice — testing features in context, surfacing friction through genuine usage.

  • Real founders at different venture stages (Ideation → Growth)
  • Participate in structured test sessions: usability tests, guided reviews, workshops, interviews
  • Provide feedback on their own ventures' data — not synthetic examples
  • NPS scored at every session close to track sentiment over time
  • Never told what to think — observed in how they navigate and react
🧠

Subject Matter Experts

Domain specialists assigned per feature. Never present during DP sessions. After each session, compiled feedback is handed to the SME who writes the Improvement Requirement Document (IRD).

  • Domain experts in AI quality, UX, market intelligence, analytics, and more
  • Receive raw recordings, transcripts, and timestamped notes — not summaries
  • Write IRDs that translate friction into specific, actionable improvement hypotheses
  • Each SME owns specific features and is accountable for their improvement arc
  • Coordinate retests after fixes ship to confirm hypotheses
Tooling
Testiment — Testing & Feedback Platform

Testiment powers the entire DP & SME feedback loop with two core functions: Bug Reporting and Test Management.

🐛Bug Reporting

Testers report bugs with a single click — the system records the screen session and automatically collects all context engineers need.

  • Screen recording captured on submission
  • Console log captured automatically
  • Step-by-step reproduction path
  • Network tab activity snapshot
  • Browser & device metadata
  • Direct link to the exact state

🧪Test Management

Structure and run tests for Design Partners — NPS, think-aloud recording, and page-by-page guided tasks compiled for SME review.

  • NPS surveys with automatic scoring & tracking
  • Think Aloud — record and talk through your experience
  • Page-by-page tasks the tester completes step by step
  • All session data collected and structured for SME handoff
  • Results tagged with ref codes for full traceability
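NPS scoring follows the standard formula: percent promoters (scores 9–10) minus percent detractors (scores 0–6). A minimal sketch of that formula, as an assumption about what "automatic scoring" computes (not Testiment's actual implementation):

```python
# Standard Net Promoter Score calculation; assumed to match what
# Testiment's "automatic scoring & tracking" produces per session.

def nps(scores: list[int]) -> int:
    """NPS = % promoters (9-10) minus % detractors (0-6), on a 0-10 survey."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

print(nps([10, 9, 9, 7, 6, 3]))  # → 17
```

Scoring every session close this way gives a comparable number per DP per feature, which is what makes the sentiment trend over time meaningful.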
End-to-End Pipeline
From Bug Report to Shipped Feature
  01 · Bug & Test: DPs report bugs & run tests
  02 · Sessions Recorded: Screen, console, steps, network
  03 · AI Ingests: Results auto-structured
  04 · SME Review: Solution via IRD
  05 · IRD Submitted: Issue + hypothesis + priority
  06 · PO Review: Accept or refine
  07 · Agents Build: AI agents implement fix

Sample IRDs
What Improvement Requirement Documents (IRDs) Look Like in Testiment

Each IRD captures the issue, the SME's hypothesis for fixing it, the validation steps taken, and the results.

Master Rollout
Phase-by-Phase Feature Testing

Every feature tested with specific methods, assigned to SMEs, tracked through a complete feedback lifecycle.

P1 · Intelligence · 7 features · 23 Mar

Code | Feature | Feedback Areas | Methods | SME
F1 | Canvas — Venture Overview | UX · Output Quality · Data Presentation · Feature Value · Time Saved · Methodology | UT · GR · NPS | SME-01
F2 | Market Research — Monitor | UX · Output Quality · Data Presentation · Feature Value · Time Saved · Reliability | UT · AS · LOG | SME-02
F3 | Competitor Research — Monitor | UX · Output Quality · Data Presentation · Feature Value · Reliability · Missing Cap. | UT · INT · AS | SME-02
F4 | Product Research | UX · Output Quality · Data Pres. · Feature Value · Time Saved · Methodology · Reliability | UT · GR · NPS | SME-02
F5 | Brand Page | UX · Data Presentation · Feature Value · Reliability · Onboarding | 5S · UT · NPS | SME-03
F6 | Target Audience — ECP / ICP | UX · Output Quality · Agent Interaction · Feature Value · Methodology · Reliability | UT · INT · COG | SME-04
F7 | Doctrine — Venture Docs | UX · Agent Interaction · Feature Value · Methodology · Onboarding · Stage Approp. | UT · WS · LOG | SME-01

P2 · Analytics · 9 features · 1 Apr

Code | Feature | Methods | SME
F1 | War View | UT · GR · LOG | SME-06
F2 | Theatre View | UT · AS · LOG | SME-06
F3 | Battles View | UT · INT · NPS | SME-07
F4 | Battle Zones | UT · COG · LOG | SME-07
F5 | Journeys View | UT · WS · AS | SME-06
F6 | Profiles View | UT · 5S · NPS | SME-03
F7 | Strikes | UT · AS · LOG | SME-07
F8 | Integrations | UT · AS · LOG | SME-11
F9 | Event Mapping | UT · COG · GR | SME-11

P3 · Actions · 5 features · 8 Apr

Code | Feature | Methods | SME
F1 | Strike Validation | UT · INT · NPS | SME-07
F2 | Atomic Actions | UT · COG · LOG | SME-04
F3 | Squad Setup | UT · GR · NPS | SME-04
F4 | Troops | UT · INT · LOG | SME-04
F5 | PowerUps | UT · AS · NPS | SME-11

P4 · Konnect · 5 features · 15 Apr

Code | Feature | Methods | SME
F1 | Contacts | UT · AS · LOG | SME-09
F2 | Outreach | UT · A/B · LOG | SME-09
F3 | Inbox | UT · 5S · NPS | SME-09
F4 | Calendar | UT · INT · NPS | SME-09
F5 | Interview Toolkit | UT · WS · INT | SME-09

P5 · Build · 2 features · 22 Apr

Code | Feature | Methods | SME
F1 | Spec Generation | UT · COG · INT | SME-10
F2 | Product Builder | UT · INT · LOG · NPS | SME-10
SME Registry
Domain Expert Assignments

Each SME owns specific features and writes IRDs that translate friction into actionable hypotheses.

SME | Domain | Features | Focus
SME-01 | Output Quality & Methodology | P1-IN-F1, P1-IN-F7 | Prompt tuning, agent output quality, stage-gating
SME-02 | Market Intelligence & Data Accuracy | P1-IN-F2, F3, F4 | Research output accuracy, data completeness
SME-03 | UX & Onboarding | P1-IN-F5, P2-AN-F6 | Onboarding flows, learnability, first-impression clarity
SME-04 | AI / Agent Quality | P1-IN-F6, P3-AC-F2, F3, F4 | HITL, agent trust, directive quality
SME-05 | Knowledge Mgmt & Stage-Gating | P1-IN-F7 | Folder structure, agent-to-folder mappings
SME-06 | Analytics & Data Visualisation | P2-AN-F1, F2, F5 | Charts, data density, cross-theatre consistency
SME-07 | Strike Intelligence & Battle Zones | P2-AN-F3, F4, F7, P3-AC-F1 | AAARRR mapping, lifecycle transitions
SME-08 | Outreach & Messaging Quality | P3-AC-F5 (Outreach PowerUps) | Copy quality, A/B signal interpretation
SME-09 | Konnect & Outreach Intelligence | P4-KO-F1 through F5 | Contact enrichment, scheduling, synthesis
SME-10 | Build & Prototype Quality | P5-BD-F1, F2 | SDD accuracy, scaffold success rate
SME-11 | Integrations & Data Connectivity | P2-AN-F8, F9, P3-AC-F5 | Connector coverage, event mapping accuracy
Taxonomy
Test Methods, Feedback Areas & Reference System

The complete classification system for collecting, categorizing, and tracing every piece of feedback.

10 Test Methods

  • UT · Usability Test: Live moderated task walk-through
  • GR · Guided Review: Demo + structured Q&A
  • AS · Async Survey: Structured feedback form
  • INT · In-depth Interview: 1:1 deep-dive session
  • 5S · 5-Second Test: First-impression snap test
  • COG · Cognitive Walkthrough: Expert heuristic evaluation
  • LOG · Usage Log Analysis: Behavioural data from sessions
  • A/B · A/B Experiment: Variant comparison test
  • NPS · NPS / CSAT Pulse: Post-session satisfaction
  • WS · Workshop / Co-design: Collaborative design session

12 Feedback Areas

  • A1 · User Experience: Navigation, cognitive load, hierarchy
  • A2 · Output Quality: Accuracy, relevance, completeness
  • A3 · Data Presentation: Charts, tables, info density
  • A4 · Agent Interaction: Directives, HITL friction, trust
  • A5 · Feature Value: ROI, JTBD fit, willingness to pay
  • A6 · Time Saved: Speed vs manual, efficiency delta
  • A7 · Methodology: Prompt structure, step sequencing
  • A8 · Reliability & Trust: Consistency, confidence in outputs
  • A9 · Onboarding: Time-to-value, discoverability
  • A10 · Integration Fit: Workflow compatibility, data I/O
  • A11 · Stage Appropriateness: Right feature at right stage
  • A12 · Missing Capability: Gaps vs JTBD not served

Reference Code Anatomy

P1-IN-F3-A4-10DEMO-UT-SME2
  • P1 · Phase 1
  • IN · Intelligence (module)
  • F3 · Competitor Research (feature)
  • A4 · Agent Interaction (feedback area)
  • 10DEMO · Design Partner
  • UT · Usability Test (method)
  • SME2 · Assigned SME
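Because each segment sits at a fixed position, a ref code splits cleanly on hyphens. A minimal sketch of reading one back into its parts; the `parse_ref` helper and its field names are illustrative, not part of Testiment:

```python
# Illustrative parser for the seven-segment reference code format
# described above. Field names are hypothetical, not Testiment's API.

def parse_ref(code: str) -> dict:
    """Split a ref code like P1-IN-F3-A4-10DEMO-UT-SME2 into named segments."""
    phase, module, feature, area, partner, method, sme = code.split("-", 6)
    return {
        "phase": phase,      # P1..P5 rollout phase
        "module": module,    # e.g. IN for Intelligence
        "feature": feature,  # F-number within the phase
        "area": area,        # A1..A12 feedback area
        "partner": partner,  # Design Partner tag, e.g. 10DEMO
        "method": method,    # test method code, e.g. UT
        "sme": sme,          # assigned SME
    }

ref = parse_ref("P1-IN-F3-A4-10DEMO-UT-SME2")
print(ref["module"], ref["method"])  # → IN UT
```

This is what makes "full traceability" cheap: any tagged session, bug, or IRD can be grouped by phase, feature, feedback area, partner, method, or SME with a plain string split.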