The Math Behind Monce Suite

Charles Dana — Monce SAS, April 2026

1. Architecture

Monce Suite is an orchestrator that sits in front of 10 independent Snake SAT classifiers. Each classifier runs on its own EC2 instance with no LLM dependency in the default path.

  User text
      │
      ▼
 ┌──────────┐
 │  Monce   │ POST /monce-haiku or /monce-sonnet
 │  Suite   │ ─── fan-out to 10 /comprendre ──▶ all with anthropic=False
 └────┬─────┘
      │
  ┌───▼────┐
  │ Trust  │  aggregate trust score from 10 responses
  │ Assess │  high (≥65): return deterministic
  └───┬────┘  low (<65): one LLM call to synthesize
      │
  ┌───▼────┐
  │ Claude │  dynamic prompt built from what the
  │ (1x)   │  deterministic layer actually found
  └────────┘
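The fan-out step in the diagram can be sketched as a concurrent map over the ten services. The /comprendre path and anthropic=False flag come from the diagram itself; the HTTP transport is abstracted behind a callable (an assumption, so the pattern runs offline), and a service that errors out simply contributes None.

```python
# Sketch of the fan-out step. Each classifier exposes POST /comprendre
# and accepts anthropic=False (per the diagram above). The transport is
# abstracted behind a callable so this runs offline; a failed service
# contributes None and the trust assessment copes with the gap.
from concurrent.futures import ThreadPoolExecutor

def fan_out(services, call):
    """Query all classifiers concurrently, keyed by service name."""
    def safe(svc):
        try:
            return call(svc)  # e.g. POST {svc}/comprendre, anthropic=False
        except Exception:
            return None       # degrade gracefully instead of failing the batch
    with ThreadPoolExecutor(max_workers=len(services)) as pool:
        # pool.map preserves input order, so zip re-keys results correctly
        return dict(zip(services, pool.map(safe, services)))

# Offline usage with a stub transport standing in for HTTP:
stub = lambda svc: {"service": svc, "trust": 72}
results = fan_out([f"svc{i}" for i in range(10)], stub)
```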

2. The Economics

Without the suite, each service with anthropic=True makes its own LLM call: 10 services means 10 LLM calls per user input. With the suite, only the low-trust fallback triggers an LLM call. Since roughly 70% of inputs resolve deterministically (trust ≥ 65), the expected cost is 0.3x a single LLM call per input, about 3% of the unorchestrated cost.
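The arithmetic behind that figure, using the 70% deterministic-resolution rate stated in the text:

```python
# Expected per-input LLM cost with and without the orchestrator.
# Figures from the text: 10 services, ~70% of inputs resolved
# deterministically (trust >= 65), at most one synthesis call otherwise.
N_SERVICES = 10
P_DETERMINISTIC = 0.70

cost_without_suite = N_SERVICES * 1.0          # every service calls the LLM
cost_with_suite = (1 - P_DETERMINISTIC) * 1.0  # at most one fallback call

print(cost_without_suite)  # 10.0 calls per input
print(cost_with_suite)     # ~0.3 calls per input
```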

3. Trust Score Assessment

Each downstream classifier returns a trust score (0-100) measuring extraction quality, signal density, classification confidence, and model agreement.

  Zone          Threshold      Action
  High trust    ≥ 65           Return deterministic. No LLM.
  Grey zone     35-64          LLM enhances weak areas.
  Low trust     < 35           LLM full synthesis.
  Too few OK    < 3 services   LLM with whatever is available.
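The thresholds above can be sketched as a routing function. The thresholds and the minimum-responder rule come from the table; the aggregation (a mean over responding services) is an assumption, since the text does not specify how the 10 scores combine.

```python
def route(trust_scores, high=65, low=35, min_services=3):
    """Pick the orchestration path from per-classifier trust scores (0-100).

    Thresholds mirror the zone table; the mean aggregation is an
    assumption -- the text does not say how the 10 scores combine.
    None entries represent services that failed to respond.
    """
    ok = [s for s in trust_scores if s is not None]
    if len(ok) < min_services:
        return "llm_full"         # too few responders: LLM with what's there
    aggregate = sum(ok) / len(ok)
    if aggregate >= high:
        return "deterministic"    # high trust: no LLM call at all
    if aggregate >= low:
        return "llm_enhance"      # grey zone: LLM patches weak areas
    return "llm_full"             # low trust: full LLM synthesis
```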

4. Dynamic LLM Prompt

The LLM prompt is not static. It is constructed dynamically from whatever the 10 classifiers actually returned, with each classifier's highlights extracted programmatically.
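A minimal sketch of that extraction and assembly step. The response schema (keys "label", "confidence", "top_signals") is hypothetical, since the text does not document the /comprendre payload.

```python
# Sketch of highlight extraction and dynamic prompt construction.
# The response keys ("label", "confidence", "top_signals") are
# hypothetical -- the /comprendre payload is not documented here.
def extract_highlights(responses):
    """Render each classifier's structured output as one prompt line."""
    lines = []
    for service, resp in responses.items():
        if resp is None:
            continue  # unreachable service: leave the gap to the LLM
        signals = ", ".join(resp["top_signals"])
        lines.append(
            f"- {service}: {resp['label']} "
            f"(confidence {resp['confidence']:.2f}; signals: {signals})"
        )
    return "\n".join(lines)

def build_prompt(user_text, responses):
    """Assemble the synthesis prompt from deterministic findings only."""
    return (
        "Synthesize a final assessment from these classifier findings. "
        "Do not re-classify the raw text from scratch.\n\n"
        f"Findings:\n{extract_highlights(responses)}\n\n"
        f"User text:\n{user_text}"
    )
```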

The LLM acts as a synthesizer and tie-breaker, not a re-classifier. It never re-analyzes the raw text from scratch — it works from the structured outputs of 10 specialized models.

5. The 10 Classifiers

  Service                Domain               Models             Best Metric
  emailclassifier        Email triage         10 Snake           0.958 AUROC
  quoteclassifier        Quote analysis       10 Snake           0.894 AUROC
  requestclassifier      Complaint routing    4 Snake            0.990 AUROC
  furnitureclassifier    Field sales scoring  1 Snake (27 feat)  0.860 AUROC
  businessclassifier     Task dispatch        1 Snake (8 feat)   0.726 AUROC
  selfservice            Product search       10 Snake           99.4% typo acc
  salesagentclassifier   Prospect scoring     5 Snake            93.3% acc
  negociationclassifier  Offer analysis       5 Snake            0.936 AUROC
  procurementclassifier  Invoice control      2 Snake            98.6% acc
  benchmarkclassifier    Supplier benchmark   10 Snake           0.959 avg AUROC

6. Theoretical Foundation

All classifiers use Snake SAT v5.2.1, based on the Dana Theorem (2024): any indicator function over a finite discrete domain can be encoded as a SAT instance in polynomial time. Training complexity: O(L × n × m × b). Inference: O(L × clauses). No backpropagation, no neural networks.
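The theorem's core claim can be illustrated with a toy construction: a CNF formula whose satisfying assignments are exactly the support of a Boolean indicator function, built by adding one blocking clause per falsifying assignment. This brute-force sketch enumerates the whole domain (so it is exponential, unlike the polynomial-time encoding the theorem asserts) and is an illustration of the general idea, not the Snake SAT v5.2.1 construction.

```python
# Toy illustration of encoding an indicator function as SAT.
# This brute-force version enumerates the domain; it demonstrates the
# correspondence, NOT the polynomial-time Snake SAT v5.2.1 encoding.
from itertools import product

def indicator_to_cnf(f, n):
    """Encode f: {0,1}^n -> {0,1} as CNF over variables 1..n.

    For every assignment where f is 0, add one clause falsified only by
    that assignment. The CNF's models are then exactly f's support.
    """
    clauses = []
    for bits in product([0, 1], repeat=n):
        if f(bits) == 0:
            # Positive literal where the bit is 0, negative where it is 1.
            clauses.append([-(i + 1) if b else (i + 1)
                            for i, b in enumerate(bits)])
    return clauses

def satisfies(clauses, bits):
    """Check an assignment (tuple of 0/1) against the CNF."""
    return all(
        any((lit > 0) == bool(bits[abs(lit) - 1]) for lit in clause)
        for clause in clauses
    )

# Toy indicator: XOR of two bits.
xor = lambda bits: bits[0] ^ bits[1]
cnf = indicator_to_cnf(xor, 2)
assert all(satisfies(cnf, b) == bool(xor(b))
           for b in product([0, 1], repeat=2))
```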

7. Latency Budget

  Phase                     Time       Cost
  Fan-out to 10 services    50-300ms   $0 (CPU only)
  Trust assessment          <1ms       $0
  LLM fallback (if needed)  1-4s       1x Haiku or Sonnet
  Total (deterministic)     50-300ms   $0
  Total (LLM-enhanced)      1.5-4.5s   1x call
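Combining this budget with the 70/30 split from Section 2 gives a rough expected end-to-end latency. The midpoint-of-range figures are an assumption; real latency depends on the fan-out tail and the model chosen.

```python
# Rough expected latency under the 70/30 split from Section 2, using
# the midpoints of the ranges in the budget table (an assumption; the
# real figure depends on the fan-out tail and the LLM model).
p_det = 0.70
det_ms = (50 + 300) / 2      # midpoint of the deterministic path
llm_ms = (1500 + 4500) / 2   # midpoint of the LLM-enhanced path

expected_ms = p_det * det_ms + (1 - p_det) * llm_ms
print(expected_ms)  # 1022.5 ms, i.e. ~1s expected per input
```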