Monce Suite is an orchestrator that sits in front of 10 independent Snake SAT classifiers. Each classifier runs on its own EC2 instance with no LLM dependency in the default path.
```
User text
     │
     ▼
┌──────────┐
│  Monce   │  POST /monce-haiku or /monce-sonnet
│  Suite   │ ─── fan-out to 10 /comprendre ──▶  all with anthropic=False
└────┬─────┘
     │
 ┌───▼────┐
 │ Trust  │  aggregate trust score from 10 responses
 │ Assess │  high (≥65): return deterministic
 └───┬────┘  low (<65): one LLM call to synthesize
     │
 ┌───▼────┐
 │ Claude │  dynamic prompt built from what the
 │  (1x)  │  deterministic layer actually found
 └────────┘
```
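The fan-out step can be sketched as below. The service names and the `/comprendre` endpoint with `anthropic=False` come from this document; the internal URL scheme, the response shape, and the 2-second timeout are assumptions for illustration only.

```python
# Hypothetical fan-out sketch. URLs, timeouts, and response shape are
# assumptions; only the service names, the /comprendre endpoint, and
# anthropic=False come from the architecture above.
import concurrent.futures
import json
import urllib.request

SERVICES = [
    "emailclassifier", "quoteclassifier", "requestclassifier",
    "furnitureclassifier", "businessclassifier", "selfservice",
    "salesagentclassifier", "negociationclassifier",
    "procurementclassifier", "benchmarkclassifier",
]

def call_service(name: str, text: str) -> dict:
    """POST the user text to one classifier with anthropic=False (no LLM)."""
    payload = json.dumps({"text": text, "anthropic": False}).encode()
    req = urllib.request.Request(
        f"http://{name}.internal/comprendre",  # assumed internal DNS scheme
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return json.load(resp)

def fan_out(text: str) -> dict:
    """Query all 10 classifiers concurrently; drop services that fail."""
    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
        futures = {pool.submit(call_service, s, text): s for s in SERVICES}
        for fut in concurrent.futures.as_completed(futures):
            try:
                results[futures[fut]] = fut.result()
            except Exception:
                pass  # a failed service just means fewer votes downstream
    return results
```

Running the calls in a thread pool keeps the wall-clock latency close to the slowest single service rather than the sum of all ten.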
Without the suite, each service running with anthropic=True makes its own LLM call: 10 services means 10 LLM calls per user input. With the suite, the expected cost drops to 0.3x of a single LLM call, since roughly 70% of inputs are served deterministically and only the remaining 30% trigger one LLM call.
Each downstream classifier returns a trust score (0-100) measuring extraction quality, signal density, classification confidence, and model agreement.
| Zone | Threshold | Action |
|---|---|---|
| High trust | ≥ 65 | Return deterministic. No LLM. |
| Grey zone | 35-64 | LLM enhances weak areas. |
| Low trust | < 35 | LLM full synthesis. |
| Too few responses | < 3 services responding | LLM synthesis from whatever is available. |
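The routing logic in the table above can be sketched as a single function. The 65/35 thresholds and the 3-service minimum come from the table; the aggregation rule (a plain mean of per-service trust scores) and the `"trust"` field name are assumptions for illustration.

```python
# Sketch of the trust-assessment step. Thresholds (65/35) and the
# 3-service minimum come from the routing table; aggregating by
# simple mean is an assumption, not the documented rule.
from statistics import mean

def assess(responses: dict) -> str:
    """Map fan-out responses to one of the routing actions."""
    scores = [r["trust"] for r in responses.values() if "trust" in r]
    if len(scores) < 3:
        return "llm_full"        # too few services answered
    trust = mean(scores)
    if trust >= 65:
        return "deterministic"   # high trust: no LLM call
    if trust >= 35:
        return "llm_enhance"     # grey zone: LLM patches weak areas
    return "llm_full"            # low trust: full LLM synthesis
```

For example, three responses with trust scores 80, 70, and 60 average to 70 and are returned deterministically, with no LLM call.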
The LLM prompt is not static. It is constructed dynamically from whatever the 10 classifiers actually returned, with each classifier's highlights extracted programmatically.
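A minimal sketch of that dynamic prompt construction is below. The response field names (`label`, `trust`, `highlights`) are assumptions about the classifier output shape, not a documented schema.

```python
# Hypothetical prompt builder. Field names ("label", "trust",
# "highlights") are assumed for illustration; the real response
# schema is not documented here.
def build_prompt(responses: dict) -> str:
    lines = ["Synthesize a final answer from these classifier outputs:"]
    for service, r in sorted(responses.items()):
        label = r.get("label", "unknown")
        trust = r.get("trust", 0)
        highlights = ", ".join(r.get("highlights", [])) or "none"
        lines.append(f"- {service}: label={label} (trust {trust}); highlights: {highlights}")
    lines.append("Do not re-analyze the raw text; work only from the outputs above.")
    return "\n".join(lines)
```

A classifier that returned nothing useful simply contributes a `highlights: none` line, so the prompt always reflects what the deterministic layer actually found.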
The LLM acts as a synthesizer and tie-breaker, not a re-classifier. It never re-analyzes the raw text from scratch — it works from the structured outputs of 10 specialized models.
| Service | Domain | Models | Best Metric |
|---|---|---|---|
| emailclassifier | Email triage | 10 Snake | 0.958 AUROC |
| quoteclassifier | Quote analysis | 10 Snake | 0.894 AUROC |
| requestclassifier | Complaint routing | 4 Snake | 0.990 AUROC |
| furnitureclassifier | Field sales scoring | 1 Snake (27 feat) | 0.860 AUROC |
| businessclassifier | Task dispatch | 1 Snake (8 feat) | 0.726 AUROC |
| selfservice | Product search | 10 Snake | 99.4% typo acc |
| salesagentclassifier | Prospect scoring | 5 Snake | 93.3% acc |
| negociationclassifier | Offer analysis | 5 Snake | 0.936 AUROC |
| procurementclassifier | Invoice control | 2 Snake | 98.6% acc |
| benchmarkclassifier | Supplier benchmark | 10 Snake | 0.959 avg AUROC |
All classifiers use Snake SAT v5.2.1, based on the Dana Theorem (2024): any indicator function over a finite discrete domain can be encoded as a SAT instance in polynomial time. Training complexity: O(L × n × m × b). Inference: O(L × clauses). No backpropagation, no neural networks.
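To make the O(L × clauses) inference cost concrete, here is a schematic toy: a classifier encoded as a CNF over binarized features is evaluated by scanning every clause once against the feature assignment. This is an illustration of SAT-instance evaluation in general, not the actual Snake SAT v5.2.1 encoding.

```python
# Toy CNF evaluation, illustrating O(L × clauses) inference.
# This is NOT the Snake SAT v5.2.1 format, just a schematic example.
def eval_cnf(clauses, assignment):
    """A clause is a list of literals: literal i means variable i is
    True, literal -i means variable i is False. Returns True iff every
    clause has at least one satisfied literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# Toy indicator: fires when feature 1 is set and feature 2 is not.
clauses = [[1], [-2]]
assert eval_cnf(clauses, {1: True, 2: False}) is True
assert eval_cnf(clauses, {1: True, 2: True}) is False
```

Because inference is pure clause evaluation, there is no matrix math and no GPU dependency, which is what lets each classifier run CPU-only.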
| Phase | Time | Cost |
|---|---|---|
| Fan-out to 10 services | 50-300ms | $0 (CPU only) |
| Trust assessment | <1ms | $0 |
| LLM fallback (if needed) | 1-4s | 1x Haiku or Sonnet |
| Total (deterministic) | 50-300ms | $0 |
| Total (LLM-enhanced) | 1.5-4.5s | 1x call |