🤖

LLM Multi-Provider

OpenAI + Claude behind a common surface. Structured outputs, token usage logged per report.

Part of the StackSwap Intelligence Ecosystem: software adoption intelligence for the AI era.

What Is the StackSwap LLM Layer?

StackSwap calls multiple LLM providers (OpenAI and Claude) behind a common internal surface for recommendation narrative and the guided advisor. Requests use structured outputs (JSON schema) so the response plugs directly into the report pipeline without brittle string parsing. Every call logs token usage to the report's audit columns, so cost per report is observable rather than a black box.
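A minimal sketch of what that surface can look like, assuming hypothetical names (`LlmProvider`, `completeStructured`, `logTokenUsage`) that stand in for the real internal APIs:

```typescript
// Sketch only: LlmProvider, completeStructured, and logTokenUsage are
// illustrative names, not the actual StackSwap internals.

interface StructuredRequest {
  prompt: string;
  schema: object; // JSON Schema the model's response must conform to
}

interface StructuredResponse<T> {
  data: T; // parsed object matching the schema, no string scraping needed
  usage: { inputTokens: number; outputTokens: number };
}

interface LlmProvider {
  name: "openai" | "claude";
  completeStructured<T>(req: StructuredRequest): Promise<StructuredResponse<T>>;
}

// Assumed audit write: updates the report's token-usage columns.
async function logTokenUsage(
  reportId: string,
  provider: string,
  usage: { inputTokens: number; outputTokens: number },
): Promise<void> {
  // e.g. UPDATE reports SET input_tokens = input_tokens + $1 ... WHERE id = $2
  console.log(`[audit] report=${reportId} provider=${provider}`, usage);
}

// One call site in the report pipeline: the provider returns a parsed
// object, and usage lands in the audit columns before the data is used.
async function generateJustification(
  provider: LlmProvider,
  reportId: string,
  prompt: string,
  schema: object,
): Promise<{ justification: string }> {
  const res = await provider.completeStructured<{ justification: string }>({
    prompt,
    schema,
  });
  await logTokenUsage(reportId, provider.name, res.usage);
  return res.data;
}
```

Because the schema is part of the request, either vendor's structured-output mode can satisfy the same `StructuredResponse` shape, which is what keeps the call sites provider-agnostic.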

How It Fits the StackSwap Intelligence Ecosystem

The LLM layer powers the `justification` field on every transformation plan row and the ask-your-report chat. It runs behind server-side API routes so API keys never ship to the client, and behind a provider abstraction so we can swap or A/B providers per call type. Deterministic fallbacks exist for every LLM output: if a call fails, the report still renders with the rules engine's baseline justification.
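One way the fallback guarantee can be implemented, as a sketch; `withFallback` and `baselineJustification` are hypothetical names, and the baseline text stands in for whatever the rules engine actually produces:

```typescript
// Sketch only: withFallback and baselineJustification are illustrative.

type PlanRow = { toolFrom: string; toolTo: string };

// Deterministic baseline from the rules engine, assumed to exist for
// every row so a failed LLM call never blocks report rendering.
function baselineJustification(row: PlanRow): string {
  return `Replace ${row.toolFrom} with ${row.toolTo} based on fit and cost signals.`;
}

async function withFallback(
  llmCall: () => Promise<string>,
  fallback: () => string,
): Promise<string> {
  try {
    return await llmCall();
  } catch {
    // Any provider error (timeout, rate limit, schema mismatch) degrades
    // to the deterministic baseline instead of failing the report.
    return fallback();
  }
}

// Usage inside a server-side API route, where the keys live:
// const text = await withFallback(
//   () => callLlmJustification(row),      // hypothetical LLM call
//   () => baselineJustification(row),
// );
```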

Why This Matters for Quality and Cost Control

Multi-provider + structured output + per-report token logs means StackSwap can move between OpenAI and Claude for cost or quality reasons without rewriting the recommendation pipeline. The architecture treats LLMs as a swappable input, not a core dependency.
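Concretely, the swap point can be as small as a routing table keyed by call type; the assignments below are illustrative, not the live configuration:

```typescript
// Sketch only: the routing values are examples, not the real config.

type CallType = "justification" | "advisorChat";
type ProviderName = "openai" | "claude";

// Swapping or A/B-testing a provider for one call type is a one-line
// config change; the recommendation pipeline never names a vendor.
const providerRouting: Record<CallType, ProviderName> = {
  justification: "claude",
  advisorChat: "openai",
};

function pickProvider(callType: CallType): ProviderName {
  return providerRouting[callType];
}
```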