Corpus Intelligence AI Assistant — Claude 2026-04-26 00:39 UTC
Not yet connected
🛡️ Public data only — no PHI permitted on this instance.
Anthropic Claude API
NOT CONFIGURED
Backs the AI-assist features on this platform — IC memo drafting, document Q&A, conversational portfolio queries. Every feature degrades gracefully when the key is absent.
Total calls
0
since install
Est. cost
$0.00
USD
Cached
0
prompts re-served
Models in use
0
seen in history
what Claude powers on this platform
FEATURES THAT USE CLAUDE
| Feature | Endpoint | What it does |
| --- | --- | --- |
| Partner review confirmation | /deal/<id>/partner-review | Runs a cached Claude second pass over the PE verdict so the UI can show a concise confirm / watch-items summary. |
| IC Memo drafting | /api/deals/<id>/memo?llm=1 | Generates the memo narrative sections, fact-checking them against packet dollar amounts and percentages. |
| Document Q&A | /api/deals/<id>/qa?q=… | Searches indexed deal documents and returns answers with confidence scores. |
| Multi-turn chat | POST /api/chat | Conversational interface with tool-calling: asks the platform for portfolio data and synthesizes answers. |
Every feature falls back to a non-LLM template when the key is not set, so the platform keeps working — just without synthesized prose.
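The fallback behavior described above can be sketched as follows. The helper names (`synthesize_summary`, `llm_summary`) and packet fields are hypothetical; the real module layout may differ, but the pattern is the same: check for the key, otherwise render a deterministic template.

```python
import os

def llm_summary(packet: dict) -> str:
    # Hypothetical stand-in for the Claude-backed path.
    return f"Deal {packet['id']}: {packet['verdict']} (LLM)"

def synthesize_summary(packet: dict) -> str:
    """Return a deal summary, preferring Claude when a key is configured."""
    if os.environ.get("ANTHROPIC_API_KEY"):
        # Key present: take the LLM path.
        return llm_summary(packet)
    # No key: fall back to a template so the page still renders.
    return (
        f"Deal {packet['id']}: {packet['verdict']} "
        f"(template fallback; set ANTHROPIC_API_KEY for synthesized prose)"
    )
```

Because the template path never raises, the rest of the platform stays usable without the key; only the synthesized prose is lost.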
Claude 4 lineup
AVAILABLE MODELS
| Model ID | Name | Notes |
| --- | --- | --- |
| claude-opus-4-7 | Opus 4.7 | Highest capability; deep reasoning. |
| claude-sonnet-4-6 | Sonnet 4.6 (default) | Balanced cost / quality. |
| claude-haiku-4-5-20251001 | Haiku 4.5 | Fast and cheap; good for bulk fact-checking. |
Override per feature via the `model` kwarg on LLMClient.generate(), or set the ANTHROPIC_DEFAULT_MODEL env var.
how to connect your key
HOW TO CONNECT
  1. Get an API key from console.anthropic.com.
  2. Export it in the shell before launching the server:
    export ANTHROPIC_API_KEY="sk-ant-…"
    .venv/bin/python seekingchartis.py --port 8090
  3. Reload this page. The status badge will flip to CONNECTED and the key fingerprint will show at the top of this panel.
The key is read from process env only — never stored on disk or logged. Cost tracking in llm_calls records model + token counts, not prompts.
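The two privacy guarantees above can be sketched like this. The function names are hypothetical; the point is that only a non-reversible hash of the key is ever displayed, and the llm_calls row carries no prompt text.

```python
import hashlib

def key_fingerprint(key: str) -> str:
    """Short non-reversible fingerprint for the status badge; the key stays in env."""
    return hashlib.sha256(key.encode()).hexdigest()[:8]

def record_call(model: str, input_tokens: int, output_tokens: int) -> dict:
    """Row for the llm_calls cost tracker: model + token counts only, never prompts."""
    return {
        "model": model,
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
    }
```

Truncating a SHA-256 digest to eight hex characters is enough to recognize a key in the UI without leaking anything recoverable.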
Related
Integrations · Automation Rules · API Docs · Audit Log · System Info