v1.0.0 RELEASE

STOP GUESSING.
START MEASURING.
OPTIMIZE.

The definitive LLM analytics layer for your browser. Track tokens, score efficiency, and debug your prompt engineering workflow.

prompt_pal_cli
claude-3-opus   0.85
gpt-4o          0.42
grok-beta       0.60
> optimizing_context...
> token_reduction: 12%
> status: OPTIMIZED
ANALYTICS FOR BUILDERS /// LOCAL STORAGE ONLY /// NO TRACKING /// OPEN SOURCE ///

SYSTEM CAPABILITIES

📊

TOKEN METRICS

Granular usage tracking across all LLM providers. Know exactly what you're spending.
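Exact counts come from each provider's own tokenizer; as a purely illustrative sketch (not the extension's actual counter, which is not documented here), the common ~4-characters-per-token rule of thumb for English text looks like this:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the widely cited ~4 characters
    per token heuristic for English under GPT-style tokenizers.
    Ballpark only -- real counts vary by model and tokenizer."""
    return max(1, round(len(text) / 4))

prompt = "Summarize the following article in three bullet points."
print(estimate_tokens(prompt))
```

Useful for a quick sanity check before a provider bills you for the real number.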

🎯

EFFICIENCY SCORE

A proprietary algorithm rates prompt density; a lower score indicates a higher signal-to-noise ratio.
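The scoring algorithm itself is proprietary and not published. As a hypothetical stand-in that shows why "lower = denser" can make sense, here is a toy density score: the fraction of filler words in a prompt (every name and threshold below is invented for illustration):

```python
# Illustrative only -- NOT the extension's proprietary algorithm.
# A lower score here means fewer filler words, i.e. more signal.
FILLER = {"please", "kindly", "just", "really", "very", "basically", "actually"}

def density_score(prompt: str) -> float:
    words = prompt.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in FILLER for w in words) / len(words)

print(density_score("Please just summarize this, really."))  # filler-heavy prompt
print(density_score("Summarize this article."))              # denser prompt
```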

🌐

MULTI-PLATFORM

Unified layer for ChatGPT, Claude, Grok, and OpenRouter. Context switching is for humans, not data.

🛡️

LOCAL FIRST

Zero cloud dependency. Data lives in your browser's local storage. We can't see your prompts.

💾

JSON EXPORT

Standardized data export. Pipe your usage stats into your own dashboards or Excel.
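The export schema isn't specified here, so assuming a hypothetical shape (a JSON array of records with `model`, `prompt_tokens`, and `completion_tokens` fields), piping the export into your own aggregation might look like:

```python
import json
from collections import defaultdict

# Hypothetical export shape -- the real schema may differ.
export = json.loads("""
[
  {"model": "claude-3-opus", "prompt_tokens": 420, "completion_tokens": 980},
  {"model": "gpt-4o",        "prompt_tokens": 310, "completion_tokens": 640},
  {"model": "claude-3-opus", "prompt_tokens": 150, "completion_tokens": 300}
]
""")

# Sum total tokens per model.
totals = defaultdict(int)
for rec in export:
    totals[rec["model"]] += rec["prompt_tokens"] + rec["completion_tokens"]

for model, n in sorted(totals.items()):
    print(f"{model}: {n} tokens")
```

The same records load cleanly into a spreadsheet or any dashboarding tool that accepts JSON.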

💎

NO BLOAT

Minimal footprint: no background processes when you aren't chatting, and a 50 KB package size.

INTEGRATION FLOW

01

INJECT EXTENSION

Load the unpacked extension in Chrome, Brave, or Edge via Developer Mode.

02

EXECUTE PROMPTS

Use your LLM of choice. The observer layer runs silently in the DOM.

03

ANALYZE DATA

Open the dashboard to view aggregated stats, trends, and optimization opportunities.
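As a sketch of the kind of aggregation the dashboard performs (the data below is invented for illustration), daily totals and a naive first-to-last-day trend can be computed like this:

```python
from collections import Counter

# Hypothetical usage records: (day, tokens) -- illustrative data only.
records = [
    ("2024-05-01", 1200),
    ("2024-05-01", 800),
    ("2024-05-02", 1500),
    ("2024-05-03", 900),
]

# Aggregate token usage per day.
daily = Counter()
for day, tokens in records:
    daily[day] += tokens

# Naive "trend": change in daily usage from the first to the last day.
days = sorted(daily)
trend = daily[days[-1]] - daily[days[0]]
print(dict(daily), trend)
```

A negative trend would suggest your optimization passes are paying off.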