PRISM powered by VelarIQ™

Developer Documentation

Everything you need to integrate PRISM into your AI applications.

Quick Example

import { Prism } from '@prism/sdk';

const prism = new Prism({
  apiKey: process.env.PRISM_API_KEY
});

// Validate an AI response
const result = await prism.validate({
  input: "What is the capital of France?",
  output: "The capital of France is Paris.",
  model: "gpt-4"
});

console.log(result.isValid); // true
console.log(result.confidence); // 0.99

Install with: npm install @prism/sdk
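
When a response fails validation, you generally want to fall back or retry rather than show it to users. Here is a minimal sketch of that pattern, reusing the prism client created above and only the isValid and confidence fields from the Quick Example; the 0.9 threshold and the fallback message are illustrative choices, not SDK defaults.

// Gate an AI answer on validation before returning it to a user.
async function safeAnswer(question, answer) {
  const result = await prism.validate({
    input: question,
    output: answer,
    model: "gpt-4"
  });

  // The threshold below is an application choice, not an SDK default.
  if (result.isValid && result.confidence >= 0.9) {
    return answer;
  }

  // Fall back instead of returning an unverified answer.
  return "I couldn't verify that answer. Please try rephrasing the question.";
}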

Frequently Asked Questions

General

What is PRISM?
PRISM is an AI validation layer that eliminates hallucinations from LLM outputs through hierarchical processing. It sits between your application and any AI provider, validating every response before it reaches your users.

How does PRISM detect and correct hallucinations?
PRISM uses a multi-tier validation architecture that cross-references AI outputs against verified data sources and applies logical consistency checks. Our hierarchical processing ensures that false or fabricated information is caught and corrected before delivery.

Does PRISM work with any AI provider?
Yes, 100%. PRISM works with any LLM or AI provider, including OpenAI, Anthropic, Google, Azure, Cohere, and others. You're never locked into a single vendor, and you can switch providers at any time without changing your PRISM integration.
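
As a rough sketch of where PRISM sits in that flow, the example below generates a completion with the openai Node SDK and then validates it with the same prism.validate call from the Quick Example. The OpenAI usage is standard; swapping in another provider only changes step 1.

import OpenAI from 'openai';
import { Prism } from '@prism/sdk';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const prism = new Prism({ apiKey: process.env.PRISM_API_KEY });

const question = "What is the capital of France?";

// 1. Call your provider as usual (any provider works the same way).
const completion = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: question }]
});
const answer = completion.choices[0].message.content;

// 2. Validate the response with PRISM before it reaches your users.
const result = await prism.validate({
  input: question,
  output: answer,
  model: "gpt-4"
});

console.log(result.isValid, result.confidence);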

Pricing

How does pricing work?
We charge 25% of your verified cost savings. If PRISM doesn't save you money, you don't pay. For example, if PRISM saves you a verified $4,000 in a given month, your fee for that month is $1,000. There are no upfront costs, no minimum commitments, and no hidden fees.

What counts as savings?
Savings include reduced API costs from eliminated retry loops, fewer tokens wasted on hallucinated outputs, and decreased operational costs from manual verification. We provide a transparent dashboard showing exactly how your savings are calculated.

Is there a minimum spend requirement?
We recommend PRISM for organizations spending at least $10,000/month on AI infrastructure to see meaningful ROI. However, we're happy to discuss your specific use case regardless of current spend.

Technical

How long does integration take?
PRISM integrates via a simple API wrapper, and most teams are up and running in under an hour. We provide SDKs for Python, Node.js, and Go, plus a REST API for any other language.
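
For languages without an SDK, the same validation can go over the REST API. The sketch below shows the general shape of such a call with fetch; the endpoint URL and Authorization header are placeholders, so check the API reference for the real values.

// Placeholder endpoint and auth header, for illustration only.
const response = await fetch("https://<your-prism-endpoint>/v1/validate", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": `Bearer ${process.env.PRISM_API_KEY}`
  },
  body: JSON.stringify({
    input: "What is the capital of France?",
    output: "The capital of France is Paris.",
    model: "gpt-4"
  })
});

const result = await response.json();
console.log(result.isValid, result.confidence);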

How much latency does PRISM add?
PRISM adds minimal latency, typically under 50ms per request. Our preprocessing layer runs in parallel with your LLM calls, so validation happens without blocking your response times.

Can PRISM handle enterprise-scale traffic?
Yes. PRISM is built for enterprise scale, with capacity for 95K+ requests per second. We process over 74.3 billion queries monthly across our customer base.

Is my data kept private?
Absolutely. PRISM processes data in-memory and never stores your prompts or responses. We offer VPC deployment options for enterprises requiring additional data residency controls.

Support

What support do you offer?
All customers receive priority support with dedicated Slack channels, email support, and access to our technical team. Enterprise customers get a dedicated solutions architect.

Do you offer an SLA?
Yes. We guarantee 99.99% uptime with our standard SLA. Custom SLA terms are available for enterprise customers.

Still Have Questions?

Our team is ready to help you understand how PRISM can work for your specific use case.