NEXXUS Documentation

Welcome to NEXXUS, the AI Trust & Safety Evaluation Platform. This documentation will help you integrate trust and safety evaluation into your AI applications.

What is NEXXUS?

NEXXUS provides comprehensive AI trust and safety evaluation, running 400 NEXUS evaluators against the Global AI Trust & Safety Framework (GAiTSF). Our platform helps you:

Evaluate Safety

Detect harmful content, bias, and safety risks in AI-generated outputs.

Ensure Compliance

Meet regulatory requirements for AI systems in healthcare, finance, and more.

Track Progress

Monitor trust scores over time and earn certification tiers.

Integrate Easily

Use our REST API, SDKs, or CLI to integrate with your workflow.

Quick Start

Get started with NEXXUS in minutes:

1. Get your API key

Sign up at trustscan.io and generate an API key from your dashboard.

2. Install the SDK

Python:
pip install trustscan

JavaScript:
npm install @trustscan/sdk
3. Run your first evaluation

Python:
from trustscan import NEXXUS

client = NEXXUS(api_key="ts_live_xxx")

result = client.evaluate(
    content="AI response to evaluate",
    mode="STANDARD"
)

print(f"Score: {result.trust_score}")

JavaScript:
import { NEXXUSClient } from '@trustscan/sdk';

const client = new NEXXUSClient({
  apiKey: 'ts_live_xxx'
});

const result = await client.evaluate({
  content: 'AI response to evaluate',
  mode: 'STANDARD'
});

console.log(`Score: ${result.trustScore}`);
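
In practice you may prefer not to hard-code the API key. A minimal sketch, assuming you store the key in an environment variable of your own choosing (the variable name TRUSTSCAN_API_KEY is an arbitrary example, not something the SDK reads automatically):

Python:
import os

from trustscan import NEXXUS

# Pass the key explicitly, as in the Quick Start; only the source of
# the key (an environment variable) differs here.
client = NEXXUS(api_key=os.environ["TRUSTSCAN_API_KEY"])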

Core Concepts

Trust Score

A numerical score from 0-100 representing the overall trustworthiness of evaluated content. Higher scores indicate better compliance with safety and quality standards.
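
As a minimal sketch using the Python SDK from the Quick Start, you might gate a publishing step on the trust score. The threshold of 80 below is an illustrative choice, not a NEXXUS default:

Python:
from trustscan import NEXXUS

client = NEXXUS(api_key="ts_live_xxx")

result = client.evaluate(
    content="AI response to evaluate",
    mode="STANDARD"
)

# trust_score is a 0-100 number; the threshold is arbitrary.
MIN_ACCEPTABLE_SCORE = 80

if result.trust_score >= MIN_ACCEPTABLE_SCORE:
    print("Content accepted")
else:
    print(f"Content flagged for review (score: {result.trust_score})")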

Verdicts

Each evaluation produces a verdict:

Verdict      Description
PASS         Content meets all requirements
WARN         Minor issues detected, review recommended
FAIL         Content does not meet requirements
HARD_FAIL    Critical violation, content must not be used
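
A sketch of handling each verdict in application code, assuming the evaluation result exposes the verdict as a string field named verdict (the field name is an assumption; consult the API reference for the exact response shape):

Python:
from trustscan import NEXXUS

client = NEXXUS(api_key="ts_live_xxx")
result = client.evaluate(content="AI response to evaluate", mode="STANDARD")

# "result.verdict" as a string attribute is an assumption for illustration.
verdict = result.verdict

if verdict == "HARD_FAIL":
    raise RuntimeError("Critical violation: content must not be used")
elif verdict == "FAIL":
    print("Content does not meet requirements; block it")
elif verdict == "WARN":
    print("Minor issues detected; route to human review")
else:  # PASS
    print("Content meets all requirements")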

Certification Tiers

Based on evaluation results, systems can achieve certification:

NONE → BRONZE → SILVER → GOLD → PLATINUM
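
For example, if your release pipeline requires at least SILVER certification, a simple ordering check over the tier names listed above could look like this (how the current tier is retrieved from the API is not shown; here it is passed in as a string):

Python:
# Tier names from lowest to highest, as listed above.
TIER_ORDER = ["NONE", "BRONZE", "SILVER", "GOLD", "PLATINUM"]

def meets_minimum_tier(current_tier: str, required_tier: str) -> bool:
    """Return True if current_tier is at or above required_tier."""
    return TIER_ORDER.index(current_tier) >= TIER_ORDER.index(required_tier)

# Example: require at least SILVER before promoting a system.
print(meets_minimum_tier("GOLD", "SILVER"))    # True
print(meets_minimum_tier("BRONZE", "SILVER"))  # False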

NEXUS Evaluators

400 specialized evaluators, organized into 24 categories, assess different aspects of AI trustworthiness, including safety, fairness, privacy, and transparency. 58 evaluators are designated as "hard-fail" evaluators for critical safety requirements.
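
If the evaluation result exposes per-evaluator details, you could surface hard-fail hits for triage. Note that evaluator_results, name, category, hard_fail, and verdict below are hypothetical field names used only for illustration; check the API reference for the actual response schema:

Python:
from trustscan import NEXXUS

client = NEXXUS(api_key="ts_live_xxx")
result = client.evaluate(content="AI response to evaluate", mode="STANDARD")

# All per-evaluator field names below are hypothetical.
hard_fail_hits = [
    ev for ev in result.evaluator_results
    if ev.hard_fail and ev.verdict != "PASS"
]

for ev in hard_fail_hits:
    print(f"{ev.category}/{ev.name}: {ev.verdict}")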

Need Help?

We are here to help you integrate NEXXUS into your applications.