HiveTraceRed Documentation¶
HiveTraceRed is a comprehensive security framework for testing Large Language Model (LLM) vulnerabilities through systematic attack methodologies and evaluation pipelines.
This framework is designed for security researchers, AI safety engineers, and red teamers who need to assess the robustness of LLM systems against adversarial attacks.
Key Features¶
80+ Attack Types: Comprehensive library across 10 attack categories including roleplay, persuasion, token smuggling, context switching, and more
Multiple LLM Providers: Built-in support for OpenAI, GigaChat, YandexGPT, Google Gemini, OpenRouter, and extensible architecture for custom providers
Advanced Evaluation: WildGuard evaluators and systematic safety assessment tools
Async Pipeline: Efficient streaming architecture optimized for large-scale testing
Multi-Language Support: Testing capabilities across multiple languages including Russian
Modular Architecture: Easy to extend with custom attacks, models, and evaluators
Quick Links¶
Installation - Get started quickly
Quickstart - API-based Testing - Your first test (cloud APIs)
Quickstart - On-Premise Testing - Your first test (on-premise)
Attack Types Reference - Explore attack types
Attacks API - API reference
Table of Contents¶
Getting Started
User Guide
Examples
Attacks Reference