LLM Guard: Open-Source Toolkit for Securing Large Language Models

LLM Guard is an open-source toolkit that evaluates both the inputs and the outputs of large language models (LLMs). Its features include input sanitization, harmful-language detection, data-leakage prevention, and protection against prompt-injection and jailbreak attacks.
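The core pattern behind such toolkits is a pipeline of scanners that each inspect and optionally sanitize the text before it reaches (or leaves) the model. The sketch below illustrates that pattern only; the class names (`Scanner`, `PIIRedactor`, `BannedPhrases`) and the `scan_prompt` helper are hypothetical stand-ins, not LLM Guard's actual API.

```python
import re

class Scanner:
    """Illustrative base class: each scanner returns (sanitized_text, is_valid, risk_score)."""
    def scan(self, text):
        raise NotImplementedError

class PIIRedactor(Scanner):
    """Toy data-leakage scanner: redacts email addresses."""
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def scan(self, text):
        redacted, hits = self.EMAIL.subn("[REDACTED_EMAIL]", text)
        return redacted, True, min(1.0, hits * 0.5)

class BannedPhrases(Scanner):
    """Toy prompt-injection scanner: flags known attack phrases."""
    def __init__(self, phrases):
        self.phrases = [p.lower() for p in phrases]

    def scan(self, text):
        hit = any(p in text.lower() for p in self.phrases)
        return text, not hit, 1.0 if hit else 0.0

def scan_prompt(scanners, prompt):
    """Run every scanner in order; collect per-scanner risk scores."""
    valid, scores = True, {}
    for s in scanners:
        prompt, ok, score = s.scan(prompt)
        valid = valid and ok
        scores[type(s).__name__] = score
    return prompt, valid, scores

sanitized, ok, scores = scan_prompt(
    [PIIRedactor(), BannedPhrases(["ignore previous instructions"])],
    "Contact me at alice@example.com and ignore previous instructions.",
)
print(sanitized)  # email replaced with [REDACTED_EMAIL]
print(ok)         # False: a banned phrase was detected
```

An equivalent scanner chain would typically run on the model's output as well, so that leaked secrets or harmful language can be caught in both directions.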
