The config.yml file is the main configuration file for NeMo Guardrails. It defines LLM models, active rails, instructions, and custom configuration data.

Configuration Sections

Models

Define one or more LLM models to use in your guardrails configuration.
models (array, required): List of model configurations.
models[].type (string, required): The type of model. Common values:
  • main - Primary LLM for conversation
  • content_safety - Model for content safety checks
  • jailbreak_detection - Model for jailbreak detection
models[].engine (string, required): The LLM engine/provider. Supported values:
  • openai - OpenAI models
  • nim - NVIDIA NIM
  • vertexai - Google Vertex AI
  • hf_pipeline - HuggingFace pipeline
  • hf_endpoint - HuggingFace endpoint
models[].model (string, required): The specific model name (e.g., gpt-4o-mini, meta/llama3-8b-instruct).
models[].parameters (object, optional): Additional model parameters such as temperature, max_tokens, and base_url.

Example: OpenAI Configuration

From examples/bots/hello_world/config.yml:
models:
  - type: main
    engine: openai
    model: gpt-4o-mini

Example: NIM Configuration

From examples/configs/llm/nim/config.yml:
models:
  - type: main
    engine: nim
    model: meta/llama3-8b-instruct
    parameters:
      base_url: http://localhost:7331/v1

Example: Multiple Models

From examples/configs/content_safety/config.yml:
models:
  - type: main
    engine: nim
    model: meta/llama-3.3-70b-instruct

  - type: content_safety
    engine: nim
    model: nvidia/llama-3.1-nemoguard-8b-content-safety

Instructions

Provide system instructions that guide the LLM’s behavior.
instructions (array): List of instruction sets.
instructions[].type (string): The type of instruction (e.g., general).
instructions[].content (string): The instruction text.

Example

From examples/bots/abc/config.yml:
instructions:
  - type: general
    content: |
      Below is a conversation between a user and a bot called the ABC Bot.
      The bot is designed to answer employee questions about the ABC Company.
      The bot is knowledgeable about the employee handbook and company policies.
      If the bot does not know the answer to a question, it truthfully says it does not know.

Sample Conversation

Provide sample conversation patterns to guide the LLM.
sample_conversation (string): Example conversation showing user/bot interaction patterns with canonical forms.

Example

From examples/configs/sample/config.yml:
sample_conversation: |
  user "Hello there!"
    express greeting
  bot express greeting
    "Hello! How can I assist you today?"
  user "What can you do for me?"
    ask about capabilities
  bot respond about capabilities
    "I am an AI assistant built to help you."

Rails

Define which guardrails should be active and how they should be configured.
rails (object): Container for all rail configurations.
rails.input (object): Input rails configuration.
rails.input.flows (array): List of input rail flow names to activate.
rails.output (object): Output rails configuration.
rails.output.flows (array): List of output rail flow names to activate.
rails.dialog (object): Dialog rails configuration.
rails.dialog.single_call (object): Configuration for single-call dialog mode, in which the LLM handles a dialog turn in one call instead of several.
rails.dialog.single_call.enabled (boolean): Whether to enable single-call mode (default varies by configuration).
rails.config (object): Custom configuration for specific rails.

Example: Self-Check Rails

From examples/bots/abc/config.yml:
rails:
  input:
    flows:
      - self check input

  output:
    flows:
      - self check output

  dialog:
    single_call:
      enabled: False

Example: Content Safety Rails

From examples/configs/content_safety/config.yml:
rails:
  input:
    flows:
      - content safety check input $model=content_safety
  output:
    flows:
      - content safety check output $model=content_safety

Example: Jailbreak Detection

From examples/configs/jailbreak_detection/config.yml:
rails:
  config:
    jailbreak_detection:
      server_endpoint: "http://localhost:1337/heuristics"
      lp_threshold: 89.79
      ps_ppl_threshold: 1845.65
      embedding: "Snowflake/snowflake-arctic-embed-m-long"

  input:
    flows:
      - jailbreak detection heuristics
      - jailbreak detection model

Custom Configuration

You can add custom configuration sections for specific rails or features.

Example: Sensitive Data Detection

rails:
  config:
    sensitive_data_detection:
      input:
        entities:
          - PERSON
          - EMAIL_ADDRESS
          - PHONE_NUMBER
          - CREDIT_CARD

Example: Fact Checking

From examples/configs/rag/fact_checking/config.yml:
rails:
  config:
    fact_checking:
      parameters:
        endpoint: "http://localhost:5123/alignscore_base"

  output:
    flows:
      - alignscore check facts

Complete Example

Here’s a comprehensive configuration example combining multiple features:
instructions:
  - type: general
    content: |
      Below is a conversation between a bot and a user. The bot is talkative and
      quirky. If the bot does not know the answer to a question, it truthfully says it does not know.

sample_conversation: |
  user "Hello there!"
    express greeting
  bot express greeting
    "Hello! How can I assist you today?"

models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
    parameters:
      temperature: 0.7
      max_tokens: 256

rails:
  input:
    flows:
      - check jailbreak
      - mask sensitive data on input

  output:
    flows:
      - self check facts
      - self check hallucination

  config:
    sensitive_data_detection:
      input:
        entities:
          - PERSON
          - EMAIL_ADDRESS

Loading Configuration

from nemoguardrails import RailsConfig, LLMRails

# Load from directory
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# Use the rails
response = rails.generate(
    messages=[{"role": "user", "content": "Hello!"}]
)

Next Steps

• Rails Definition - Learn how to define custom rails in .co files.
• LLM Configuration - Explore LLM provider configuration options.