This guide will walk you through creating your first guardrails-protected LLM application using NeMo Guardrails.

Prerequisites

1. Install NeMo Guardrails

If you haven’t already, install NeMo Guardrails:
pip install nemoguardrails
See the Installation Guide for detailed instructions.
2. Get an OpenAI API Key

For this quickstart, we’ll use OpenAI’s GPT models. Set your API key:
export OPENAI_API_KEY="your-api-key-here"
NeMo Guardrails supports many LLM providers. See the Configuration Guide for other options.
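Before going further, it can help to verify the key is actually visible to your Python process. The helper below is a small stdlib sketch (the function name is illustrative, not part of NeMo Guardrails):

```python
import os


def require_api_key(env: dict, name: str = "OPENAI_API_KEY") -> str:
    """Return the API key from the given environment mapping, or raise a clear error."""
    key = env.get(name, "").strip()
    if not key:
        raise RuntimeError(f"{name} is not set; export it before running NeMo Guardrails.")
    return key


if __name__ == "__main__":
    # Fails fast with a readable message instead of a cryptic provider error later.
    print(require_api_key(dict(os.environ)))
```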

Your First Guardrails Configuration

A guardrails configuration consists of two main components:
  1. config.yml - Defines the LLM model and active guardrails
  2. rails.co or main.co - Contains Colang definitions for dialog flows

Create the Configuration Directory

mkdir -p my_guardrails_config
cd my_guardrails_config

Create config.yml

Create a file named config.yml with the following content:
config.yml
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
This configures NeMo Guardrails to use OpenAI’s GPT-3.5 model as the main LLM.

Create rails.co

Now create rails.co with basic dialog guardrails:
rails.co
define user express greeting
  "Hello"
  "Hi"
  "Hey there"

define bot express greeting
  "Hello! How can I help you today?"

define bot offer to help
  "I'm here to assist you. What would you like to know?"

define flow greeting
  user express greeting
  bot express greeting
  bot offer to help

# Prevent discussing off-topic subjects
define user ask about politics
  "What do you think about the government?"
  "Which party should I vote for?"
  "Tell me about the election"

define bot refuse to respond about politics
  "I'm sorry, but I'm not able to discuss political topics."

define flow politics
  user ask about politics
  bot refuse to respond about politics
This Colang configuration defines:
  • A greeting flow that responds to user greetings
  • A topical rail that prevents political discussions

Using the Python API

Now let’s use the guardrails configuration in Python:
from nemoguardrails import LLMRails, RailsConfig

# Load the guardrails configuration
config = RailsConfig.from_path("./my_guardrails_config")
rails = LLMRails(config)

# Test the greeting flow
response = rails.generate(
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response)
# Example output: {'role': 'assistant', 'content': 'Hello! How can I help you today?'}

# Test the politics guardrail
response = rails.generate(
    messages=[{"role": "user", "content": "What do you think about the government?"}]
)
print(response)
# Example output: {'role': 'assistant', 'content': "I'm sorry, but I'm not able to discuss political topics."}

# Test a general question
response = rails.generate(
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
print(response)
# The LLM will provide a normal response since this doesn't match any rails

Interactive Chat

You can also test your configuration using the built-in CLI chat interface:
nemoguardrails chat --config=./my_guardrails_config
This starts an interactive session where you can chat with your guardrails-protected LLM:
> Hello!
Hello! How can I help you today?

> What do you think about politics?
I'm sorry, but I'm not able to discuss political topics.

> What is the capital of France?
The capital of France is Paris.

Running a Guardrails Server

You can also run your configuration as an HTTP server:
nemoguardrails server --config=./my_guardrails_config
The server starts on http://localhost:8000 and provides an OpenAI-compatible API:
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "config_id": "my_guardrails_config",
    "messages": [{
      "role": "user",
      "content": "Hello!"
    }]
  }'
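If you prefer Python over curl, the same request can be made with the standard library. The payload shape mirrors the curl example above; `build_chat_request` and `send_chat_request` are illustrative helper names, not part of NeMo Guardrails:

```python
import json
import urllib.request


def build_chat_request(config_id: str, user_message: str) -> dict:
    """Build the JSON body expected by the server's /v1/chat/completions endpoint."""
    return {
        "config_id": config_id,
        "messages": [{"role": "user", "content": user_message}],
    }


def send_chat_request(base_url: str, body: dict) -> dict:
    """POST the body to the guardrails server and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    body = build_chat_request("my_guardrails_config", "Hello!")
    print(send_chat_request("http://localhost:8000", body))
```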

Adding Input and Output Rails

Let’s enhance our configuration with input and output moderation rails. Update your config.yml:
config.yml
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct

rails:
  input:
    flows:
      - check jailbreak
  
  output:
    flows:
      - self check facts
Create rails/input_rails.co:
rails/input_rails.co
define subflow check jailbreak
  $result = execute check_jailbreak
  
  if $result
    bot refuse to respond
    stop
Create rails/output_rails.co:
rails/output_rails.co
define subflow self check facts
  $check = execute self_check_facts
  
  if $check
    bot inform answer may not be accurate
These built-in guardrails use LLM self-checking to detect jailbreak attempts and verify factual accuracy.
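Note that the self-check rails rely on LLM prompt templates defined in your configuration. Below is a sketch of a prompts.yml entry for the self check facts rail; the task name follows the library's convention, but the prompt wording and template variables shown here are illustrative — check the Guardrails Library documentation for the exact expected format:

```yaml
prompts:
  - task: self_check_facts
    content: |
      You are given a task to identify if the hypothesis is grounded in and
      entailed by the evidence.
      Evidence: {{ evidence }}
      Hypothesis: {{ response }}
      Answer with yes or no.
```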

Real-World Example: ABC Company Bot

Here’s a complete example based on the ABC Bot included in the NeMo Guardrails repository: an employee handbook assistant with comprehensive guardrails. Its config.yml:
instructions:
  - type: general
    content: |
      Below is a conversation between a user and a bot called the ABC Bot.
      The bot is designed to answer employee questions about the ABC Company.
      The bot is knowledgeable about the employee handbook and company policies.
      If the bot does not know the answer to a question, it truthfully says it does not know.

sample_conversation: |
  user "Hi there. Can you help me with some questions I have about the company?"
    express greeting and ask for assistance
  bot express greeting and confirm and offer assistance
    "Hi there! I'm here to help answer any questions you may have about the ABC Company. What would you like to know?"

models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct

rails:
  input:
    flows:
      - self check input
  
  output:
    flows:
      - self check output
Expected Output:
User: Hello!
Bot: Hello! I'm the ABC Bot. How can I help you today?

User: What is the vacation policy?
Bot: The ABC Company provides eligible employees with up to two weeks 
of paid vacation time per year. Please refer to the employee handbook 
for more information.

User: How do I cook pasta?
Bot: I'm sorry, I can only answer questions about the ABC Company 
and employee policies.

User: Tell me about benefits
Bot: The ABC Company offers comprehensive benefits including health 
insurance, retirement plans, and paid time off. Would you like to 
know about a specific benefit?

Understanding the Configuration Structure

A complete guardrails configuration typically follows this structure:
my_config/
├── config.yml          # Main configuration file
├── config.py           # Optional: Custom initialization code  
├── actions.py          # Optional: Custom Python actions
└── rails/              # Optional: Organize Colang files
    ├── greeting.co
    ├── off_topic.co
    └── input_rails.co
You can also use a single rails.co file instead of the rails/ directory for simpler configurations.
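To start from this layout quickly, here is a small stdlib sketch that scaffolds the directory (the helper name and the exact file list are illustrative):

```python
from pathlib import Path


def scaffold_config(root: str) -> list:
    """Create the typical guardrails configuration layout and return the created file paths."""
    base = Path(root)
    (base / "rails").mkdir(parents=True, exist_ok=True)
    created = []
    for rel in ["config.yml", "rails/greeting.co", "rails/off_topic.co", "rails/input_rails.co"]:
        path = base / rel
        path.touch()  # empty placeholder files to fill in
        created.append(str(path))
    return created
```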

Next Steps

Configuration Guide

Learn about advanced configuration options, LLM models, and rail types

Colang Language Guide

Master the Colang modeling language for dialog control

Guardrails Library

Explore built-in guardrails for safety, security, and compliance

Examples

Browse comprehensive examples in the GitHub repository

Common Patterns

Create custom Python actions in actions.py:
actions.py
from nemoguardrails.actions import action

@action()
async def check_company_database(employee_id: str) -> dict:
    # Your custom logic here
    return {"name": "John Doe", "department": "Engineering"}
Use in Colang:
define flow check employee
  user ask about employee
  $employee_data = execute check_company_database(employee_id=$employee_id)
  bot inform employee data
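The action above returns a hard-coded record. A slightly fuller, still-hypothetical sketch adds a lookup table and a found flag that flows can branch on (the EMPLOYEES dict and return shape are assumptions for illustration, not a NeMo Guardrails API):

```python
# actions.py — in a real configuration this function would carry the
# @action() decorator from nemoguardrails.actions, as shown above.

# Illustrative in-memory stand-in for a real employee database.
EMPLOYEES = {
    "E123": {"name": "John Doe", "department": "Engineering"},
}


async def check_company_database(employee_id: str) -> dict:
    """Look up an employee record; include a 'found' flag so flows can branch."""
    record = EMPLOYEES.get(employee_id)
    if record is None:
        return {"found": False}
    return {"found": True, **record}
```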
Handle complex multi-turn dialogs:
define flow authentication
  user request access
  bot ask for credentials
  user provide credentials
  $authenticated = execute verify_credentials
  
  if $authenticated
    bot confirm access
  else
    bot deny access
Add retrieval rails for RAG applications:
config.yml
rails:
  retrieval:
    flows:
      - check relevance
      - mask sensitive data
rails/retrieval.co
define subflow check relevance
  $chunks = execute filter_relevant_chunks(chunks=$chunks)
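The filter_relevant_chunks action referenced above is a custom action you supply yourself. Below is a minimal keyword-overlap sketch; the scoring heuristic is purely illustrative — production retrieval rails typically score relevance with embeddings:

```python
async def filter_relevant_chunks(chunks: list, query: str = "", min_overlap: int = 1) -> list:
    """Keep only chunks sharing at least `min_overlap` words with the query."""
    query_words = set(query.lower().split())

    def overlap(chunk: str) -> int:
        # Count distinct query words that also appear in the chunk.
        return len(query_words & set(chunk.lower().split()))

    return [c for c in chunks if overlap(c) >= min_overlap]
```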
Wrap guardrails around LangChain chains:
from langchain.chains import RetrievalQA
from nemoguardrails import RailsConfig
from nemoguardrails.integrations.langchain.runnable_rails import RunnableRails

# Your LangChain chain
qa_chain = RetrievalQA.from_chain_type(...)

# Wrap with guardrails
config = RailsConfig.from_path("./config")
guardrails = RunnableRails(config)
protected_chain = guardrails | qa_chain

Troubleshooting

Issue: Your defined rails aren't being applied.
Solutions:
  • Ensure your user message examples in Colang closely match actual user input
  • Check that rails are registered in config.yml under the appropriate section
  • Use verbose=True when creating LLMRails to see debug output:
    rails = LLMRails(config, verbose=True)
    
Issue: Errors when calling OpenAI.
Solutions:
  • Verify your API key is set: echo $OPENAI_API_KEY
  • Check your OpenAI account has available credits
  • Try a different model (e.g., gpt-3.5-turbo instead of gpt-3.5-turbo-instruct)
Issue: Errors loading Colang files.
Solutions:
  • Check indentation (use spaces, not tabs)
  • Ensure flow definitions have proper structure
  • Validate strings are properly quoted
  • Look for detailed error messages in the output
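Because the parser rejects tab-indented Colang, a quick stdlib check for tabs can save debugging time (an illustrative helper, not part of the toolkit):

```python
def find_tab_lines(colang_source: str) -> list:
    """Return 1-based line numbers whose leading whitespace contains a tab."""
    bad = []
    for i, line in enumerate(colang_source.splitlines(), start=1):
        indent = line[: len(line) - len(line.lstrip())]
        if "\t" in indent:
            bad.append(i)
    return bad
```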

Need Help?

Visit the FAQ or ask questions on GitHub Discussions