How to Use AI Agents in SightLab VR

October 11, 2025

Create interactive AI avatars and intelligent assistants for immersive VR experiences, education, and research.




Overview

SightLab (the VR Experiment Generator for Vizard) includes an AI Agent example that lets you place intelligent, conversational avatars directly into your VR or XR scenes.
These agents can use cloud-based large language models (such as ChatGPT, Claude, or Gemini) or offline models via Ollama, so you can deploy them even without internet access or without sending data to external services.

They can act as:

  • Conversational avatars or digital confederates for social research
  • Educational assistants in virtual classrooms
  • Interactive characters in multi-agent simulations or training scenarios



Step 1 — Install the Required Libraries

💡 Use the Vizard Package Manager for installation.

Core libraries:

  • openai
  • anthropic
  • google-generativeai
  • elevenlabs
  • SpeechRecognition
  • sounddevice
  • faster_whisper
  • numpy
  • pyttsx3
  • ollama
Optional:

  • Install FFmpeg (for ElevenLabs TTS)

  • Install Ollama for local/offline models: ollama.com




Step 2 — Set Up Your API Keys

You’ll need API keys if using online models.

Type CMD into the Windows search bar to open a command prompt, then enter the command for whichever API key(s) you are using:

setx OPENAI_API_KEY your-openai-key
setx ANTHROPIC_API_KEY your-anthropic-key
setx GEMINI_API_KEY your-gemini-key
setx ELEVENLABS_API_KEY your-elevenlabs-key

Then restart Vizard.
Offline Ollama models require no API key.
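After restarting Vizard, you can verify from a script that the keys are visible to the Python process. This is a minimal sketch, not part of SightLab itself:

```python
import os

def check_api_keys(keys=("OPENAI_API_KEY", "ANTHROPIC_API_KEY",
                         "GEMINI_API_KEY", "ELEVENLABS_API_KEY")):
    """Report which of the expected environment variables are set."""
    return {k: bool(os.environ.get(k)) for k in keys}

# Example: list any keys that setx has not propagated yet
missing = [k for k, ok in check_api_keys().items() if not ok]
```

If a key shows as missing, make sure you restarted Vizard after running setx, since setx only affects newly started processes.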




Step 3 — Run the AI Agent

You can start with:

  • AI_Agent.py (single agent)

  • AI_Agent_GUI.py (with a simple GUI)

  • multi_agent_interaction.py (multiple interacting agents)

  • VR Presentation Tool for Educational Multi-User AI Agents

Press and hold “C” or your controller grip to talk (the “2” key in the VR Presentation Tool), then release to let the agent respond.

Optional:

  • Press “h” (Gemini models) to capture a screenshot for vision-based reasoning.
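Under the hood, the press-and-hold interaction follows a simple record-on-press, respond-on-release pattern. The PushToTalk class below is an illustrative sketch, not SightLab's actual implementation:

```python
# Minimal push-to-talk state machine (hypothetical; SightLab wires
# the real version to the "C" key / controller grip for you).
class PushToTalk:
    def __init__(self):
        self.recording = False
        self.chunks = []

    def press(self):
        """Key or grip pressed: start capturing audio."""
        self.recording = True
        self.chunks = []

    def feed(self, chunk):
        """Audio frames arrive while the key is held."""
        if self.recording:
            self.chunks.append(chunk)

    def release(self):
        """Key released: hand the captured audio to the agent."""
        self.recording = False
        return b"".join(self.chunks)
```

The captured buffer would then go to speech recognition (e.g. faster_whisper) before being sent to the language model.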




Step 4 — Add an AI Avatar to a Scene

  1. Place your avatar model (Avaturn, ReadyPlayerMe, Mixamo, etc.) into:
    Resources/avatars
    See this link for some avatar library options (click on the “Avatar Libraries” option) or contact sales@worldviz.com for avatar suggestions.

  2. Copy one of the existing config files from:
    configs/AI_Agent_Config.py

  3. Rename and modify:

    • Path to your avatar model

    • Voice type (OPENAI, ELEVENLABS, or PYTTSX3)

    • Talking and idle animations

    • Avatar position and rotation

Example:

AVATAR_POSITION = [0, 0, 2]
AVATAR_EULER = [180, 0, 0]
TALK_ANIMATION = 1
IDLE_ANIMATION = 0
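Putting those fields together, a renamed config might look like the sketch below. AVATAR_MODEL and VOICE_TYPE are illustrative field names; check the copied AI_Agent_Config.py for the exact names SightLab expects:

```python
# Hypothetical My_Avatar_Config.py — field names beyond those shown
# above are assumptions, not SightLab's documented API.
AVATAR_MODEL = 'Resources/avatars/my_avatar.glb'  # path to your model
VOICE_TYPE = 'ELEVENLABS'       # OPENAI, ELEVENLABS, or PYTTSX3
AVATAR_POSITION = [0, 0, 2]     # meters, world coordinates
AVATAR_EULER = [180, 0, 0]      # rotated to face the user
TALK_ANIMATION = 1              # animation index while speaking
IDLE_ANIMATION = 0              # animation index while listening
```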

  4. Add to your experiment script:

from configs.My_Avatar_Config import *
import AI_Agent_Avatar

avatar = AI_Agent_Avatar.avatar
sightlab.addSceneObject('AI_Avatar', avatar, avatar=True)




Step 5 — Placing the AI Agent Avatar in a Scene Using Inspector

To set the avatar's location in your scene:

  1. Open the Inspector from the Tools menu.

  2. Open your specific environment scene.

  3. Add an avatarStandin object from sightlab_resources (C:\Program Files\WorldViz\Sightlab2\sightlab_resources\objects) using File → Add.

  4. Adjust its position and orientation visually.

  5. Save your .osgb model.




Step 6 — Multi-Agent Interactions

You can run multiple AI agents in the same scene using the multi_agent_interaction.py example.

from AI_Agent import AIAgent

agent1 = AIAgent(config_path='configs/Agent1_Config.py', name='Tutor')
agent2 = AIAgent(config_path='configs/Agent2_Config.py', name='Student')

Agents can converse with each other or respond to users collaboratively, which makes them ideal for:

  • Multi-character roleplays

  • Training simulations

  • Group educational scenarios
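The back-and-forth between agents can be thought of as a simple turn-taking loop. The sketch below assumes each agent exposes a respond() method and a name attribute, which may not match the actual AIAgent interface:

```python
# Illustrative turn-taking loop between two agents (hypothetical
# respond() interface, not SightLab's documented API).
def converse(agent_a, agent_b, opening, turns=4):
    """Alternate turns, seeding each reply with the previous message."""
    transcript = []
    message = opening
    speakers = [agent_a, agent_b]
    for i in range(turns):
        speaker = speakers[i % 2]
        message = speaker.respond(message)
        transcript.append((speaker.name, message))
    return transcript
```

A loop like this also gives you a natural place to log the transcript alongside trial data.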




Step 7 — Integrating with the VR Presentation Tool

In Multi-User SightLab, AI agents can serve as:

  • Virtual lecturers or assistants that present material

  • Guides that respond to students’ questions

  • Confederates in behavioral or social experiments

You can also add a prompt to any slide by creating a new text file, adding it to the AI Prompts library, and dragging it into your slide/scene.

Combine with the VR Presentation Tool to display media, slides, or 3D objects while the AI agent narrates or answers in real time.




Step 8 — Tips and Troubleshooting

Issue                  Solution
Avatar doesn't speak   Check TTS settings or restart Vizard
Microphone conflict    Verify correct audio input device
Overexposed lighting   Disable lighting in avatar config



Step 9 — Expanding Capabilities

  • Add environment awareness using Gemini Vision or screenshots as prompts.

  • Integrate rating scales, proximity triggers, or gaze-based actions for adaptive AI behavior.

  • Store all conversation transcripts automatically with trial data.
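For example, a proximity trigger can be as simple as a distance check between the user's viewpoint and the avatar. The 1.5 m radius here is an arbitrary illustration:

```python
import math

def within_range(user_pos, avatar_pos, radius=1.5):
    """True when the user is within `radius` meters of the avatar,
    e.g. to start a greeting or enable the push-to-talk key."""
    dx, dy, dz = (u - a for u, a in zip(user_pos, avatar_pos))
    return math.hypot(dx, dy, dz) <= radius
```

In a SightLab script you would feed this with the tracked viewpoint position each frame and fire the agent's greeting on the transition from out-of-range to in-range.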



