Create interactive AI avatars and intelligent assistants for immersive VR experiences, education, and research.
Overview
The AI Agent example in SightLab (the VR Experiment Generator for Vizard) lets you place intelligent, conversational avatars directly into your VR or XR scenes. These agents can use cloud-based large language models (such as ChatGPT, Claude, or Gemini) or run offline models via Ollama, so you can deploy them without an internet connection or without sharing data externally.
They can act as:
Conversational avatars or digital confederates for social research
Educational assistants in virtual classrooms
Interactive characters in multi-agent simulations or training scenarios
Step 1 — Install the Required Libraries
💡 Use the Vizard Package Manager for installation.
Core libraries
openai
anthropic
google-generativeai
elevenlabs
SpeechRecognition
sounddevice
faster_whisper
numpy
pyttsx3
ollama
Optional:
Install FFmpeg (for ElevenLabs TTS)
Install Ollama for local/offline models: ollama.com
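If you want to sanity-check a local Ollama model before wiring it into a scene, a minimal Python sketch with the ollama package looks like the following (the model name llama3 and the prompt are only examples; pull the model first with "ollama pull llama3" in a terminal):

import ollama

# Ask a locally hosted model for a reply; no API key or internet connection needed
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Introduce yourself as a VR museum guide."}],
)
print(response["message"]["content"])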
Step 2 — Set Up Your API Keys
You’ll need API keys if using online models.
Type CMD in the Windows search bar to open a command prompt, then enter the command for whichever API key(s) you are using:
setx OPENAI_API_KEY your-openai-key
setx ANTHROPIC_API_KEY your-anthropic-key
setx GEMINI_API_KEY your-gemini-key
setx ELEVENLABS_API_KEY your-elevenlabs-key
Then restart Vizard. Offline Ollama models require no API key.
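To confirm the keys are visible to Python after restarting, you can run a quick check like the one below (this snippet is only a convenience, not part of SightLab):

import os

# Report which API keys the current environment actually exposes
for key in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY", "ELEVENLABS_API_KEY"):
    print(key, "is set" if os.environ.get(key) else "is MISSING")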
Step 3 — Run the Example and Talk to Your Agent
Run the AI Agent example, or the VR Presentation Tool for educational, multi-user AI agents.
Press and hold “C” or your controller grip to talk (“2” key on the VR Presentation Tool). Release to let the agent respond.
Optional:
Press “h” (Gemini models) to capture a screenshot for vision-based reasoning.
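Under the hood, the talk-and-release flow is speech-to-text followed by a language-model call. The sketch below is a simplified, hypothetical version of that loop using the SpeechRecognition and openai packages (SightLab's own example handles recording, transcription, and avatar responses for you; the gpt-4o-mini model name is an assumption):

import speech_recognition as sr
from openai import OpenAI

recognizer = sr.Recognizer()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_agent():
    # Record one utterance from the default microphone
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source, duration=0.5)
        audio = recognizer.listen(source)
    # Transcribe the recording, then send the text to the language model
    text = recognizer.recognize_google(audio)
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": text}],
    )
    print("Agent:", reply.choices[0].message.content)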
Step 4 — Add an AI Avatar to a Scene
Place your avatar model (Avaturn, ReadyPlayerMe, Mixamo, etc.) into: Resources/avatars
See this link for avatar library options (click the “Avatar Libraries” option), or contact sales@worldviz.com for avatar suggestions.
Copy one of the existing config files, e.g. configs/AI_Agent_Config.py.
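As a rough illustration of what happens when the scene loads, a bare-bones Vizard snippet for bringing in an avatar file from Resources/avatars could look like this (the filename my_agent.glb is hypothetical; in practice the SightLab config handles model loading, positioning, and animation for you):

import viz

viz.go()
# Load the avatar model dropped into Resources/avatars (GLB/GLTF or Cal3D .cfg)
agent = viz.addChild('Resources/avatars/my_agent.glb')
agent.setPosition([0, 0, 2])   # place the agent two meters in front of the viewer
agent.setEuler([180, 0, 0])    # turn it around to face the user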