How to Generate Real-Time VR Content with AI

June 18, 2025


SightLab VR’s new AI 3D Model Spawner demo lets researchers and creators summon fully-textured, grab-ready 3D objects on the fly simply by typing or speaking a prompt—no modeling skills required. By piping those prompts through generative-AI services like Meshy or MasterpieceX, the script spins up geometry in under four minutes, drops it into a chosen VR environment, and immediately ties it into SightLab’s built-in gaze-tracking and grab logging. The result is an on-demand sandbox for rapid stimulus generation, UX testing, classroom demos, or game-design experiments without ever leaving VR. Here is how you can use it to generate 3D models in real time. 

Located in ExampleScripts/AI_3D_Model_Spawner


🧩 What It Does

AI 3D Model Spawner allows participants or researchers to spawn objects into a virtual scene on demand, simply by typing text prompts or using speech recognition to speak model requests (e.g. “a glowing mushroom”). The models are created via online APIs, refined, and automatically placed in the VR environment.

  • 🔤 Text → 3D model
  • 🧠 Generative AI (Meshy and MasterpieceX currently supported, others will be added)
  • 👁️ Integrated gaze tracking and grabbing via SightLab
  • 🌗 Toggle lighting with B key
  • 🔁 Generate new models any time using N key

⚙️ Setup Instructions

1. 📦 Install Required Python Libraries

pip install mpx-genai-sdk pillow requests

SightLab and Vizard dependencies must already be installed and configured.


2. 🔑 Get an API Key

🔹 Meshy

  • Visit: https://www.meshy.ai/
  • Log in and go to: https://www.meshy.ai/api
  • Copy your API key
  • Requires a paid subscription (minimum $20/month)

🔹 MasterpieceX

  • Visit: https://www.masterpiecex.com/
  • Log in and copy your SDK bearer token (used as MPX_SDK_BEARER_TOKEN in the next step)

3. 📁 Set API Keys as Environment Variables

In Windows Search, type "cmd" to open Command Prompt, then enter the following:

🔹 For Meshy:

setx MESHY_API_KEY "your-meshy-api-key"

🔹 For MasterpieceX:

setx MPX_SDK_BEARER_TOKEN "your-mpx-bearer-token"

⚠️ Important: Restart Vizard (or your terminal) after setting these.
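
Once Vizard has been restarted, you can confirm the keys are actually visible to Python. This is a minimal illustrative check (the `get_api_key` helper is our own, not part of SightLab; the variable names come from the `setx` commands above):

```python
import os

def get_api_key(name: str) -> str:
    """Return the named API key from the environment, or raise a clear error."""
    key = os.environ.get(name, "").strip()
    if not key:
        raise RuntimeError(
            f"{name} is not set. Run setx {name} \"...\" and restart Vizard/terminal."
        )
    return key

# Report whichever service keys are present
for var in ("MESHY_API_KEY", "MPX_SDK_BEARER_TOKEN"):
    status = "set" if os.environ.get(var) else "MISSING"
    print(f"{var}: {status}")
```

A common pitfall: `setx` only affects *new* processes, so a terminal or Vizard instance that was already open will still report the key as missing.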


4. How to Run

  1. Run AI_3D_Model_Spawner.py
  2. Choose your model generation library (MasterpieceX or Meshy currently) and hardware
  3. Press N to enter a text prompt (e.g., a red futuristic drone)
  4. Press and hold C or the right-hand grip button to speak a command; release to send it
  5. A preview model is generated via the API (geometry only); a placeholder sphere appears until the preview loads
  6. Once loaded, the model is automatically refined with textures and PBR materials
  7. The refined model replaces the preview in-scene for Meshy (MasterpieceX skips the preview stage)
  8. Press B to toggle lighting on/off to see what looks better
  9. Press N, C, or the grip button again to generate another model
  10. Grab the new model using the trigger buttons or left mouse button
  11. Press trigger or spacebar to end the trial
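
The placeholder-then-replace flow in steps 5–7 can be sketched as follows. This is a pure-Python stand-in for illustration only; the actual script swaps Vizard scene nodes rather than strings, and the class and file names here are hypothetical:

```python
class SpawnedObject:
    """Track which asset is currently displayed for one prompt."""

    def __init__(self, prompt: str):
        self.prompt = prompt
        self.current = "placeholder_sphere"  # shown immediately after the prompt is sent

    def load_preview(self, path: str):
        # Meshy only: geometry-only preview replaces the sphere
        self.current = path

    def load_refined(self, path: str):
        # Refined model (textures + PBR) replaces whatever is currently shown
        self.current = path

obj = SpawnedObject("a red futuristic drone")
obj.load_preview("drone_preview.obj")
obj.load_refined("drone_refined.glb")
```

For MasterpieceX the `load_preview` step is simply skipped: the placeholder sphere stays visible until the refined model arrives.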

Estimated time to model completion:

  • MasterpieceX: 1–2 minutes
  • Meshy: 2–4 minutes
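
Because generation takes minutes on both services, the script has to poll for completion rather than block on a single call. A hedged sketch of that pattern (the `check_status` callback and the status strings are placeholders, not the real Meshy or MasterpieceX response fields):

```python
import time

def wait_for_model(check_status, timeout_s=300, poll_interval_s=5):
    """Poll check_status() until it reports 'done', with a hard timeout.

    check_status should return one of 'pending', 'done', or 'failed'.
    Returns True on success; raises on failure or timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = check_status()
        if status == "done":
            return True
        if status == "failed":
            raise RuntimeError("Model generation failed")
        time.sleep(poll_interval_s)
    raise TimeoutError(f"Model not ready after {timeout_s}s")
```

In the actual demo this waiting happens in the background, which is why you can keep looking around the scene (and why the placeholder sphere exists) while the model is being built.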

🛠️ Global Config Options

# ===== Passthrough / AR Settings =====
USE_PASSTHROUGH = False

# ===== Environment Settings =====
ENVIRONMENT_MODEL = 'sightlab_resources/environments/RockyCavern.osgb'

# ===== GUI Options =====
USE_GUI = False

# ===== Speech Recognition =====
USE_SPEECH_RECOGNITION = True

# ===== Model Options =====
MODEL_STARTING_POINT = [0, 1.5, 2]
MODEL_STARTING_POINT_PASSTHROUGH = [0, 1.5, 0.8]

# ===== Data Saving =====
SAVE_DATA = False
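
These globals are plain Python assignments at the top of the script. As one illustration of how the passthrough toggle might be consumed (the `spawn_position` helper is hypothetical, not a SightLab function), the two starting points exist because models spawn closer to the user in AR passthrough:

```python
USE_PASSTHROUGH = False
MODEL_STARTING_POINT = [0, 1.5, 2]
MODEL_STARTING_POINT_PASSTHROUGH = [0, 1.5, 0.8]

def spawn_position():
    """Pick the model's initial [x, y, z] based on the passthrough setting."""
    return MODEL_STARTING_POINT_PASSTHROUGH if USE_PASSTHROUGH else MODEL_STARTING_POINT
```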


🧠 Example Use Cases

| Field | Idea Example |
| --- | --- |
| 🧠 Psychology | Generate phobic stimuli like a spider, a syringe |
| 🦴 Education | Spawn anatomy models: human skull, brain cross-section |
| ➕ Math/Cognition | Test symbolic vs. object views: 3 apples, a number line |
| 🧪 UX/VR Dev | Prototype object-based interactions in VR |
| 🎮 Game Studies | On-demand in-scene assets for testing |


💡 Tips

  • All models are saved into the /Resources/ folder automatically
  • Meshy preview models can be deleted once the refined version has loaded
  • Object names are auto-generated from prompt text for traceability
  • You can grab models with controllers or interact via gaze
  • SightLab automatically logs gaze and grab data per object
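
Deriving a file- and log-safe object name from the prompt text might look like the sketch below. This is illustrative only; the script's actual naming scheme may differ:

```python
import re

def object_name_from_prompt(prompt: str) -> str:
    """Turn a free-text prompt into a file/log-safe identifier."""
    name = prompt.strip().lower()
    name = re.sub(r"[^a-z0-9]+", "_", name).strip("_")
    return name or "model"

print(object_name_from_prompt("a glowing mushroom"))  # → a_glowing_mushroom
```

Keeping the prompt text in the object name makes it easy to match saved models in /Resources/ against the gaze and grab logs later.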
