AI-Powered VR Classrooms — No Coding Required

February 16, 2026

Higher education is rapidly evolving toward experiential, technology-driven learning environments. As institutions seek more engaging, collaborative, and measurable teaching methods, virtual reality has emerged as a powerful medium for education, training, and research.

E-Learning Lab, an application built on Multi User SightLab, is an end-to-end immersive learning tool designed to help educators and institutions create, deliver, and analyze VR-based lessons - with no programming required and the flexibility to scale from single-user experiences to full multi-user classrooms.

See this video for more information.

To request a trial download of SightLab with the E-Learning Lab go here and click "Request/Trial Download Link".




An End-to-End Tool for Immersive Education

At its core, SightLab’s E-Learning Lab application is a complete solution for immersive learning. It brings together lesson authoring, presentation delivery, AI-driven interaction, and analytics within a single workflow. Built specifically for education, immersive presentations, and research, the platform supports a wide range of instructional styles - from guided lectures to open-ended exploration and collaborative experiments.

Educators can move seamlessly from concept to execution, transforming traditional learning materials into interactive VR experiences that actively engage students rather than passively presenting information.




A Simple, Intuitive Workflow

The E-Learning Lab application follows a clear and accessible workflow that mirrors how instructors already design lessons:

  1. Browse or Create
    Instructors can start from scratch, duplicate an existing project, or launch guided tutorials using the built-in Project Browser. Students or instructors can also upload their own documents, images, PDFs, slideshows, and lesson plans, which the AI will analyze, summarize, and use to build a presentation.

  2. Add Content
    Lessons are built by dragging and dropping environments, 3D models, images, videos, and interactive objects into the scene.

  3. Enhance with AI and Interaction
    AI agents, quizzes, object interactions, and assessments can be layered in to deepen engagement.

  4. Run Sessions
    Lessons can be launched in desktop mode, on VR headsets, or in multi-user classroom configurations.

  5. Analyze and Replay
    Every session is automatically logged, allowing educators and researchers to replay sessions and analyze learner behavior in detail.



Building Immersive Lessons with Drag-and-Drop Tools

The Asset Browser acts as a central hub for all lesson materials. Instructors can import content through simple drag-and-drop actions from a wide range of sources, including:


  • Sketchfab for 3D models

  • Wikimedia for openly licensed images and videos

  • AI tools for generating custom content

  • Local and online media libraries

Supported asset types include 3D models, 360° media, stereo and mono images, videos, audio, and narrations, allowing educators to build rich, multi-modal learning environments.




Environments, Navigation, and Scene Design

Lessons begin by placing a 3D environment that sets the context—such as a lecture hall, lab, or immersive scenario. A live preview window allows instructors to navigate the scene by walking, flying, or teleporting, while controlling learner starting positions and navigation modes.

For more advanced layout control, the Inspector Editor provides precise positioning, scene graph management, lighting control, and start point configuration—giving instructors fine-grained control without requiring technical expertise.




AI-Generated and Interactive Content

The E-Learning Lab integrates AI throughout the authoring process. Educators can generate custom 3D models on demand using AI services such as Meshy, with support for both text-to-3D and image-to-3D workflows. These AI-generated assets behave exactly like native scene objects and can be interacted with, animated, or tracked.

AI is also used to generate images, videos, and 360° media, leveraging services such as Gemini, DALL·E, and Wikimedia search. Both stereo and mono formats are supported, enabling immersive visuals for VR headsets and large-format displays alike.




Immersive Media: Images, Video, and 360°

The platform supports a wide spectrum of immersive media experiences. Educators can transport learners to any location using 360° images and videos, sourced from online libraries, existing lesson assets, or generated with AI.

Standard media—such as slideshows, videos, and screencasts—can be displayed on large virtual screens for theater-style viewing, mirrored instructor displays, or AR passthrough experiences. This flexibility allows instructors to blend familiar teaching materials with immersive spatial content.




AI Agents for Guided Learning

AI Agents 

AI agents are a core part of the E-Learning Lab experience and can appear as 3D avatars (with experimental support for live video-based agents) within lessons. These agents can guide students through content, answer questions, adapt explanations dynamically based on lesson context, or simply act as a tutor to discuss educational content in further detail.

Point-and-Click Interaction

In addition to voice interaction, learners can point and click on objects, screens, or interface elements to trigger AI responses, explanations, or next steps—making AI guidance accessible in both headset and desktop modes.

Emotional Context & Adaptive Responses

AI agents can incorporate emotional and contextual awareness, adjusting tone, expressions, and responses based on learner interaction, session flow, or instructional goals—supporting more natural and engaging learning experiences.

Vision Capabilities

With vision-enabled AI models, agents can interpret visual content within the scene, including images, slides, and objects, allowing students to ask questions like “What am I looking at?” or “Explain this diagram.”

Multilingual

Support for 41 languages.


Read a recent study on learning outcomes with AI tutor-assisted instruction

Interactive Objects, Assessments, and Guided Learning

Any object in a scene can be made interactive using simple interaction tags. Objects can be marked as grabbable, animated, or designated as targets for gaze tracking and interaction analytics. This enables hands-on learning experiences where students actively manipulate and explore content.




Running Sessions: From Single User to Full Classrooms

The E-Learning Lab supports both single-user and multi-user sessions using the same lesson content. Instructors can run experiences in desktop mode or VR, while multi-user SightLab enables synchronous, collaborative VR classrooms where multiple students share the same virtual space.

Integrated AI agents allow learners to ask natural language questions—such as “What are we learning about?”—and receive context-aware responses based on lesson content.

Lessons from PowerPoint, PDFs, and Existing Materials

The E-Learning Lab allows instructors to import PowerPoint slides, PDFs, text files, images, ZIP files, and other instructional materials and automatically convert them into immersive VR presentations. Content can be placed on virtual screens, arranged spatially, and enhanced with interaction, narration, and AI-driven explanation—preserving existing curricula while transforming delivery. It can also serve as a tutor: students upload their notes, textbooks, and existing material to gain a deeper understanding.




Presentation Generator

The built-in Presentation Generator uses AI to automatically create immersive lessons from any topic. Instructors can generate outlines, slides, narration, quizzes, and visual content in minutes, or refine AI-generated lessons using their own materials: simply upload content and set options. This enables rapid lesson creation for new topics or just-in-time instruction.




Augmented Reality (AR) Support

In addition to VR, the E-Learning Lab supports augmented reality experiences, allowing digital content to be overlaid onto the physical environment. AR can be used for blended learning, mixed-reality instruction, and scenarios where maintaining awareness of the real world is essential.




Audio: Text-to-Speech and Audio Files

Audio is fully integrated into the platform, including:

  • AI-powered text-to-speech narration

  • Imported audio files for instructions, sound effects, and voiceovers

  • Spatialized audio to enhance immersion

This enables consistent narration across lessons and supports accessibility and multilingual delivery.




Online Presentation Browser

Instructors can access an online presentation browser with ready-to-use sample lessons and starter content. These presentations can be downloaded, customized, and extended—allowing educators to get started immediately or build upon existing examples.




Third-Party Connections & Extensibility

The E-Learning Lab supports integration with third-party tools and services, including:

  • Eye tracking hardware

  • BIOPAC physiological sensors

  • External AI models and APIs

  • Research and data analysis pipelines

This makes the platform suitable not only for teaching, but also for advanced research and experimental workflows.




Projection-Based VR & Hybrid Classrooms

Beyond headsets, the E-Learning Lab supports projection-based VR environments, including PRISM immersive theaters. These systems enable shared, large-format immersive experiences for lectures, demonstrations, and group learning—ideal for classrooms where headsets are impractical or when collective discussion is desired.




Avatars and Tracking

Avatars

The platform supports multiple avatar libraries and styles, allowing institutions to choose representations that fit their use case. Instructors and students can also use custom avatars.

Full-Body Tracking

Support for full-body tracking enables realistic movement and embodiment in multi-user sessions, enhancing presence, communication, and social learning.




Quizzes & Instructional Flow

Quizzes can be:

  • AI-generated automatically from lesson content
  • Imported from CSV files
  • Defined using simple text-based instruction files

Quizzes also provide immediate feedback, save results to user data, and can auto-generate new questions for any lesson with a focus on specific topics.

This allows instructors to rapidly assess learning outcomes, guide students step by step, and collect structured assessment data without manual setup.

Screen Casting & Live Content Sharing

The E-Learning Lab supports screen casting and live content sharing, allowing instructors to bring desktop applications, web content, or live video feeds directly into the virtual environment. Screencasts can be displayed on large virtual screens, mirrored instructor views, or shared across multi-user sessions, enabling seamless integration of real-time demonstrations, software walkthroughs, and external content.




Data Logging, Replay, and Research-Grade Analytics

Every session is automatically recorded, capturing:

  • Gaze tracking and fixations

  • Object interactions

  • Quiz results

  • Timing and behavioral metrics

Sessions can be replayed in full 3D, allowing instructors and researchers to review learner behavior exactly as it occurred. For advanced research applications, the platform integrates tightly with BIOPAC physiological sensors, enabling synchronized recording of heart rate and skin conductance, engagement indicators, and biofeedback visualizations.




Instructor Control and Classroom Orchestration

Instructors maintain full control over the classroom experience. They can freeze or transport participants, control playback and screens, pause or resume sessions, and add or remove users in real time. A centralized client launcher simplifies deployment and management across multiple machines and spaces.




A Proven Platform for Higher Education

WorldViz software is used by leading institutions worldwide to power interdisciplinary VR labs and classrooms, supporting education, training, research, and simulation across disciplines.




A Competitive Edge in Immersive Learning

What sets Multi User SightLab with the E-Learning Lab apart is not just immersion, but measurement, flexibility, and instructional control:

  • Instrumented learning with full session replay

  • AI agent tutoring across multiple LLMs

  • True interactive 6DoF learning environments

  • No/low-code authoring with extensibility

  • Physiological data integration via BIOPAC

  • Instructor-led or self-guided delivery

  • Assessment-ready learning experiences



The Future of Immersive Education

SightLab with the E-Learning Lab represents a new generation of educational technology—one that moves beyond passive VR experiences toward interactive, intelligent, and measurable learning. By combining immersive media, AI assistance, collaborative classrooms, and research-grade analytics, it empowers educators to rethink how learning happens in virtual spaces.

To learn more about setting up VR classrooms and labs, contact sales@worldviz.com

Click here to learn about an immersive classroom use case at Lasell University

For more information on SightLab’s E-Learning Lab click here
