Virtual Reality (VR) has become a popular tool for experimentation and research in fields such as psychology, neuroscience, and human-computer interaction. However, creating VR experiments can be a time-consuming and complex process, involving the use of 3D modeling software, programming languages, and various other tools.
Recent advances in artificial intelligence (AI) have made it possible to streamline this process, making it easier and faster to create VR experiments. In this article, we will explore how AI can be used to create VR experiments and how it can help researchers to generate ideas, find and create resources, and automate various aspects of the experiment design process.
AI language models, such as GPT-3, can be used to generate descriptions of VR scenarios and tasks that can be used in experiments. For example, a researcher might provide the AI model with a brief description of the goals of their study, and the model could generate a detailed scenario that meets those goals. This can be a valuable time-saving tool, as it eliminates the need to spend time brainstorming and writing out potential scenarios by hand.
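As a minimal sketch of this idea, the function below assembles a scenario-generation prompt from a researcher's study goals. The function name, parameters, and prompt wording are all hypothetical illustrations, not part of any particular API; the resulting string would be sent to a language model such as GPT-3.

```python
# Hypothetical sketch: turning study goals into a scenario-generation prompt.
def build_scenario_prompt(study_goals, setting, n_scenarios=3):
    """Assemble a prompt asking a language model for VR scenario descriptions."""
    return (
        f"You are helping design a VR experiment set in a {setting}.\n"
        f"Study goals: {study_goals}\n"
        f"Propose {n_scenarios} detailed VR scenarios, each with a participant "
        f"task, success criteria, and measurable outcomes."
    )

prompt = build_scenario_prompt(
    study_goals="measure spatial memory under time pressure",
    setting="virtual supermarket",
)
print(prompt)
```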
Another advantage of AI in VR is its ability to help researchers find and create resources for their experiments. Generative AI can already produce 2D images and even video from text prompts, and now those in need of 3D models can generate them from a simple text prompt as well, using tools ranging from Point-E, which generates low-quality point clouds, to more advanced up-and-coming tools such as Nvidia's Magic3D or Google's DreamFusion.
Another advance in AI-generated 3D models is NeRFs (neural radiance fields), which use a neural network to build a complex 3D scene from a partial set of 2D images (or a video). It is, in a sense, a higher-level form of photogrammetry, needing less information to generate an interactive 3D scene from photographs. Nvidia's Instant NeRF tool is a good place to start exploring the technique, and a recent paper from Cornell University demonstrated creating 3D models from a single image.
Additionally, AI algorithms can be used to create animations for avatars and other virtual characters, allowing researchers to create dynamic and engaging experiences for participants. A couple of free tools you can try now are Kinetix and Rokoko AI (see our November 2022 Tech Tip for how to use Kinetix to add animations to your avatars in Vizard). With these tools you can easily generate animations using just a phone camera or by uploading an existing video.
There is also an AI-enabled solution from AR51 that provides markerless motion capture of multiple users, allowing researchers to track participants' movements as they interact with the VR environment.
One of the most time-consuming aspects of creating VR experiments is writing code to control the experiment. AI language models can help streamline this process by generating code (for instance, Python code that can be used in our Vizard software). This can save researchers a significant amount of time, as they no longer need to write every line by hand. AI-generated code can also be quite accurate and efficient, though it should still be reviewed and tested, since models can occasionally introduce subtle bugs of their own.
To try this yourself, you can see this tutorial on how to get a few models off of Sketchfab and run them in Vizard. Once you have your scene ready, here are some examples you can try out (these use ChatGPT, but you can also use something like GitHub Copilot):
1. Ask ChatGPT to do something like randomize the positions of the objects in your scene, or change their color or size every time you press the spacebar. You just need to specify how many objects you have and paste your code along with your prompt. Since ChatGPT knows Python, it can usually get you at least most of the way there. You can also feed it a little Vizard code to help it along.
2. After pasting some sample code from the Vizard documentation so that ChatGPT understands a concept, you can then ask ChatGPT to add code to your original script based on the sample you just gave it.
3. You can even ask ChatGPT to create an entire experiment just by describing it in natural language. It will at least give you a good starting template, and you may just have to swap out some of the commands if they don't exist in Vizard.
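To illustrate example 1 above, here is a plain-Python sketch of the kind of logic ChatGPT tends to produce for randomizing object positions. In Vizard you would apply each position with a node's `setPosition` call and bind the function to the spacebar with `vizact.onkeydown`; here the objects are just dictionaries, and the coordinate ranges are arbitrary illustrative values, so the sketch runs anywhere.

```python
import random

# Stand-in for a scene: five placeholder objects with (x, y, z) positions.
objects = [{"name": f"object_{i}", "position": (0.0, 0.0, 0.0)} for i in range(5)]

def randomize_positions(objects, x_range=(-2.0, 2.0), z_range=(-2.0, 2.0), height=1.5):
    """Give every object a new random position within the stated bounds."""
    for obj in objects:
        obj["position"] = (
            random.uniform(*x_range),
            height,                      # keep objects at a fixed height
            random.uniform(*z_range),
        )

randomize_positions(objects)
for obj in objects:
    print(obj["name"], obj["position"])
```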
In addition to generating new code, AI can also be used to fix existing code. AI models can analyze code and identify areas that can be optimized or improved, helping researchers make their VR experiments more efficient and effective. This can be particularly useful for fixing bugs or improving performance, as the model can often quickly identify the source of a problem and suggest a solution. For example, if your code throws an error or is not giving you the correct results, you can paste the code into ChatGPT and describe the problem in natural language.
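As a concrete (and entirely hypothetical) example of the kind of fix an AI assistant can suggest: the first function below uses a mutable default argument, a classic Python bug that makes trial results leak between calls; the second shows the corrected version a model would typically propose.

```python
# Buggy version: the default list is created once and shared across calls,
# so one participant's trial results leak into the next participant's data.
def record_trial_buggy(result, results=[]):
    results.append(result)
    return results

# Corrected version, as an AI assistant would typically rewrite it:
def record_trial(result, results=None):
    if results is None:
        results = []          # fresh list for each call
    results.append(result)
    return results

print(record_trial_buggy(1))  # [1]
print(record_trial_buggy(2))  # [1, 2] — previous call's data leaks in
print(record_trial(1))        # [1]
print(record_trial(2))        # [2] — each call starts clean
```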
Using AI language models to summarize and comment existing code can be a useful tool for developers who are working on large codebases or trying to understand unfamiliar code. AI language models can quickly analyze code and generate human-readable summaries and comments, highlighting important details and providing context for the code's purpose and functionality. This can help developers save time and effort when trying to understand or modify code, and can also improve collaboration and communication between developers who may have different levels of familiarity with the codebase. However, it's important to note that AI language models are not perfect and may occasionally produce inaccurate or incomplete summaries or comments, so it's important to review the output carefully and use it as a starting point rather than a definitive source of information.
In addition to the examples discussed above, AI can be used for a variety of other tasks in creating VR experiments. Some additional examples include:
AI Chatbots: AI chatbots can be placed in VR scenarios so that participants can converse with virtual agents using natural language.
Natural Language Processing: AI can be used to process and analyze the natural language data generated by participants, providing insights into participant behavior and attitudes.
Predictive Modeling: AI algorithms can be used to make predictions about participant behavior, allowing researchers to adjust the VR environment in real-time to optimize the experience.
Personalization: AI can be used to create personalized VR experiences that are tailored to individual participants based on their behavior, preferences, and goals.
Emotion Recognition: AI algorithms can be used to analyze participant facial expressions and body language to determine their emotional state, allowing researchers to study the emotional impact of VR experiences.
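The predictive-modeling idea above can be sketched in a few lines. This is a minimal illustration, not a real predictive model: a running average of recent trial outcomes serves as the "prediction" of next-trial success, and the experiment adjusts a difficulty level in real time to keep the participant near a target success rate. The class name, target rate, and window size are all assumed values for the sketch.

```python
from collections import deque

class AdaptiveDifficulty:
    """Toy staircase-style adaptation driven by a running-average predictor."""

    def __init__(self, target=0.75, window=5):
        self.target = target                 # desired success rate
        self.recent = deque(maxlen=window)   # last few trial outcomes (0 or 1)
        self.level = 1                       # current difficulty level

    def predicted_success(self):
        """Predict next-trial success as the mean of recent outcomes."""
        if not self.recent:
            return self.target               # no data yet: assume on target
        return sum(self.recent) / len(self.recent)

    def record_trial(self, success):
        """Log a trial outcome and adjust difficulty toward the target rate."""
        self.recent.append(1 if success else 0)
        p = self.predicted_success()
        if p > self.target:
            self.level += 1                  # doing well: make it harder
        elif p < self.target and self.level > 1:
            self.level -= 1                  # struggling: make it easier

adapt = AdaptiveDifficulty()
for outcome in [True, True, True, False, True]:
    adapt.record_trial(outcome)
print(adapt.level)  # → 5
```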
Overall, AI has the potential to revolutionize the way that VR experiments are created and conducted, making it faster and easier to design and implement high-quality VR experiences. With its ability to generate ideas, create resources, and automate complex tasks, AI has the potential to unlock new possibilities in VR research and experimentation.
For more information on leveraging these tools, as well as additional VR hardware and software configurations, contact email@example.com.