The Virtual Audience Plugin
Rule-based behaviour model implementation

Research Question

How can virtual audience behaviour models enhance Virtual Reality simulations in terms of user engagement, environment control, believability, and presence, or even support the design of scenarios and stories?

Research

To investigate the impact of such models on virtual reality simulations, we developed our own behaviour model based on the existing literature.

Evaluations

After developing our model, we ran a few evaluations to investigate how users perceive non-verbal behaviours in VR compared to desktop videos, and how they perceive the audience attitudes generated by our model.

The AtmosMaker Plugin


Posted on March 21, 2021



The AtmosMaker Plugin is the system developed within the Virtual Audience project, which aims to provide a virtual audience simulator in virtual reality that lets you easily build and experience a wide variety of audience attitudes with small or large groups of virtual agents. The system implements a virtual audience behaviour model developed from the existing literature. So far the system allows you to design and control your audiences, but it does not include animations or 3D characters.
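To give a concrete idea of what such a rule-based behaviour model looks like, here is a minimal, illustrative C++ sketch (not the plugin's actual code): it assumes an agent's attitude is described by valence and arousal values, as in much of the audience literature, and maps them to hypothetical nonverbal behaviour choices for gaze, head movement, and posture.

    // Illustrative sketch of a rule-based audience behaviour model.
    // The attitude parameters (valence/arousal) and the behaviour fields
    // are assumptions for illustration, not the plugin's actual API.
    #include <iostream>
    #include <string>

    struct Attitude {
        float Valence; // -1 (negative) .. +1 (positive)
        float Arousal; //  0 (calm)     ..  1 (excited)
    };

    struct NonverbalBehaviour {
        std::string Gaze;    // e.g. "AtSpeaker", "Away"
        std::string Head;    // e.g. "Nod", "Shake"
        std::string Posture; // e.g. "LeanForward", "LeanBack"
    };

    // Simple rules mapping an attitude to a nonverbal behaviour,
    // in the spirit of literature-based audience models.
    NonverbalBehaviour SelectBehaviour(const Attitude& A)
    {
        NonverbalBehaviour B;
        B.Gaze    = (A.Valence >= 0.0f) ? "AtSpeaker"   : "Away";
        B.Head    = (A.Valence >= 0.0f) ? "Nod"         : "Shake";
        B.Posture = (A.Arousal >= 0.5f) ? "LeanForward" : "LeanBack";
        return B;
    }

    int main()
    {
        const Attitude Bored{-0.6f, 0.2f};
        const NonverbalBehaviour B = SelectBehaviour(Bored);
        std::cout << B.Gaze << " " << B.Head << " " << B.Posture << "\n";
        return 0;
    }

In the actual plugin the output of such rules drives the animation features listed further down this page; the sketch only shows the rule-selection idea.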

While the system was originally made for teacher training, we are now working with therapists to help people who stutter and others with social disabilities. If you want to know more about our projects, check out the HCI-Group page of the project, or this page for the latest features.

The Virtual Audience Team:

PhD Student Yann Glémarec (me), Dr. Jean-Luc Lugrin and Dr. Anne-Gwenn Bosser (PhD supervisors), Prof. Cedric Buche and Prof. Marc Latoschik (PhD directors). Paul Cagniat, Lucas Brand and Fergal Iquel are master's students helping on the project by developing new features. Aryana Collins-Jackson is a PhD student who also helps a lot with our user evaluations.

Abstract
Each post will detail a new feature or a new milestone we reached.

Perception User Evaluations


Posted on March 21, 2021



Because our model is based on existing models that were evaluated on desktop videos, we studied the potential differences between user perception in Virtual Reality and on a desktop screen.

The paper is freely available on Frontiers in VR's website. It contains a simple description of the system and how it works, as well as the design and results of our first two user evaluations.

Citation:

Yann Glémarec (PhD Student), Dr. Jean-Luc Lugrin, Dr. Anne-Gwenn Bosser, Aryana Collins-Jackson (PhD Student), Prof. Cedric Buche and Prof. Marc Latoschik. Simulation and Evaluation of Audience Behaviors in Virtual Reality.

Abstract
This section contains the citation to use if you are using the system, and links to where to find our paper.

Plugin's features


Posted on March 21, 2021



The plugin lets you control either the whole audience or a specific virtual agent within it. It works with a large variety of 3D models, and the audience can also be controlled from a remote Graphical User Interface. The system is fully compatible with Virtual Reality.
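As a rough illustration of this control scheme, the sketch below (hypothetical names, not the plugin's actual API) shows how a command coming from a remote GUI could be dispatched either to a single agent or broadcast to the whole audience.

    // Hypothetical dispatcher: a GUI command targets one agent or the whole audience.
    #include <iostream>
    #include <string>
    #include <vector>

    struct AgentCommand {
        int AgentId;           // -1 means "whole audience" in this sketch
        std::string Behaviour; // e.g. "Nod", "GazeAway", "LeanForward"
    };

    class AudienceController {
    public:
        explicit AudienceController(int AgentCount) : Agents(AgentCount) {}

        void Apply(const AgentCommand& Cmd)
        {
            if (Cmd.AgentId < 0) {
                // Broadcast the behaviour to every spectator.
                for (size_t i = 0; i < Agents.size(); ++i)
                    ApplyToAgent(static_cast<int>(i), Cmd.Behaviour);
            } else {
                ApplyToAgent(Cmd.AgentId, Cmd.Behaviour);
            }
        }

    private:
        void ApplyToAgent(int Id, const std::string& Behaviour)
        {
            // In the real plugin this would trigger animations, gaze or posture changes.
            std::cout << "Agent " << Id << " -> " << Behaviour << "\n";
        }

        std::vector<int> Agents; // placeholder for per-agent state
    };

    int main()
    {
        AudienceController Controller(3);
        Controller.Apply({-1, "Nod"});      // whole audience nods
        Controller.Apply({ 1, "GazeAway"}); // only agent 1 looks away
    }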

Tested devices:

  • Windows x64
  • Vive Pro, Vive Pro with eye trackers, or first-generation Vive
  • Oculus Rift and Rift S
  • Oculus Quest (max 75 fps due to hardware limitations)
  • iPhone Live Link (facial tracking)

Main Features:

  • Head movements: e.g. nod, shake
  • Facial Expressions: driven by Morph Targets/Blend Shapes; the 3D model used must include them, otherwise no expression will be displayed (see the sketch after this list).
  • Gaze: agents can stare at the main user or gaze away. An interface can be used to make a spectator look in specific directions or at specific objects.
  • Posture: the posture can be changed and combined with any of the behaviours mentioned above.
  • Movements: virtual agents can move (walk/run) and, for instance, sit down when they reach their seat.
  • Backchannels: backchannels such as phatic expressions can be triggered, e.g. 'mmhmm' when nodding. Currently manually controlled; an autonomous mode is planned for a future release.
  • Audience population: the number of spectators can be modified before or during the simulation; agents can also, for instance, enter the room and sit down.
  • Synchronized Sounds: agents can talk or play very specific behaviours with sounds, e.g. a phone ringing followed by an apology and a request to repeat the sentence. These behaviours are not optimized yet and require many parameters to be checked; I would advise using Unreal Engine's Montage system (see the sketch after this list).
  • Unreal MetaHumans compatibility; example project incoming.
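For the facial-expression and synchronized-sound features above, here is a hedged Unreal Engine C++ sketch. The agent class, the 'Smile' morph target name and the PhoneApologyMontage asset are assumptions for illustration; only SetMorphTarget and PlayAnimMontage are standard engine calls, and this is not the plugin's actual code.

    // Illustrative Unreal Engine snippet (class, morph target and asset names are hypothetical).
    #include "GameFramework/Character.h"
    #include "Components/SkeletalMeshComponent.h"
    #include "Animation/AnimMontage.h"
    #include "AudienceAgent.generated.h"

    UCLASS()
    class AAudienceAgent : public ACharacter
    {
        GENERATED_BODY()

    public:
        // Drives a facial expression through a morph target / blend shape.
        // The 3D model must expose a morph target with this name,
        // otherwise the call has no visible effect.
        void SetSmile(float Weight)
        {
            GetMesh()->SetMorphTarget(TEXT("Smile"), Weight);
        }

        // Plays a behaviour whose animation and sound are kept in sync
        // via an animation montage, as advised in the feature list above.
        void PlayPhoneApology()
        {
            if (PhoneApologyMontage)
            {
                PlayAnimMontage(PhoneApologyMontage);
            }
        }

    protected:
        // Montage asset combining the phone-ringing gesture, the apology
        // animation and the associated sound notifies.
        UPROPERTY(EditAnywhere, Category = "Audience")
        UAnimMontage* PhoneApologyMontage = nullptr;
    };

Keeping the sound inside the montage is what keeps audio and animation synchronized without hand-tuning timing parameters for each behaviour.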

Git Link
You will find the very latest version at this link (HCI login required), and the public link here (otherwise contact me).

Contact


HCI Group

Uni Würzburg - Germany

yann.glemarec@uni-wuerzburg.de

COMMEDIA

ENIB - Lab-STICC team

yann.glemarec@enib.fr