Designing a Metaverse AR glasses app for virtual collaboration | by Belanna Zhou | Dec, 2021


A few months ago, Facebook announced that it was changing its company name to Meta, presenting the metaverse as the next evolution of social connection. I love the idea and honestly need it, because AR and VR technology will enhance people's lives and let us explore the world with new and exciting possibilities. The metaverse can integrate online 3D virtual environments through conventional personal computing as well as headsets.

What if we applied this metaverse concept to education? And what if we used this technology in the highly anticipated AR glasses, with their better portability, to enhance collaboration between virtual and physical people? This is a concept study and my final project for a course taught by professor @ at the University of Michigan this semester. Education here is a what-if topic: what if all these concepts and case studies became realities? How can virtual collaboration benefit us all? How does it help connect geographically diverse students while maintaining uniformity in the execution of tasks?

How might AR Glasses offer hands-on experiences that are physically impossible to do in the real world for virtual collaboration?

The information synchronously overlays what the user sees in the museum

The difficulties of virtual group project collaboration motivated this project. Have you ever struggled with a virtual group project while collaborating with teammates? Sometimes it is hard to communicate with team members because you are not seeing the same things in a shared group context, especially when you are learning a hard topic together. There is also a lack of personal connection in virtual group collaboration. Remote students often feel isolated, which can degrade team unity. Text-based communication, such as email and text messages, lacks physical body language, tone, and other subtle nonverbal cues, which increases the likelihood of misinterpreting content. Without feeling fully represented in virtual study, students often feel less engaged with the material. My findings suggest that team members are more reserved on digital channels and, when working remotely, less likely to participate in meetings or reply to emails.

I chose to set the virtual collaboration in a local museum, though this AR glasses concept can be expanded to many other scenarios. The goal of the AR glasses is to remedy the poor virtual communication and lagging collaboration caused by the lack of personal connection and physical interaction, as well as by Covid-19 social-distancing constraints.

Imagining the future

  • Clear user instructions and instant feedback: giving users the right feedback on object placement and designing clear instructions are important.
Provide users instructions on how to interact with the interfaces using gestures.
When users activate a virtual affordance, provide them instant feedback, such as loading the screen with the right user interface feedback.
  • Threat modeling: a method for systematically identifying potential threats to security and privacy (and, more broadly, societal harms). It is very important to build this so users can acknowledge the potential threats and privacy harms if they allow certain interactions, such as data access, data aggregation, active voice-command input, physical objects, or ambient sounds in the environment.
Users see a modal asking for permission when the AR glasses access ambient sounds and the surrounding environment.
As an example of protecting users' input, we can use a form to specify each context and identify what is threatened, who is threatened, and how to mitigate the threat.
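The threat-modeling form above can be sketched as a small data structure that also drives the permission modal. This is a minimal illustration, not part of any real AR SDK; all names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    context: str      # the interaction being modeled (e.g. ambient sound access)
    threat: str       # what could go wrong
    who: str          # who is threatened
    mitigation: str   # how the threat is reduced

def render_permission_prompt(entry: ThreatEntry) -> str:
    """Build the text for a permission modal shown before the access occurs."""
    return (f"Allow access: {entry.context}?\n"
            f"Risk: {entry.threat} (affects {entry.who})\n"
            f"Protection: {entry.mitigation}")

# One row of the form per sensitive interaction.
model = [
    ThreatEntry("microphone / ambient sound", "bystander conversations recorded",
                "people near the wearer", "process audio on-device; discard raw audio"),
    ThreatEntry("camera / environment scan", "faces and locations captured",
                "bystanders and the wearer", "blur faces; ask before storing scans"),
]

prompt = render_permission_prompt(model[0])
```

Keeping the form machine-readable means the same entries can be reviewed by designers and rendered to users at the moment of access.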

Integrate UI into the environment: in my final prototype, I decided to embed UI elements into the environment. Although there were many more UI elements I wanted to add to the interaction, I ended up minimizing them to the key interactions. I drew some inspiration from Qin Bian's project. I integrated the UI elements into the AR glasses.

Some major interactions

I started with a large scope at the beginning, then narrowed it to focus on the most important interactions and features to implement.

  • Virtual avatar standby. A virtual user, represented by a selected avatar, appears as a 3D model standing next to the physical user wearing the AR glasses. The AR glasses transmit graphics synchronously to the virtual and physical users, so both can view the artworks at the same time during the video call.

  • Virtual collaboration with tools. Both physical and virtual users have collaboration tools available, such as Google Docs, and can take notes with voice input. The note-taking process is meant to capture ideas and organize thoughts, enhancing virtual collaboration efficiency.
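The shared note-taking above implies that a note captured by one user must appear for the other in real time. Below is a minimal sketch of that synchronization, assuming a simple publish/subscribe model; the speech-to-text step is represented only by its transcript, and all names are hypothetical.

```python
class SharedNotes:
    """A toy shared document: every added line is pushed to all connected users."""

    def __init__(self):
        self.lines = []            # the shared note document
        self.subscribers = []      # one callback per connected user

    def subscribe(self, callback):
        """Register a user's client to receive new note lines."""
        self.subscribers.append(callback)

    def add_voice_note(self, transcript: str, author: str):
        """Append a voice-captured note and notify every subscriber."""
        line = f"{author}: {transcript}"
        self.lines.append(line)
        for notify in self.subscribers:   # synchronous push to each user
            notify(line)

notes = SharedNotes()
seen_by_virtual_user = []
notes.subscribe(seen_by_virtual_user.append)
notes.add_voice_note("The brushwork dates this to the 1880s", "physical-user")
```

A real app would replace the in-process callbacks with a network channel (and a service such as the Google Docs API for persistence), but the flow is the same: capture, append, broadcast.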

Context discovery and evaluation

I started with context evaluation to explore the possibilities and problems, and at the same time to evaluate the best use case in which AR glasses can help users with virtual collaboration in education.

  • 1. Future education will be full of diverse teaching and virtual learning. Surveys show that 90% of educators agree that AR and VR technology effectively provides differentiated and personalized learning experiences and helps students focus.
  • 2. The Covid-19 pandemic has forced the world into the ubiquitous use of virtual learning. It has changed students' learning habits across all levels of education and pushed them to adapt to online collaboration. One of the challenges higher education students face is team-based projects that require creative, interactive collaboration alongside self-directed study.


Second, I created a paper storyboard. I developed four scenes to ideate the possible key interactions. These four scenes focus on how physical users and online virtual users can colocate with each other in space, how they interact digitally with each other, and what collaboration tools they can use.

User storyboard of the four major scenes, prototyped with a digital tool

User Flow

Third, I built the user flow, a bird's-eye overview of the user interaction. This user flow helped me focus and know what to prototype when working on the digital prototype. It's like a GPS for the app: it kept me from losing sight of the "why" while creating features.

I used two major tools; both have pros and cons. I include the lessons I learned from using them for your reference.

Figma: I used it to build the key interaction screens. For me, building the digital prototype in Figma was fast and easy because of my years of background in UX and UI design. The Figma prototype helped me quickly establish the interaction context and figure out how to solve the challenges of a UI screen overlaid on the real-time environment, such as what opacity and content readability should be designed on screen.

  • Pro: Fast and easy to build the user flow and user interfaces
  • Con: 2D flat design; cannot visualize three-dimensional models
Figma prototype for the menu screen, contact screen, and collaboration tools screen design

Unity: I used Unity to build the museum scene and a virtual character. Unity prototyping helped me figure out key AR virtual interactions, such as user dimensions, third-person positioning, UI depth, hand tracking, and environment tracking.

  • Pro: many sceneries and theme templates are available in Unity's asset library, and 3D characters can be purchased. This saved a lot of time compared to building a new 3D model from scratch.
  • Con: the UI looks game-like, the Unity prototype requires some scripting knowledge and rewriting, and debugging costs a lot of time.
  1. Flexibility between hands-free and voice control.

The AR glasses offer flexibility between hands-free and voice control, giving users a smooth touchpad navigation experience. This is well suited to outdoor group activities like visiting museums, where students often have a notebook in hand to take notes. The AR glasses suit other outdoor activities as well.
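One way to realize the flexibility described above is to route the same commands from either input channel, so switching between voice and touchpad costs the user nothing. This is a hypothetical sketch; the command names and handlers are invented for illustration.

```python
# Map each abstract command to its action, independent of input channel.
HANDLERS = {
    "open_notes": lambda: "notes panel opened",
    "next_artwork": lambda: "showing next artwork",
}

def dispatch(command: str, source: str) -> str:
    """Route a command whether it came from the voice or touchpad channel."""
    if command not in HANDLERS:
        return f"unknown command from {source}"
    return HANDLERS[command]()

# The same command works hands-free or by touch.
by_voice = dispatch("open_notes", "voice")
by_touch = dispatch("open_notes", "touchpad")
```

Decoupling commands from their input source also makes it easy to add a third channel (for example, gestures) later without touching the handlers.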

2. They project real-time content continuously into the real world.

The AR glasses also project real-time content, continuously integrating digital content with the real world. This extends what users know about what they see and offers a more convenient and flexible user experience than an AR mobile app.

Future work

  1. The focal point of AR glasses for users

One thing I learned is that fitting, sizing, and ensuring the overall comfort of the AR glasses for users is essential. However, it requires a tremendous amount of work. A future direction for my AR glasses prototype is to design for the various focal points AR glasses need, ensuring they fit users comfortably in any situation.

2. Challenges of virtual content overlaying physical objects need more research

In addition, when users collaborate with virtual users in a physical real-world environment, they will encounter many challenges. For example, what if another real-world person walks by? How would that affect the virtual user's 3D avatar projected into the environment? Would the avatar simply overlay them, or would users get a warning as an error? These are all challenges that AR glasses need to overcome. I believe that more study, research, and careful user experience design will address those challenges and shape the overall operation of AR glasses in the future. AR glasses can enhance people's quality of life and ultimately benefit everyone.

