Simulation Interfaces
Advanced modeling software now produces real-time dynamic simulations, and novel interface technologies are opening new ways to present those simulations to users.
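As a concrete illustration, the sketch below shows the kind of fixed-timestep loop that real-time dynamic simulations are typically built on: wall-clock time accumulates and is consumed in fixed physics steps, so the simulation stays stable while the display keeps pace with reality. This is a generic example, not any particular vendor's software; the mass-spring-damper model, the step function, and every parameter value are assumptions chosen for illustration.

```python
import time

# Illustrative mass-spring-damper system; all values are assumptions.
MASS, STIFFNESS, DAMPING = 1.0, 40.0, 0.8
DT = 1.0 / 60.0  # fixed 60 Hz physics timestep

def step(x, v, dt):
    """Advance one timestep with semi-implicit Euler integration."""
    a = (-STIFFNESS * x - DAMPING * v) / MASS
    v += a * dt
    x += v * dt
    return x, v

def run(duration=2.0):
    x, v = 1.0, 0.0               # initial displacement and velocity
    t_prev = time.perf_counter()
    acc, elapsed = 0.0, 0.0
    while elapsed < duration:
        now = time.perf_counter()
        acc += now - t_prev        # accumulate real wall-clock time...
        t_prev = now
        while acc >= DT:           # ...and consume it in fixed physics steps
            x, v = step(x, v, DT)
            acc -= DT
            elapsed += DT
        print(f"t={elapsed:5.2f} s  x={x:+.3f}")  # hand state to the renderer
        time.sleep(DT)             # yield roughly one display frame
    return x, v

if __name__ == "__main__":
    run()
```

Decoupling the physics step from the display frame in this way is what lets a simulation remain "real time" even when rendering hiccups occur.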
The most market-ready of these technologies is the extended reality (XR) headset. Extended reality is an umbrella term covering virtual, augmented, and mixed reality, all of which blur the line between the physical and simulated worlds. Perhaps the greatest potential for XR lies in delivering shared, collaborative experiences as the next mobile computing platform.
Existing XR platforms include smart glasses powered by Qualcomm’s Snapdragon XR1 processor, introduced in late May 2018, and the Microsoft HoloLens, which I wrote about in Mechanical Engineering in October 2016. The next version, the HoloLens 2 headset (code-named Sydney), is planned for release in 2019; it will feature an improved field of view and will be much lighter and more comfortable to wear. The HoloLens 2 will also include Microsoft’s latest generation of the Kinect sensor and a custom AI chip to improve performance.
Looking further out, some recent work aims to link extended reality to the ultimate computing platform: the human brain. Researchers are developing brain-computer interfaces that allow users to scroll menus, select items, launch applications, manipulate objects, and even input text using only their brain activity.
Recent efforts include Neurable, a brain-computer interface for mixed reality that lets the user navigate with thought alone, and Neuralink, a neurotechnology company developing ultrahigh-bandwidth brain-machine interfaces to connect humans and computers. Such interfaces would let the brain work directly with software and hardware, bypassing low-bandwidth channels such as speech or typing to convey thoughts.
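To give a sense of how thought-driven navigation works at the signal level, here is a minimal sketch of a brain-computer interface decoding step. It is not Neurable’s or Neuralink’s technology; the simulated EEG data, the alpha-band power feature, the threshold value, and the command names are all illustrative assumptions. Production systems replace the simple threshold with a classifier trained on each user’s own recordings.

```python
import numpy as np

FS = 250          # assumed EEG sample rate, Hz
ALPHA = (8, 12)   # alpha band, Hz; mental state modulates its power

def band_power(window, fs, band):
    """Average spectral power of one EEG channel within a frequency band."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def decode_command(window, threshold=1e4):
    """Map a one-second EEG window to a UI command via a power threshold.
    The threshold stands in for a trained classifier."""
    return "SELECT" if band_power(window, FS, ALPHA) > threshold else "IDLE"

# Simulated signals: pure noise vs. noise plus a strong 10 Hz alpha rhythm.
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
noise = rng.normal(0, 1, FS)
alpha_burst = noise + 5 * np.sin(2 * np.pi * 10 * t)

print(decode_command(noise))        # IDLE
print(decode_command(alpha_burst))  # SELECT
```

However the decoding is done, the output is the same in spirit: a stream of discrete commands (select, scroll, launch) that the XR interface consumes exactly as it would input from a controller.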