LEVITATE


  • Project Name: LEVITATE
  • Project Type: Future and Emerging Technologies (FET)
  • Start & End Dates: 01/01/2017 - 31/03/2021
  • Total Budget: 2,999,870 €

Funded by the EU, this project is creating, prototyping and evaluating a radically new human-computer interaction paradigm. The aim is to empower the unadorned user to reach into a new kind of display composed of levitating matter. This tangible display lets users see, feel, manipulate, and hear three-dimensional objects in space.

LEVITATE leverages theories and tools from acoustics in a radically new approach. Its primary tool is ultrasound in the range of 40-70 kHz, used in three ways: focused ultrasound creates acoustic forces that levitate and manipulate particles; ultrasound beams project directional audio cues; and focused ultrasound creates haptic pressure points that can be touched and felt in mid-air.
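As a rough illustration of the physics involved (a textbook standing-wave sketch, not the project's actual control software), the spacing of the levitation traps follows directly from the operating frequency: particles settle near the pressure nodes of a standing ultrasound wave, which sit half a wavelength apart.

```python
# Back-of-the-envelope sketch: trap spacing in a standing-wave
# acoustic levitator. Assumes dry air at roughly room temperature.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def trap_spacing_mm(frequency_hz: float) -> float:
    """Distance between adjacent pressure nodes (levitation traps), in mm."""
    wavelength_m = SPEED_OF_SOUND / frequency_hz
    return (wavelength_m / 2) * 1000.0

# The band mentioned above: 40 kHz and 70 kHz
for f_khz in (40, 70):
    print(f"{f_khz} kHz -> traps every {trap_spacing_mm(f_khz * 1000):.2f} mm")
```

At 40 kHz the traps are roughly 4.3 mm apart, shrinking to about 2.5 mm at 70 kHz, which is why only small, light particles (such as expanded polystyrene beads) can be held and moved this way.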

Combining all these components is challenging, but LEVITATE aims to create a rich multimodal display that is fun and engaging.

There are numerous benefits and applications for this advanced display technology. For example, instead of having to reach for an iDrive dial in a car, users may simply reach out and have the dial created directly under their hand. Instead of controlling a virtual character on a TV screen when playing a tennis video game, players could hold a real physical racket and play with a ball made of levitating particles whose behaviour is controlled digitally. Instead of interacting with a virtual representation of a protein behind a computer screen, scientists could gather around a physical representation of the protein in mid-air, reach into it to fold it in different ways, and draw other proteins closer to see, feel and hear how they interact. The flexible medium of floating particles could be used by artists to create new forms of digital interactive installations for public spaces. Engineers could walk with their clients around virtual prototypes floating in mid-air, while both reach into the model and change it as they go.