Multiple sensory inputs: Sound plus movement patent delivers better user experiences

The combination of sound recognition and movement is the subject of a new patent, granted to Audio Analytic this week.

[Image: tennis - sound and movement]

Wearable devices such as smartwatches, TWS earbuds and AR glasses that can combine sound recognition and movement detection will open up opportunities to deliver powerful new and improved user experiences. These experiences range from a more granular understanding of physical activity to helping people navigate or discover the world around them.


Readings from motion sensors alone are often ambiguous when it comes to inferring what people are doing. However, if we want our smart devices to be more helpful and act as frictionless, hands-free, assistive interfaces, then compact, embedded AI needs to understand what people are doing as precisely as possible, so that it can trigger the right kind of user experience at the right moment and in the right context.
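The patent itself is not reproduced here, but one common way to think about combining the two signals is score-level fusion: a motion model and a sound-recognition model each produce per-class confidence scores, and the device blends them before deciding what is happening. The sketch below illustrates that idea only; the class names, weights and functions are hypothetical and do not describe Audio Analytic's implementation.

```python
# Minimal sketch of score-level sensor fusion (hypothetical names, not
# Audio Analytic's implementation): per-class confidences from a motion
# model and a sound-recognition model are blended before deciding.

from dataclasses import dataclass

@dataclass
class FusionConfig:
    motion_weight: float = 0.5   # relative trust in the motion model
    sound_weight: float = 0.5    # relative trust in the sound model
    threshold: float = 0.6       # minimum fused score to report an activity

def fuse_scores(motion_scores: dict[str, float],
                sound_scores: dict[str, float],
                cfg: FusionConfig) -> tuple[str, float] | None:
    """Combine per-class confidences from both models and pick the best class."""
    classes = set(motion_scores) | set(sound_scores)
    fused = {
        c: cfg.motion_weight * motion_scores.get(c, 0.0)
           + cfg.sound_weight * sound_scores.get(c, 0.0)
        for c in classes
    }
    best = max(fused, key=fused.get)
    return (best, fused[best]) if fused[best] >= cfg.threshold else None

# Example: motion alone cannot separate tennis from squash, but the
# distinctive sound of each sport resolves the ambiguity.
motion = {"tennis": 0.45, "squash": 0.45, "running": 0.10}
sound = {"tennis": 0.80, "squash": 0.15, "running": 0.05}
print(fuse_scores(motion, sound, FusionConfig()))  # ('tennis', 0.625)
```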

As humans, we infer the context around us by integrating multiple sensory perceptions, and we do that so naturally that we even forget that we are doing it. The sounds around us provide rich information that enables us to better understand context.

By empowering consumer devices to combine multiple senses, consumers benefit from products that can do more on their behalf, whether the application is health, wellbeing, convenience, safety or entertainment.

Audio Analytic's patent covers multiple use cases, including:

  • Your wearable device can more accurately detect the sport you are playing by combining movement and sound (e.g. sprinting vs soccer, tennis vs squash)

  • Your smartwatch can detect whether you are washing your hands by combining its understanding of the movement of your hands and the sound of running water (see the sketch after this list)

  • Your AR glasses can combine your movement and the environmental sounds around you to anticipate your needs better, keep you safe or deliver context-relevant information about your location or activity.
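To make the hand-washing example above concrete, the sketch below only reports the activity when a scrubbing-like motion and the sound of running water are detected within the same short time window. The window length, event handlers and helper names are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of the hand-washing use case: the activity is only
# reported when a hand-scrubbing motion pattern and the sound of running
# water are detected within the same short time window.

from collections import deque
import time

WINDOW_SECONDS = 5.0          # how closely the two cues must coincide
recent_motion = deque()       # timestamps of scrubbing-like motion detections
recent_water = deque()        # timestamps of running-water sound detections

def _prune(events: deque, now: float) -> None:
    """Drop detections older than the coincidence window."""
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()

def on_motion_detected(now: float | None = None) -> bool:
    now = time.monotonic() if now is None else now
    recent_motion.append(now)
    return _hand_washing(now)

def on_water_sound_detected(now: float | None = None) -> bool:
    now = time.monotonic() if now is None else now
    recent_water.append(now)
    return _hand_washing(now)

def _hand_washing(now: float) -> bool:
    """True when both cues have fired within the same window."""
    _prune(recent_motion, now)
    _prune(recent_water, now)
    return bool(recent_motion) and bool(recent_water)

# Example: a scrubbing motion at t=1.0 s plus running water at t=2.5 s
# falls inside the 5-second window and is reported as hand washing.
assert on_motion_detected(1.0) is False
assert on_water_sound_detected(2.5) is True
```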

Sound recognition is an essential piece of machine perception that was missing from the perceptual AI puzzle until Audio Analytic overcame the significant challenges presented by this specialised branch of AI. As a result, product designers now see how essential the sense of hearing is to context recognition and the role it plays alongside other types of AI. This is because in addition to enabling valuable and unique applications in its own right, sound recognition enhances the value of other sensory inputs by providing critical contextual cues.

"I’m very excited by the wide range of new user experiences that this type of ‘sound +’ sensor fusion will bring to consumer electronics," says Dr Sacha Krstulović, Director of Audio Analytic Labs.


