
How sound design creates more human-centred services

A jagged soundwave.

Sound is the angsty middle child of UX design. We love it and understand its value, but we rarely have the time or resources to give it our attention.

And all the beeps and pings in the world cannot shift our laser focus away from a user’s visual experience.

Sound is often reduced to pure functionality: a toolkit for confirmation, warning, and success semantics. Take, for example, the satisfying ‘ping’ of a successful Apple Pay purchase, or the ‘scrunch’ of dragging a file into your desktop’s recycle bin.

However, a growing field of study advocates for sound-driven approaches to design. It’s not so much about whether we should include sound but about how we use it to influence the holistic service experience.

That influence is a powerful thing. Take, for example, someone’s daily commute on public transport. They may pop on a playlist to drown out screeching wheels, or intentionally pause to enjoy the intermittent familiarity of train announcements.

That person reacts to sound, whether negatively or positively, because it causes them to feel something.

In other words, sound provokes an emotional and visceral response. This creates huge potential for thoughtful, human-centred design – not only in adding function to services, but in shaping the environments (whether digital or physical) contained within them.

Sound positions us in places and feelings

Let’s revisit that recycle bin example. Microsoft first introduced the feature in Windows 95, released in 1995, when corporate office culture was booming.

Imagine it: the drudgery of clacking away in a little grey cubicle; talking about the big game around the water cooler; your boss dumping a comically large stack of papers on your desk at 5pm. Garfield had every right to hate Mondays.

Companies were in the process of shifting from paper-based offices to digital-first. Windows 95 arrived and sped things up, selling more than 40 million copies in its first year (and becoming the world’s most popular operating system). It introduced a drastically more user-friendly interface, including features inspired by objects found in real offices. This design approach is known as ‘skeuomorphism’, with the recycle bin being a digital representation of its real-life counterpart.

This realism extends to the recycle bin’s ‘scrunch’ sound, forging a connection between product and user context. It offered people emotional catharsis: you could take that comically large stack of papers on your desk and throw it into the recycle bin of irrelevancy.

The late humanistic geographer Yi-Fu Tuan described this catharsis as ‘topophilia’: the powerful relationship between place and feeling. In his landmark work Topophilia: A Study of Environmental Perception, Attitudes, and Values, he writes:

“We are usually more touched by what we hear than what we see. The sound of rain pelting against leaves, the roll of thunder, the whistling of wind in tall grass excite us to a degree that visual imagery can seldom match… Music is for most people a stronger emotional experience than looking at pictures or scenery. Why is this? Partly, perhaps, because we cannot close our ears as we can our eyes. We feel more vulnerable to sound… it provides information of the world beyond the visual field.”

The Microsoft recycle bin sound is one of several in-product sounds designed to flesh out the ‘digital environment’ of Windows 95. Together, they form a cohesive sense of place – intentionally designed to relay signals, confirm activity, and express the relationships within the service ecosystem.

The design debt of sound

On the other hand, sound implemented without intention can seriously damage the user’s experience – and even harm their wellbeing.

The emotional impact of service soundscapes is well documented in UK public hospitals. John Drever, Professor of Acoustic Ecology at Goldsmiths, University of London, studies the sonic environment of our world and our relationship to it. He was invited by the NHS to study the effects of noise on people’s health.

Focusing on one of the busiest inpatient wards in a London hospital, he was struck by how much noise there was competing for attention: from the numerous devices (monitoring devices, ventilators, telephones, pagers), to the overlapping conversations people were having around them.

This stressful environment created substantial health risks. When staff needed to give patients medication, for example, the noise distracted some of them while reading prescriptions or administering doses. Patients, meanwhile, were not sleeping because of the noise; they became agitated with their neighbours and wanted to leave, worsening their health outcomes.

The World Health Organization recommends that sound levels in hospital wards should not exceed 35 decibels (dB), as sick people are more sensitive to noise and less able to cope with stress. For context, conversational speech registers at around 60 dB and a vacuum cleaner at 70 dB. By that measure, many hospital patients are not receiving an optimal service experience.

“[Noisy hospital wards] are the last place you want to be when you’re sick,” Professor John Drever said when we talked earlier this year. “Sound in medical environments is an ancient concept… Good old Florence Nightingale right at the beginning of nursing talked about this: ‘Unnecessary noise is the most cruel absence of care that can be inflicted on the sick or well’. She recognised the importance of quiet wards and that’s part of the healing process.”

On John’s recommendation, the hospital secured funding to install sound-absorbent materials in the ceiling to lower noise levels. However, many public hospitals face latent consequences in the absence of sound-driven design. And in many cases, sadly, they do not have the resources to mitigate them.

Continuous iteration, such as adding new machines over time, or gradually crowding patients into rooms, adds to design debt. And as design teams know, new bells and whistles can take users considerable effort to adjust to.

A busy hospital ward scene with nurses and patients talking, surrounded by sounds of beeping machines, ringing phones, and other hospital noises.

How ambient sound can restore us

“When you’re in an unfamiliar environment, like a hospital, you spend a lot of time thinking about what sounds mean,” said Hugh Huddy, a sound design consultant who’s worked closely with John, when I spoke with him. “We’re constantly trying to decode our environment and attach meaning to it.”

“However, we relax when we know what sounds are,” he added, “such as the rustling of leaves and trees. We don’t need to decode them.”

Hugh is the co-founder of Radio Lento, a public service podcast offering recordings of “diffused spatial soundscapes” – that is, sonic environments. These could range from the pleasant experience of sitting by a beach in Cornwall to an autumn walk through the woods.

Using what he calls ‘sound photography’, Hugh visits these places and captures high-quality recordings. He’s careful not to simply offer listeners piped birdsong, but all-round ambient sound (best listened to with good headphones) in all its subtle, interconnected complexity.

The long-term goal of Radio Lento is to study the effects of ambient sound on human wellbeing. Blue and green health – the benefits of being close to water and nature, respectively – are an important focus in the world of therapeutics.

For example, the sound of rain falling helps some people sleep. And the sound of rustling leaves is both nonspecific and ubiquitous: we know instinctively that it’s made of millions of leaves moving together, and we cannot pinpoint any single one, yet on a primal level it can be incredibly restorative.

“It’s very important to draw parallels between visual and auditory shapes and patterns,” says Hugh. “Take a kilt: you could say it’s just some coloured lines. But the sound of rain is also a pattern: it’s actually just contoured forms of white noise with clicks in it.”

Both examples, argues Hugh, are more than the sum of their parts. Kilts carry a lot of cultural value, while the sound of rain is a universal human experience. Both patterns tap into emotional memory.

“The design of a service not just needs to, but has to, take the dimensions of your memory into account, rather than just ignore them,” says Hugh. “That’s why people often don’t like hospitals.”

Redesigning the service environment of a hospital therefore means mitigating stressful sounds: for example, installing sound-absorbing materials, as in John’s study, or introducing more comforting sounds to build new emotional memory pathways.

In one case study, sensory delivery rooms were introduced in Denmark in 2013. These combined calming lighting, peaceful images projected onto the wall and, most notably, relaxing nature sounds in place of the loud activity typically expected.

One study showed that, as a result of these provisions, both the number of caesarean sections and the doses of anaesthesia administered fell significantly.

The power of authentic, organic sounds

We know that sound registers emotional and visceral responses. To achieve a particular response, such as reducing stress, sound needs to be designed for the right goal and context.

“There’s a missing level of bespoke sound design [in services],” says John Drever. The professor recalls a corridor in Guy’s and St Thomas’ Hospital with a window looking out onto a peaceful grassy scene. Intermittently, a speaker played a blackbird singing.

“When I first heard it, I got a bit panicky and thought: ‘gosh, a bird’s got caught inside’,” says John. “But then I realised it’s the wrong time of day. Why was I hearing a dawn chorus at, like, three o'clock in the afternoon in summer?”

Hugh Huddy describes this phenomenon as ‘plausible continuity’: the idea that to be effective, soundscapes need to be as authentic as possible. This includes the full experience of the environment, with sounds that capture the time of day, season, and local activity.

“Don’t try and create an environment,” says Hugh. “Take the environment. Convey spatialised information that you’d really hear and attach it to that actual experience. That confirms what’s happening and where you think you are.”

Increasing our spatial awareness

‘Spatialised information’ is the result of us cognitively processing the world through spatial awareness. This is our ability to be aware of our surroundings and our body’s position in relation to other objects.

In the context of physical services, this means navigating from one point to another. In digital services, spatial awareness helps users of all abilities make sense of the interface and identify where each path can take them.

“Knowing where you are is vitally important to not being confused,” says Hugh Huddy. “Reinforcing your sense of place helps users get a better experience and think more clearly. Most importantly, your mind will load up more relevant information more quickly.”

A clear auditory map reduces stress, boosts usability and strengthens our ability to engage more intuitively with a service.

Tony Charalambous, a product designer at SPARCK, recently co-developed a concept for a mobile purchasing app for a major financial client. Using sounds and haptics, the app would help visually impaired users detect the card reader.

“Dowsing machines were our inspiration, similar to sonar in submarines,” says Tony. “You’re trying to lock the users into two axes: left and right, and back and forth.”

“Sonic waves are bouncing off objects within the radius,” he continues, “so it’s essentially telling you there’s something there and how far it is based on data received on the device. Once it has that constant monotonic frequency, you know you’ve found what you’re looking for.”
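As a rough model of that feedback loop – a hypothetical sketch, not SPARCK’s actual implementation – the gap between beeps can shrink as the card reader gets closer, collapsing into a steady, monotonic tone on target:

```python
# A hypothetical sketch of sonar-style proximity feedback (not SPARCK's
# actual implementation): beeps arrive closer together as the card reader
# gets nearer, collapsing into a continuous, monotonic tone on contact.

def beep_gap_seconds(distance_m: float, max_range_m: float = 0.5) -> float:
    """Gap between beeps shrinks with distance; 0.0 means a solid tone."""
    distance_m = max(0.0, min(distance_m, max_range_m))
    if distance_m <= 0.02:                  # within ~2 cm: locked on
        return 0.0
    return 0.05 + 0.9 * (distance_m / max_range_m)

for d in (0.5, 0.3, 0.1, 0.01):
    gap = beep_gap_seconds(d)
    status = "solid tone" if gap == 0.0 else f"beep every {gap:.2f}s"
    print(f"{d * 100:>4.0f} cm -> {status}")
```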

Similar innovations will become increasingly necessary as our technology evolves. Wearable screens like the Apple Vision Pro centre on augmenting reality to create a new spatialised environment.

The device’s sound interface provides core building blocks of this environment. Take, for example: softening the end of a tone when closing an app, controlling the volume of multiple sources, pinging to confirm you’re ready to select a feature – all while mixing noise levels with sounds filtered in from the outside world.

In sound design there is a relationship between devices and apps. The sounds need to match semantically, work in harmony together, and not take the user out of their experience.

Without this, users lose essential context to their experience. Listening to well-designed soundscapes engages more visceral senses and layers more meaning.

Sonification helps us read the universe

Wanda Diaz Merced is an astronomer who studies space phenomena – think stars, galaxies, supernovas. Wanda began progressively losing her sight as a teenager, and it degraded significantly at university. Unable to read books or see the board in lecture halls, she was at a disadvantage during a critical time in her life.

Just as she was losing her sight completely, a colleague showed Wanda something life-changing: an audio recording of a solar burst. She could hear everything from the moment of eruption to when the burst ended, fading back into the familiar cosmic hum. She could interpret data about the event using sound alone.

Inspired by this, Wanda became a champion of a technique called ‘sonification’. This turns data points into sounds, using auditory representations to mimic qualities of space phenomena, with:

  • pitch to represent brightness (higher pitch means brighter objects, lower means fainter ones)
  • rhythmic patterns to represent cycles or patterns over time
  • continuous or fluctuating tones to represent stability of cosmic events.

For example, a black hole might emit gamma rays in short, intense bursts, which translate into high-pitched, staccato tones. Listening to them would tell Wanda everything she needs to know about the event.
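To make the mapping concrete, here is a minimal sonification sketch (the data and pitch range are invented for illustration; this is not Wanda’s actual pipeline). It renders a brightness series as a sequence of tones, so a short, intense burst stands out as high-pitched notes against a quieter baseline:

```python
# A minimal pitch-for-brightness sonification sketch (illustrative data and
# ranges, not Wanda's actual pipeline): each brightness sample becomes a
# short tone, and the sequence is written out as a mono WAV file.
import math
import struct
import wave

SAMPLE_RATE = 44100

def brightness_to_freq(value, lo=220.0, hi=880.0):
    """Map a normalised brightness (0..1) onto a pitch range: brighter = higher."""
    return lo + (hi - lo) * value

def render(brightness_series, note_len=0.2, path="sonified.wav"):
    """Render each brightness sample as a tone and write the result as a WAV."""
    frames = bytearray()
    for value in brightness_series:
        freq = brightness_to_freq(value)
        for i in range(int(SAMPLE_RATE * note_len)):
            sample = math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 0.5 * 32767))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

# A quiet baseline interrupted by a short, intense burst: the burst renders
# as markedly higher tones against the lower background notes.
render([0.1, 0.1, 0.9, 1.0, 0.9, 0.2, 0.1])
```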

Her work makes astronomy more accessible to scientists with visual impairments and other needs. It also offers listeners the ability to enrich their understanding of the data itself.

Each object gains a distinct auditory signature: while Jupiter has its own sound, Mars has another. A 2021 research study suggests that auditory signatures have huge potential for accessibility in digital products and services. For example, website menu options could play different tones for screen reader interactions.

Sonification provides a clear and distinctive auditory map of the digital environment. In a fitness app this could mean tracking a user’s workout intensity, triggering faster-paced background sounds. Or a celebratory jingle could signal a successfully completed workout.
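A hypothetical sketch of that fitness-app logic, with names and values invented for illustration (actual playback would be left to the app’s audio engine):

```python
# A hypothetical sketch of sonified workout feedback (names and values are
# invented for illustration): intensity selects the soundtrack tempo, and a
# jingle marks completion.

def soundtrack_bpm(intensity: float) -> int:
    """Map normalised workout intensity (0..1) onto a tempo range."""
    intensity = max(0.0, min(1.0, intensity))
    return int(90 + intensity * 80)      # 90 bpm at rest, up to 170 bpm flat out

def next_audio_cue(completed: bool, intensity: float) -> str:
    """Choose the next audio cue from the current workout state."""
    if completed:
        return "celebration_jingle"
    return f"background_loop_{soundtrack_bpm(intensity)}bpm"

print(next_audio_cue(False, 0.7))   # background_loop_146bpm
print(next_audio_cue(True, 0.0))    # celebration_jingle
```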

It also has the potential to relay behavioural insights to design and product teams. For example, a social media app might signal an increase in user activity or sign-ups through uplifting sound. And institutions like museums could use sonification to track capacity.

In all instances, sound reveals more than plain data: it associates that data with the emotional context of what’s happening. Positive sounds indicate success, while thoughtfully designed, stress-reducing sounds might flag problems.

Tony Charalambous talks about emotion as a cornerstone of human-centred design. In haptics, products engage people’s sense of touch. Physical features like vibration complement sound to strengthen the connection between the user’s physical reality and the digital service.

“People might be in public spaces using this app,” says Tony. “Stress levels tend to be quite high when you’re about to do a task and you’re still trying to navigate within your surroundings. You don’t want to add to that stress, so haptics need to match users’ states. For example, there’s a difference between the on and off button: there’s a hollow, deeper vibration for ‘off’, while the ‘on’ vibration is a little more high-pitched.”
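One way to picture that distinction is as a pair of haptic parameter sets. This is a hypothetical encoding – the real patterns are the team’s own – but it gives designers and developers something concrete to point at in a review:

```python
# A hypothetical encoding of the on/off distinction as haptic parameters
# (names and values invented for illustration; a real app would hand these
# to the platform's haptics engine).
HAPTIC_PATTERNS = {
    "on":  {"frequency_hz": 230, "amplitude": 0.6, "duration_ms": 40},  # crisper, 'higher-pitched'
    "off": {"frequency_hz": 80,  "amplitude": 0.8, "duration_ms": 90},  # hollow, deeper
}
```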

Use ‘aural diversity’ to make sound accessible

John Drever coined the term “aural diversity” to describe the need to design for different hearing needs and abilities.

“We’re designing for people with the hearing of an 18-year-old,” says John. “But the fact is we all hear differently. Some people have very sensitive hearing, some have hearing loss. In others, that hearing loss is asymmetrical – only in one ear. Some people have tinnitus and misophonia, where sounds trigger uncomfortable emotional responses. And all these different kinds of sensitivities to sound have been ignored historically.”

Some applications use high-frequency sound to represent data; for people with tinnitus, this can be genuinely painful. In another example, crisps are carefully designed to create a ‘crunch’ sound when you bite into them. For people who experience autonomous sensory meridian response (ASMR) – a pleasant tingle caused by certain sounds – this can be wonderful. For people with misophonia, it might be intolerable.

John relates this lack of understanding of aural diversity to an experience with his infant son. Inside a public toilet, after washing his hands, he placed them inside one of the high-speed hand driers. Air blasted out at 100 miles per hour, creating an intensely loud noise that terrified his child.

“This made no sense,” says John. “Why have this extremely loud device in a peaceful environment like a toilet? I started talking to people with autism and learned that they had huge problems with access to public spaces, because toilets were too loud – even disabled toilets! Toilets are very reverberant spaces; they just make things sound louder. The sound doesn’t get absorbed, it just bounces around and builds up.”

Legally, these hand driers are allowed to operate because noise regulations are based on the threshold of damage (hearing loss). But they cross the threshold of comfort. The result is a sonic environment that is uncomfortable for many users, and downright unusable for others.

Finding sounds that are ‘universal’ is difficult, which is why human-centred design can help identify and create sonic environments that incorporate aural diversity and work for as many people as possible.

How to use sound in the design process

So, if sound registers an emotional and visceral response, and we know these responses can enrich the way we interact with services, where do design teams begin?

“Our language is very imprecise and untethered when it comes to talking about our hearing experience,” says Hugh Huddy. “If you’re unable to express if a sound is working or not, you’re not going to be able to design for it. How do we talk about sound in design? What’s its relevance? What role should it play? We need to start having these conversations and getting comfortable with the concept of sound-driven design.”

Studies reveal a major semantic gap in designing with sound. Different roles in a product team interpret sound in their own ways, leaving teams without a universal language for discussing it with stakeholders.

It is challenging to describe sound, and what it means, with such a limited vocabulary – a challenge compounded by a lack of investment and skills in sound design tools.

To overcome this, experts recommend teams do the following:

  • Develop a shared vocabulary of terms and definitions of sound that can be understood across practices – know your ‘beeps’ from your ‘pings’ (see the sketch after this list).
  • Make associations with sound to familiar references. For example, an error message ‘thud’ could be described as a ‘stop sign’ sound.
  • Provide education and training in the potential of sound in design to stakeholders across disciplines.
  • Develop interactive, multimodal prototypes, combining visual and auditory elements, that allow stakeholders to experience sound in a tangible way.
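As a starting point for the first of these recommendations, a shared vocabulary can be as concrete as a small lookup table. The sketch below is purely illustrative – the event names, references and parameters are invented – but it shows how semantic sounds could become something a whole team can point to and discuss:

```python
# A minimal sketch of a shared sound vocabulary (all names and values are
# illustrative, not an established standard): each semantic event maps to a
# familiar reference plus rough sonic parameters that designers, developers
# and stakeholders can all discuss in the same terms.
SOUND_VOCABULARY = {
    "confirm": {
        "familiar_reference": "a till receipt tearing off",
        "character": "short, bright ping",
        "pitch": "high",
        "duration_ms": 150,
    },
    "error": {
        "familiar_reference": "a stop sign",
        "character": "dull thud",
        "pitch": "low",
        "duration_ms": 250,
    },
    "progress": {
        "familiar_reference": "footsteps down a corridor",
        "character": "soft repeating tick",
        "pitch": "mid",
        "duration_ms": 80,
    },
}

def describe(event: str) -> str:
    """Turn a semantic event into a sentence a stakeholder can react to."""
    s = SOUND_VOCABULARY[event]
    return (f"'{event}' sounds like {s['familiar_reference']}: "
            f"a {s['character']} ({s['pitch']} pitch, ~{s['duration_ms']} ms)")

print(describe("error"))
```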

By using these strategies in combination with user-centred design principles, teams can create more immersive, sound-driven digital environments.

This has the potential to enrich the experience for all users by engaging more of our senses and, in turn, our humanity.