
Configuring complex audio use cases with WISCE and Android


Feature articles |
By eeNews Europe



Even if you think about audio alone, a smartphone has to support many different use cases, each of which requires a different combination of links between components.

Take the most basic use case as an example – the humble phone call. The voice signal comes in from the microphone on the handset, is digitised and sent to the modem, and from there to the mobile phone network. The return voice comes across the network, is decoded, converted back to analogue and sent to the ear speaker where the user hears it.

But maybe the user has a headset plugged in, in which case the voice signals in both directions need to go through the headphone jack. They could have a Bluetooth headset – now the signals need to be routed through the Bluetooth radio instead.

But modern smartphones are more intelligent than that. The outgoing voice signal is cleaned up to remove background noise, making it more audible. Processing on the received signal counteracts the background noise, allowing users to hear the person they are talking to over the street noise.

Both of these need another microphone to monitor the background noise so it can be removed. Dynamic range compressors boost quiet parts so they can be heard above the background noise. Further filtering removes harmonics that excite resonances in the case, which might otherwise cause unwanted buzzing.
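As a rough illustration of the dynamic range compression just mentioned, the static gain curve below boosts quiet passages while reining in loud ones. The threshold, ratio and makeup-gain figures are purely illustrative, not taken from any real handset tuning.

```python
def compressor_gain_db(level_db, threshold_db=-30.0, ratio=4.0, makeup_db=6.0):
    """Static compressor curve: below the threshold the signal passes
    unchanged; above it, the output level rises at only 1/ratio.
    Makeup gain then lifts everything, so quiet parts end up louder.
    Returns the gain (in dB) to apply to a signal at level_db."""
    if level_db <= threshold_db:
        out_db = level_db
    else:
        out_db = threshold_db + (level_db - threshold_db) / ratio
    return out_db + makeup_db - level_db
```

With these example figures, a quiet -40dB passage is boosted by 6dB while a loud -6dB peak is pulled down by 12dB, squeezing the dynamic range into a span a small speaker can reproduce over street noise.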

So many possibilities, and we’ve only looked at a single-person phone call. Bring in speakerphone and there is further processing to detect and focus on the main speaker while filtering out background noise, a technique known as beamforming. This works more effectively with a third or fourth microphone, bringing in yet more signals. Or what about music playback – now the audio comes off the SD card on the phone, via the Applications Processor, before being decoded and played for the user.


Connecting up the audio

For many years, smartphones have used an Audio Hub to manage all the routing and connections. The different audio signals come into the Audio Hub, which contains a series of mixers and multiplexers (muxes) to connect the signals from one place to another. Until recently, there have been maybe twenty different routing components, each with only a handful of options, so they were set up by selecting the appropriate option from a list, or even just looking up the correct value in the datasheet and typing it into the register.

As smartphones have become more complex, the number of options has increased exponentially, requiring a complete change in routing paradigm.

No longer is it sufficient to have a few standard routes and select between them. Now the model is much more like an old-fashioned telephone switchboard, where blocks can take their input from any signal available. Suddenly the number of options for each register field has increased by an order of magnitude, from four to 62, making the routing much harder to work with, much harder to visualise and much easier to get wrong without realising.
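At register level, each routing block’s source is just a bit field holding one of those source-select codes. A minimal sketch, assuming a hypothetical 7-bit field (enough to hold the 62 codes) – the real field positions and widths come from the device datasheet:

```python
FIELD_WIDTH = 7          # 7 bits give 128 possible codes; 62 are in use

def set_source(reg_value, source, shift=0, width=FIELD_WIDTH):
    """Write a source-select code into a routing register word.
    The shift/width layout here is hypothetical, for illustration only."""
    mask = ((1 << width) - 1) << shift
    return (reg_value & ~mask) | ((source << shift) & mask)

def get_source(reg_value, shift=0, width=FIELD_WIDTH):
    """Read the source-select code back out of the register word."""
    return (reg_value >> shift) & ((1 << width) - 1)
```

With 62 valid codes per field, and many such fields per device, it is easy to see why hand-editing raw register values no longer scales.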

To manage this complexity and make sense of all the options, you need a graphical tool where you can draw out your routes, such as Wolfson’s WISCE interface. For example, figure 1 shows a simple route possibly representative of a phone call.

Fig. 1: A simple phone call route.

We have a stereo signal coming in from IN2, plus a noise signal from the microphone attached to IN3L. The signals are processed on the DSP core to remove ambient noise, then sent to the baseband via Audio Interface AIF2. The far-end voice comes in on AIF2 and is shaped by Equaliser blocks before being output to the headphones on OUT4. The local voice signal is also attenuated and mixed into the headphone output as a sidetone.

This route involves writes to 11 of the nearly 200 register fields dedicated to signal routing, selecting the appropriate one of the 62 options in each. WISCE tracks the writes which have been made in a history, which can be saved to a file for later reference – either loaded back into WISCE or used when setting up the driver on the end product.


Now imagine you have made a mistake, and the right headphone has been connected to the unused EQ3 instead of the intended EQ2. This is an easy typo to make, but you would only hear half the signal. Working through all the register fields to spot the one set to EQ3 instead of EQ2 would be laborious in the extreme.

Fig. 2: A phone call route with a mistake highlighted by WISCE.

With WISCE, it becomes obvious. From figure 2, you can immediately see the break in the chain – the input to the right channel of OUT4 is connected to a floating block (EQ3) instead of to the right-channel equaliser (EQ2) as it should be. The graphical representation makes it much easier to debug.
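The check WISCE performs visually can be thought of as tracing each output back through the routing table until a genuine signal source is reached. A simplified sketch, assuming one input per block, with names loosely following the figure (a real part’s register names differ):

```python
# Each block maps to the block it takes its input from.
routes = {
    "DSP":    "IN2L",    # near-end voice into the noise-reduction DSP
    "AIF2TX": "DSP",     # cleaned-up voice out to the baseband
    "EQ2":    "AIF2RX",  # far-end voice shaped by the equaliser
    "OUT4L":  "EQ2",     # left headphone: correct
    "OUT4R":  "EQ3",     # right headphone: the typo - should be EQ2
}

SOURCES = {"IN2L", "IN3L", "AIF2RX"}  # terminals that actually carry signal

def trace(block, routes):
    """Follow a block's input chain back; return where it terminates."""
    while block in routes:
        block = routes[block]
    return block

broken = [out for out in ("OUT4L", "OUT4R") if trace(out, routes) not in SOURCES]
```

Here `broken` flags OUT4R, whose chain dead-ends at the floating EQ3 – the same break the graphical view exposes at a glance.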

Software processing blocks

As well as connecting up hardware components, a particular use case may involve some run-time loadable software effects, running on a Digital Signal Processing (DSP) core as part of the CODEC or on a separate dedicated chip. When the use case is loaded, the operating system may need to load a firmware image to the DSP core to get the appropriate effect. This can take a while – sometimes on the order of hundreds of milliseconds – so the operating system will try to avoid changing firmwares where possible.

Sometimes, however, it is unavoidable – for example, switching from a handset phone call to speakerphone may involve replacing the ambient noise reduction designed for a headset with a beamforming algorithm which tracks the current speaker.

In Android this switching of firmwares is typically handled by a use case manager, specifying any firmware images required and where they should be loaded. The use case manager will track which firmware image is currently loaded on the core. If the use case calls for the firmware currently on the core, there’s no need to reload it. If, however, the use case calls for a different firmware it will have to be changed.
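That caching behaviour boils down to a few lines. The sketch below captures only the gist – the invented `load_firmware` callback stands in for the slow DSP download, and this is not the actual Android use case manager code:

```python
class UseCaseManager:
    """Tracks which firmware image is on the DSP core and only triggers
    the slow reload when a use case actually needs a different one."""

    def __init__(self, load_firmware):
        self._load = load_firmware   # the expensive download: 100s of ms
        self._current = None         # nothing loaded yet

    def set_use_case(self, firmware):
        if firmware != self._current:
            self._load(firmware)
            self._current = firmware
        # else: the right image is already on the core - nothing to do
```

Switching from one handset call to another costs nothing; switching from handset call to speakerphone pays for exactly one firmware load.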


Tuning for the best sound

Connecting up the modules is only the first step. They then need to be configured for the best sound. The small form factors of smartphones mean compromises have to be made in terms of speakers and their location. This often means they boost certain frequencies more than others. In figure 3, the top plot shows the measured frequency response of the speaker. You can see how the speaker has a slight peak at about 900Hz and another, higher peak at around 3800Hz. These peaks correspond to resonances in the case and would be heard as buzzing.

Fig. 3: Five band EQ compensation for speakers.

A five-band parametric EQ with three band-pass filters has been used to tune the audio output to compensate for these peaks, making the overall response more linear across the 800Hz-4kHz range. The frequency response of the EQ is shown in the middle plot, and its effect on the speaker output in the bottom plot.
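A parametric EQ band of this kind is typically a biquad peaking filter; the standard formulation from Robert Bristow-Johnson’s audio EQ cookbook is sketched below. The sample rate, Q and gain figures are illustrative only – the real values come out of the tuning process.

```python
import cmath, math

def peaking_eq(fs, f0, gain_db, q):
    """Biquad peaking-EQ coefficients (RBJ audio-EQ cookbook form).
    A negative gain_db cuts a resonance; the gain at f0 is exactly gain_db."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return b, a

def magnitude_db(b, a, f, fs):
    """Filter response in dB at frequency f."""
    z1 = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z1 + b[2] * z1 * z1) / (a[0] + a[1] * z1 + a[2] * z1 * z1)
    return 20 * math.log10(abs(h))

# e.g. a 6dB cut centred on a 900Hz case resonance, at a 48kHz sample rate:
b, a = peaking_eq(48000, 900, -6.0, 2.0)
```

One such band per resonance, combined into the five-band EQ, produces a compensating response like the one in the middle plot of figure 3, while leaving frequencies far from the peaks untouched.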

Figure 3 also shows how the performance tails off below 800Hz as would be expected for a small speaker. This might be compensated for with some bass boost, or with psycho-acoustic tricks to fool the ear into thinking the lower frequencies are actually there, but realistically it’s difficult to get a good bass response from a small speaker.

The response of the speakers and microphones, and any resonances, are critically sensitive to the shape and composition of the case and the acoustic chambers within the phone, so tuning really needs to be done on the assembled handset. On the other hand, controlling the settings on the chip requires access to its control interfaces, which a finished handset does not normally expose – making accurate tuning challenging.

When creating our settings on an evaluation board, we are not taking the real design of the phone into account. We can simulate the speaker and microphone responses by playing our audio through a tool like Matlab, but a recording may miss subtle aspects of the real acoustics, leading to a suboptimal tuning.

If we bore through the case to introduce wires to access test points or control interfaces, we change the acoustic properties of the phone, so the settings we come up with will not be appropriate for the final handset without the holes.

Another option is to set the phone up with a particular configuration, run the test and record it. The recording can then be analysed to suggest changes, the phone configuration can be updated and the test run again. This makes for slow testing, and discourages the engineer from trying too many tuning options.

Ideally, the tuning would be done on the real handset without modification, controlled interactively from the tuning/configuration tool using a remoting technology such as Wolfson’s WISCEBridge. A PC running WISCE communicates with a WISCEBridge server over TCP/IP. It sends configuration commands and queries to the server, which updates the device or returns its current settings.
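At heart this is a plain TCP request/response exchange. The sketch below shows the shape of such a client; the line-oriented wire format used here (a “READ &lt;addr&gt;” line answered with a hex value) is invented for illustration and is not the actual WISCEBridge protocol.

```python
import socket

def read_register(sock, address):
    """Query one register over a WISCEBridge-style TCP link.
    The wire format ('READ <addr>' answered by a hex value) is a
    made-up stand-in, not the real WISCEBridge protocol."""
    sock.sendall(f"READ {address:#06x}\n".encode())
    reply = b""
    while not reply.endswith(b"\n"):
        reply += sock.recv(64)      # accumulate until end of line
    return int(reply.strip(), 16)
```

Because the transport is just a socket, the same client code works whether the server sits at the far end of an Ethernet cable, a Wi-Fi link or a forwarded USB connection.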

The simple protocol and use of TCP/IP mean it can be implemented and deployed in a wide range of form-factor products. There is a version which runs on Linux (and hence Android) and communicates with the operating system to configure the device. This can run over any connection, even Wi-Fi, enabling completely wire-free tuning. There is also a version which communicates with an Android device over ADB, the Android Debug Bridge, requiring just a USB cable and nothing unusual installed on the device.

About the author

Ian Brockbank is Software Tools Manager at Wolfson Microelectronics – www.wolfsonmicro.com – He can be reached at Ian.Brockbank@wolfsonmicro.com
