Researchers in the UK have used wireless signals and machine learning to remotely detect moods.
The team at Queen Mary University of London collected heartbeat and breathing signals from 15 participants by measuring radio frequency (RF) reflections off the body, followed by novel noise-filtering techniques.
The key to interpreting the wireless data was a new deep neural network (DNN) architecture that fuses the raw RF data with the processed RF signal to classify and visualise different emotional states.
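The fusion idea can be sketched in miniature: features from the 1D raw-RF branch and the 3D processed-RF branch are concatenated before a final classification layer. The sketch below is purely illustrative (random weights stand in for learned layers; the shapes and branch functions are assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_raw(x):
    """Toy 'raw RF' branch: a random linear projection standing in
    for learned 1D-convolutional features (weights are illustrative)."""
    w = rng.standard_normal((x.size, 16))
    return np.tanh(x @ w)

def branch_scalogram(img):
    """Toy 'processed RF' branch: global average pooling over a
    (width, height, channels) scalogram image, then a projection."""
    pooled = img.mean(axis=(0, 1))            # one value per channel
    w = rng.standard_normal((pooled.size, 16))
    return np.tanh(pooled @ w)

def fused_logits(x, img, n_classes=4):
    """Fuse both branches by concatenation, then one dense layer over
    the four emotion classes (anger, sadness, joy, pleasure)."""
    features = np.concatenate([branch_raw(x), branch_scalogram(img)])
    w = rng.standard_normal((features.size, n_classes))
    return features @ w

x = rng.standard_normal(128)            # 1D raw RF reflection window
img = rng.standard_normal((64, 64, 3))  # 3D scalogram image
print(fused_logits(x, img).shape)       # (4,) — one score per emotion
```

In a real network the two branches would be trained jointly, so the classifier can weigh raw and processed views of the same reflection against each other.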
The model achieved a classification accuracy of 71.67% on independent subjects, and it outperformed five other machine learning algorithms when given only limited amounts of raw RF and post-processed time-sequence data. The deep learning model was also validated by comparing its results with signals from ECG heart monitors.
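A subject-independent accuracy figure implies the model was tested on participants excluded from training. A standard protocol for this is leave-one-subject-out cross-validation, sketched below with hypothetical data and a trivial stand-in classifier (nothing here comes from the paper itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dataset: 15 subjects, 8 RF windows each, 32 features.
subjects = np.repeat(np.arange(15), 8)
X = rng.standard_normal((subjects.size, 32))
y = rng.integers(0, 4, size=subjects.size)   # 4 emotion classes

accuracies = []
for held_out in np.unique(subjects):
    train = subjects != held_out             # train on 14 subjects
    test = ~train                            # evaluate on the 15th
    # Stand-in "model": predict the majority class of the training set.
    majority = np.bincount(y[train]).argmax()
    accuracies.append(np.mean(y[test] == majority))

print(f"subject-independent accuracy: {np.mean(accuracies):.2%}")
```

The point of the split is that no window from the held-out person ever reaches the training set, so the reported accuracy reflects generalisation to unseen individuals rather than memorisation of one person's signal.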
The study, published in the journal PLOS ONE, asked participants to watch a video selected by the researchers for its ability to evoke one of four basic emotion types: anger, sadness, joy and pleasure.
Previous research has used similar non-invasive or wireless methods of emotion detection; however, in those studies data analysis depended on classical machine learning approaches, where an algorithm is used to identify and classify emotional states within the data. For this study the scientists instead employed deep learning techniques, in which an artificial neural network learns its own features from time-dependent raw data, and showed that this approach could detect emotions more accurately than traditional machine learning methods.
“Deep learning allows us to assess data in a similar way to how a human brain would work looking at different layers of information and making connections between them. Most of the published literature that uses machine learning measures emotions in a subject-dependent way, recording a signal from a specific individual and using this to predict their emotion at a later stage,” said researcher Achintha Avin Ihalage.
“With deep learning we’ve shown we can accurately measure emotions in a subject-independent way, where we can look at a whole collection of signals from different individuals and learn from this data and use it to predict the emotion of people outside of our training database.”
A pair of Vivaldi-type antennas operating at 5.8GHz formed the radar: one antenna transmitted the RF signal towards the body, while the second received the RF reflections off the body at a distance of around 30cm. Both antennas were connected by coaxial cables to an N5230C programmable vector network analyser from Agilent Technologies. A laptop was used to play the videos, and participants wore headphones so that they could focus on the audio.
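In a setup like this, chest motion modulates the phase of the reflection arriving at the receive antenna, and a breathing rate can be read off the phase spectrum. The simulation below illustrates the principle only; the sample rate, breathing rate and displacement amplitude are assumptions, not figures from the paper:

```python
import numpy as np

fs = 100.0                      # sample rate of the measurement, Hz (assumed)
t = np.arange(0, 32, 1 / fs)    # 32 s of data
wavelength = 3e8 / 5.8e9        # carrier wavelength at 5.8 GHz, ~5.2 cm

# Chest displacement: 5 mm breathing motion at 0.25 Hz (15 breaths/min).
displacement = 0.005 * np.sin(2 * np.pi * 0.25 * t)

# The round trip to the body and back shifts the reflection phase
# by 4*pi*d/lambda; add a little measurement noise.
phase = 4 * np.pi * displacement / wavelength
noise = 0.05 * np.random.default_rng(2).standard_normal(t.size)
reflection = np.exp(1j * phase) + noise

# Recover the breathing rate: FFT of the unwrapped phase, take the peak.
unwrapped = np.unwrap(np.angle(reflection))
spectrum = np.abs(np.fft.rfft(unwrapped - unwrapped.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(f"estimated breathing rate: {freqs[spectrum.argmax()]:.2f} Hz")
```

Because the wavelength is only a few centimetres, millimetre-scale chest motion produces a phase swing of over a radian, which is why such small movements are detectable at all.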
All this took place in an anechoic chamber to reduce any interfering noise from the external environment that might alter a participant's emotions during the experiment. However, the work points to the potential to use reflections from local WiFi at 5GHz, or at 6GHz with WiFi 6E, to take such measurements. This in turn raises a number of ethical issues.
The DNN architecture processed the time-domain wireless signal (the RF reflections off the body) alongside the corresponding frequency-domain version obtained by continuous wavelet transformation (CWT). Here, the RF reflection signal is one-dimensional (1D), while the CWT output is a three-dimensional (3D) image in the format (width, height, channels).
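That time-to-frequency step can be illustrated with a minimal Morlet continuous wavelet transform written directly in NumPy. The signal, scale range and channel stacking below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet.
    Returns a (len(scales), len(signal)) scalogram magnitude."""
    n = signal.size
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Sample the Morlet wavelet at this scale, normalised by 1/sqrt(s).
        tt = np.arange(-4 * s, 4 * s + 1)
        wavelet = (np.exp(1j * w0 * tt / s)
                   * np.exp(-0.5 * (tt / s) ** 2) / np.sqrt(s))
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out

# A toy 'heartbeat + breathing' trace: 1.2 Hz and 0.25 Hz components.
fs = 50.0
t = np.arange(0, 20, 1 / fs)
trace = np.sin(2 * np.pi * 1.2 * t) + np.sin(2 * np.pi * 0.25 * t)

scales = np.arange(4, 68)          # 64 scales -> the image height
scalogram = morlet_cwt(trace, scales)

# Stack into a (width, height, channels) image as the article describes.
image = np.repeat(scalogram.T[:, :, None], 3, axis=2)
print(image.shape)                 # (1000, 64, 3)
```

Each row of the scalogram tracks how strongly one frequency band responds over time, which is what turns a 1D vital-sign trace into an image a convolutional branch can consume.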
“Being able to detect emotions using wireless systems is a topic of increasing interest for researchers as it offers an alternative to bulky sensors and could be directly applicable in future ‘smart’ home and building environments. In this study, we’ve built on existing work using radio waves to detect emotions and show that the use of deep learning techniques can improve the accuracy of our results,” said Ahsan Noor Khan.
“We’re now looking to investigate how we could use low-cost existing systems, such as WiFi routers, to detect the emotions of a large number of people gathered together, for instance in an office or work environment. This type of approach would enable us to classify the emotions of people on an individual basis while they perform routine activities. Moreover, we aim to improve the accuracy of emotion detection in a work environment using advanced deep learning techniques.”
Professor Yang Hao, the project lead, added: “This research opens up many opportunities for practical applications, especially in areas such as human/robot interaction and healthcare and emotional wellbeing, which has become increasingly important during the current Covid-19 pandemic.”
The researchers now plan to work with healthcare professionals and social scientists on public acceptance and the ethical concerns around the use of this technology.