Device that Analyzes Speaker’s Emotional State

Technology #12864

Researchers
Rahul Shrivastav
Sona Patel
Managed By
Lenny Terry
Assistant Director, 352-392-8929
Patent Protection
US Patent 8,788,270

Analyzes Acoustic Signals to Effectively Determine Speaker’s Emotional State

This device analyzes the emotional state of a speaker throughout a conversation, including conversations held over the phone. The technology has widespread application, from gaming and virtual reality to health testing and monitoring. One application is call centers, an $18 billion industry in the United States alone. Many companies that operate call centers have found it difficult to track the performance of their employees, and therefore have been unable to identify which employees are most effective at keeping customers happy. Researchers at the University of Florida have created tools that use the acoustic patterns of speech to track and classify a speaker's emotions into a small number of dimensions or categories, such as happy, content, sad, angry, anxious, and bored. Call center companies can use this device to determine which of their employees are most successful at maintaining a positive interaction throughout customer calls.

Application

Software tools that extract acoustic patterns from voice recordings to determine the emotion of a speaker or conversation

Advantages

  • Provides greater accuracy in classifying emotion in speech using unique listener-based models
  • Allows measurement of changes in the magnitude or intensity of an emotion (e.g., very sad vs. sad)
  • Allows measurement of changes in emotion over time, enhancing existing voice recognition software

Technology

These tools work by measuring one or more acoustic characteristics of a speech utterance against a sophisticated model of those characteristics. By comparing the utterance to the model, the tools determine the emotional state of the speaker. Models can be tuned to mimic how listeners would respond in a specific application, or can be predetermined from an existing database of recorded speech. The tools can classify a speaker's emotional state into up to six emotion categories and can monitor how that state changes throughout a conversation. Beyond identifying which emotions are present, the tools measure the level of emotion, distinguishing, for example, a sad speaker from a very sad one. The tools can be hosted locally or in the cloud, allowing rapid, scalable deployment for specific applications.
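The comparison described above can be illustrated with a minimal sketch. The feature names, reference values, and nearest-centroid matching below are hypothetical stand-ins — the patent's actual acoustic measures and listener-based models are not specified here — but the shape of the computation (measure an utterance's features, compare against per-emotion references, repeat per utterance to track change over a conversation) follows the description:

```python
import math

# Hypothetical listener-based reference model: a mean feature vector per
# emotion category (pitch mean in Hz, pitch variability, relative energy,
# speaking rate in syllables/sec). A real model would be trained on
# listener-rated speech; these numbers are illustrative only.
EMOTION_CENTROIDS = {
    "happy":   [220.0, 45.0, 0.80, 5.2],
    "content": [190.0, 25.0, 0.60, 4.5],
    "sad":     [160.0, 12.0, 0.35, 3.2],
    "angry":   [230.0, 55.0, 0.95, 5.8],
    "anxious": [210.0, 40.0, 0.70, 6.0],
    "bored":   [170.0, 10.0, 0.40, 3.8],
}

def classify_emotion(features):
    """Return (emotion, distance) for the nearest reference vector.

    The distance to the matched centroid can serve as a crude intensity
    proxy: an utterance far from every centroid except "sad" but close
    to an exaggerated "sad" profile would register as more strongly sad.
    """
    best, best_dist = None, float("inf")
    for emotion, centroid in EMOTION_CENTROIDS.items():
        dist = math.dist(features, centroid)  # Euclidean distance
        if dist < best_dist:
            best, best_dist = emotion, dist
    return best, best_dist

def track_conversation(utterance_features):
    """Classify each utterance in turn, yielding an emotion trajectory."""
    return [classify_emotion(f)[0] for f in utterance_features]
```

For example, a conversation whose first utterance has low pitch, low energy, and a slow rate followed by a brighter, faster utterance would yield a trajectory like `["sad", "happy"]`, capturing the change in emotional state over time that the tools are described as monitoring.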