Marsview’s Tone Analyzer API service has a unique capability that uses acoustic and linguistic analysis to determine the speaker’s emotions. Sometimes it’s not about what you say, but rather the way you say it.
Use the acoustic voice-tone spectrogram to detect the speaker’s emotion as neutral, calm, happy, sad, angry, fearful, disgusted, or surprised.
Use linguistic analysis to detect emotional and language tones in written text, such as anger, anticipation, disgust, fear, joy, love, optimism, pessimism, sadness, surprise, and trust.
Determine the sensitivity level of the conversation based on the topic and the phrases used.
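As an illustration only, a response from such a tone analysis might be parsed as below. The field names and score format here are assumptions for the sketch, not Marsview’s documented schema:

```python
# Hypothetical Tone Analyzer response. Field names and scores are
# illustrative assumptions, not Marsview's documented schema.
response = {
    "acoustic_tones": {"neutral": 0.10, "happy": 0.65, "sad": 0.05, "angry": 0.20},
    "linguistic_tones": {"joy": 0.70, "optimism": 0.55, "anger": 0.10},
    "sensitivity": "low",
}

def dominant_tone(scores):
    """Return the tone label with the highest confidence score."""
    return max(scores, key=scores.get)

print(dominant_tone(response["acoustic_tones"]))    # prints "happy"
print(dominant_tone(response["linguistic_tones"]))  # prints "joy"
```

A client would typically act on the dominant acoustic and linguistic tones together, since each captures a different signal.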
During support conversations with customers, understanding the customer’s tone helps agents respond appropriately or seek help. You can also see whether customers are satisfied or frustrated, and whether agents are polite and sympathetic. Enable your chatbot or voice assistant to detect customer tones so you can build dialog strategies that adjust the conversation accordingly.
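One such dialog strategy can be sketched as a simple routing hook: escalate to a human agent when a negative tone is detected with enough confidence. The tone labels, threshold, and action names below are assumptions for illustration:

```python
# Hypothetical dialog-strategy hook: route the conversation based on the
# detected customer tone. Labels, threshold, and actions are assumptions.
ESCALATION_TONES = {"angry", "fearful", "sad"}

def next_action(tone, confidence, threshold=0.5):
    """Decide how a bot should react to the detected customer tone."""
    if confidence < threshold:
        return "continue"           # low confidence: keep the normal flow
    if tone in ESCALATION_TONES:
        return "escalate_to_agent"  # frustrated customer: seek human help
    return "continue"

print(next_action("angry", 0.8))  # prints "escalate_to_agent"
print(next_action("happy", 0.9))  # prints "continue"
```

The confidence threshold keeps the bot from escalating on weak or ambiguous signals.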
First, the API analyzes the spectrogram of the speaker’s voice to determine “how” something was said; a text classifier then detects “what” was said. Combining this information, the appropriate emotion is returned.
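The two-stage flow above can be sketched as a late fusion of the acoustic and text classifier outputs. The weights, labels, and scores here are made-up assumptions, not Marsview’s actual model:

```python
# Illustrative late fusion of the "how it was said" (acoustic) and
# "what was said" (text) stages. Weights and scores are assumptions.
ACOUSTIC_WEIGHT = 0.6  # assumed: acoustic cues weighted more heavily
TEXT_WEIGHT = 0.4

def fuse_emotions(acoustic_scores, text_scores):
    """Combine per-emotion scores from both classifiers; return the winner."""
    labels = set(acoustic_scores) | set(text_scores)
    fused = {
        label: ACOUSTIC_WEIGHT * acoustic_scores.get(label, 0.0)
               + TEXT_WEIGHT * text_scores.get(label, 0.0)
        for label in labels
    }
    return max(fused, key=fused.get)

emotion = fuse_emotions(
    {"angry": 0.7, "neutral": 0.2, "sad": 0.1},   # spectrogram classifier
    {"angry": 0.3, "sadness": 0.5, "fear": 0.2},  # text classifier
)
print(emotion)  # prints "angry": 0.6*0.7 + 0.4*0.3 = 0.54 beats all others
```

Fusing the two signals lets the strong acoustic evidence outweigh a weaker or conflicting text reading.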
Marsview’s conversation self-service API platform offers a comprehensive suite of proprietary APIs and developer tools for automatic speech recognition, speaker separation, multi-modal emotion and sentiment recognition, intent recognition, time-sequenced visual recognition, and more. It is designed for demanding Contact Center AI (CCAI) environments that handle millions of outbound and inbound sales and support calls. Marsview APIs provide end-to-end workflows spanning call listening, recording, insight generation, and Voice of Customer insights. The Conversation APIs are also used in one-on-one through many-to-many conversations and meetings to automatically generate rich contextual feedback, key topics, moments, actions, Q&A, and summaries.