Emotion & Tone
Enable this model configuration to analyze a speaker's tone from the audio (acoustic analysis) and emotions from the spoken text (lexical emotion analysis).

Overview

Emotion Analysis

The Emotion Analysis model helps you understand and interpret speaker emotions in a conversation or text. It is designed to understand human conversation in the form of free text or spoken text, and is modeled after the emotion wheel.
The emotion wheel describes eight basic emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, and trust.

Emotion Types

Enabling this model configuration in the Speech Analytics API detects the following emotion types:
Admiration, Amusement, Anger, Annoyance, Approval, Caring, Confusion, Curiosity, Desire, Disappointment, Disapproval, Disgust, Embarrassment, Excitement, Fear, Gratitude, Grief, Joy, Love, Nervousness, Optimism, Pride, Realization, Relief, Remorse, Sadness, Surprise, Neutral

Tone Analysis

Tone Analysis infers speaker emotion using only audio cues. A speaker may reveal emotion in the tone of a response, and capturing this is important for gauging the overall sentiment/mood of the conversation, which cannot be extracted by conventional lexical emotion analysis.
Marsview's proprietary Tone Analysis AI can detect intonation in the speaker's tone down to the statement level.

Types of Tone

Marsview is capable of detecting the following tones in an audio file:
Calm, Happy, Sad, Angry, Fearful, Disgust, Surprised

modelTypeConfiguration

| Keys | Value |
| --- | --- |
| modelType | emotion_analysis |
| modelConfig | Model Configuration object for emotion_analysis (no configurations) |
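In Python terms, since emotion_analysis takes no model configuration, its entry in the enableModels list is just the modelType key. A minimal sketch of a payload fragment (the speech_to_text entry is included because Emotion Analysis depends on its output):

```python
import json

# emotion_analysis takes no modelConfig; it is enabled by modelType alone.
emotion_model_entry = {"modelType": "emotion_analysis"}

payload_fragment = {
    "enableModels": [
        # speech_to_text must also be enabled; emotion_analysis consumes its output.
        {"modelType": "speech_to_text", "modelConfig": {"automatic_punctuation": True}},
        emotion_model_entry,
    ]
}
print(json.dumps(payload_fragment, indent=2))
```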

Example Request

Curl
Python
curl --location --request POST 'https://api.marsview.ai/cb/v1/conversation/compute' \
--header 'Content-Type: application/json' \
--header "Authorization: {{Insert Auth Token With Type}}" \
--data-raw '{
    "txnId": "{{Insert txn ID}}",
    "enableModels":[
        {
            "modelType":"speech_to_text",
            "modelConfig":{
                "automatic_punctuation": true,
                "custom_vocabulary": ["Marsview", "Communication"],
                "speaker_seperation":{
                    "num_speakers": 2
                },
                "enableKeywords": true,
                "enableTopics": false
            }
        },
        {
            "modelType":"emotion_analysis"
        }
    ]
}'
import requests

auth_token = "replace this with your auth token"
txn_id = "replace this with your txn id"
request_url = "https://api.marsview.ai/cb/v1/conversation/compute"

# Note: Emotion Analysis is dependent on the output from the Speech-to-Text model,
# hence both models need to be given in the request for this to work.
def get_emotion_and_tone():
    payload = {
        "txnId": txn_id,
        "enableModels": [
            {
                "modelType": "speech_to_text",
                "modelConfig": {
                    "automatic_punctuation": True,
                    "custom_vocabulary": ["Marsview", "Communication"],
                    "speaker_seperation": {
                        "num_speakers": 2
                    },
                    "enableKeywords": True,
                    "enableTopics": False
                }
            },
            {
                "modelType": "emotion_analysis"
            }
        ]
    }
    headers = {"authorization": auth_token}

    response = requests.request("POST", request_url, headers=headers, json=payload)
    print(response.text)
    if response.status_code == 200 and response.json()["status"] == "true":
        return response.json()["data"]["enableModels"]["state"]["status"]
    else:
        raise Exception("Compute request failed: {}".format(response.text))

if __name__ == "__main__":
    get_emotion_and_tone()

Example Metadata Response

"data": {
    "emotion": [
        {
            "transcript": "Good evening teresa.",
            "startTime": 1390,
            "endTime": 2690,
            "speaker": "1",
            "tone": {
                "value": "calm",
                "confidence": 0.9030694961547852
            },
            "emotion": {
                "confidence": 0.9549336433410645,
                "value": "JOY"
            },
            "wordsPerMinute": 92.3076923076923
        }
    ]
}
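Each entry in the emotion array pairs a transcript sentence with one tone prediction and one emotion prediction. A minimal sketch of walking the response, assuming the metadata above has been loaded into a Python dict named data (hard-coded here for illustration):

```python
# Metadata response loaded into a dict (hard-coded sample from the example above).
data = {
    "emotion": [
        {
            "transcript": "Good evening teresa.",
            "startTime": 1390,
            "endTime": 2690,
            "speaker": "1",
            "tone": {"value": "calm", "confidence": 0.9030694961547852},
            "emotion": {"value": "JOY", "confidence": 0.9549336433410645},
            "wordsPerMinute": 92.3076923076923,
        }
    ]
}

# Walk the per-sentence emotion objects and report tone/emotion with confidence.
for sentence in data["emotion"]:
    duration_s = (sentence["endTime"] - sentence["startTime"]) / 1000  # ms -> seconds
    print(
        f'Speaker {sentence["speaker"]} ({duration_s:.1f}s): '
        f'tone={sentence["tone"]["value"]} ({sentence["tone"]["confidence"]:.2f}), '
        f'emotion={sentence["emotion"]["value"]} ({sentence["emotion"]["confidence"]:.2f})'
    )
```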

Response Object

| Field | Description |
| --- | --- |
| emotion | A list of emotion objects. |
| transcript | The sentence for which emotion is being analyzed. |
| startTime | Start time of the sentence in the input video/audio, in milliseconds. |
| endTime | End time of the sentence in the input video/audio, in milliseconds. |
| speaker | ID of the speaker whose voice is identified in the given time frame. |
| tone | Object that describes the tone of the speaker. |
| tone[value] | Tone of the speaker in the given time frame. |
| tone[confidence] | Value indicating the model's confidence in the predicted tone value. |
| emotion (object) | Object that describes the emotion of the speaker. |
| emotion[confidence] | Value indicating the model's confidence in the predicted emotion value. |
| emotion[value] | Emotion of the speaker in the given time frame. |
| wordsPerMinute | Average words per minute spoken by the speaker. |
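Since every tone and emotion prediction carries a confidence score, a common downstream step is to discard low-confidence labels. A sketch of such a filter; the 0.8 threshold is an arbitrary illustrative choice, not an API default:

```python
# Keep only tone/emotion labels whose confidence clears a threshold.
# The 0.8 cutoff is an assumption for illustration, not an API-defined value.
def confident_labels(sentences, threshold=0.8):
    results = []
    for s in sentences:
        labels = {"speaker": s["speaker"], "transcript": s["transcript"]}
        if s["tone"]["confidence"] >= threshold:
            labels["tone"] = s["tone"]["value"]
        if s["emotion"]["confidence"] >= threshold:
            labels["emotion"] = s["emotion"]["value"]
        results.append(labels)
    return results

# Sample shaped like one entry of the emotion list in the metadata response.
sample = [{
    "speaker": "1",
    "transcript": "Good evening teresa.",
    "tone": {"value": "calm", "confidence": 0.90},
    "emotion": {"value": "JOY", "confidence": 0.95},
}]
print(confident_labels(sample))
```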