Questions & Responses
Enable this model configuration to detect the questions and their responses in a conversation.

Overview

Automatically identifies questions or requests posed during the conversation, along with the matching response, and returns them in a consumable form.
The API automatically detects each question and its response, attributed to the speaker.

modelTypeConfiguration

| Keys | Value |
| --- | --- |
| modelType | `question_response` |
| modelConfig | Model Configuration object for `question_response` (see modelConfig Values below) |

modelConfig Values

| Keys | Allowed values | Default |
| --- | --- | --- |
| quality | 0, 1, 2 (see Question Quality values for usage) | 1 |

Question Quality

| Quality Value | Description |
| --- | --- |
| 0 | Detect only well-defined questions. |
| 1 | Detect well-defined and reasonably well-defined questions. |
| 2 | Detect well-defined, reasonably well-defined, and small-talk questions. |
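As a rough sketch of how the quality values above map onto the request payload, the helper below builds the `enableModels` entry for this model. The function name `question_response_model` is hypothetical, not part of any Marsview SDK:

```python
# Hypothetical helper: build the question_response entry for the
# enableModels list of the compute payload.
# Quality values follow the table above: 0 = well-defined only,
# 1 = adds reasonably well-defined, 2 = adds small-talk questions.

def question_response_model(quality=1):
    """Return an enableModels entry for the question_response model."""
    if quality not in (0, 1, 2):
        raise ValueError("quality must be 0, 1, or 2")
    return {
        "modelType": "question_response",
        "modelConfig": {"quality": quality},
    }
```

The returned dictionary can be appended to the `enableModels` list shown in the example request below.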

Example Request

Curl

```bash
curl --location --request POST 'https://api.marsview.ai/cb/v1/conversation/compute' \
--header 'Content-Type: application/json' \
--header "Authorization: {{Insert Auth Token}}" \
--data-raw '{
    "txnId": "{{Insert txn ID}}",
    "enableModels": [
        {
            "modelType": "speech_to_text",
            "modelConfig": {
                "automatic_punctuation": true,
                "custom_vocabulary": ["Marsview", "Communication"],
                "speaker_seperation": {
                    "num_speakers": 2
                },
                "enableKeywords": true,
                "enableTopics": false
            }
        },
        {
            "modelType": "speech_type_analysis"
        },
        {
            "modelType": "question_response",
            "modelConfig": {
                "quality": 1
            }
        }
    ]
}'
```
Python

```python
import requests

auth_token = "Replace this with your auth token"
txn_id = "Replace this with your txn id"
request_url = "https://api.marsview.ai/cb/v1/conversation/compute"


def get_question_and_response():
    payload = {
        "txnId": txn_id,
        "enableModels": [
            {
                "modelType": "speech_to_text",
                "modelConfig": {
                    "automatic_punctuation": True,
                    "custom_vocabulary": ["Marsview", "Communication"],
                    "speaker_seperation": {
                        "num_speakers": 2
                    },
                    "enableKeywords": True,
                    "enableTopics": False
                }
            },
            {
                "modelType": "speech_type_analysis"
            },
            {
                "modelType": "question_response",
                "modelConfig": {
                    "quality": 1
                }
            }
        ]
    }
    headers = {"authorization": auth_token}

    response = requests.request("POST", request_url, headers=headers, json=payload)
    print(response.text)
    if response.status_code == 200 and response.json()["status"] == "true":
        return response.json()["data"]["enableModels"]["state"]["status"]
    else:
        raise Exception("Failed to fetch questions and responses: {}".format(response.text))


if __name__ == "__main__":
    get_question_and_response()
```

Example Response

```json
"data": {
    "questionResponse": [
        {
            "questionStartTime": 74840,
            "questionEndTime": 76010,
            "responseEndTime": "91389.999",
            "questionBlock": [
                {
                    "question": "Why should we choose you?",
                    "startTime": 74840,
                    "endTime": 76010
                }
            ],
            "response": "well, I'll probably myself on my work ethic. I am willing and capable of working long hours to complete the tasks. I have experience in this field and I am continuing my education so further my status.",
            "questionConfidence": "1.0",
            "responseConfidence": "0.7773251640550028",
            "source": "ai_generated",
            "speaker": "unknown"
        }
    ]
}
```

Response Object

| Field | Description |
| --- | --- |
| questionResponse | List of question-response objects that occurred together. |
| questionStartTime | Start time of the question in milliseconds. |
| questionEndTime | End time of the question in milliseconds. |
| responseEndTime | End time of the response in milliseconds. |
| questionBlock | List of questions identified in the given time frame. |
| question | Question identified in the given time frame. |
| startTime | Start time of the identified question in the given time frame. |
| endTime | End time of the identified question in the given time frame. |
| response | Response given to the question identified in the time frame. |
| questionConfidence | Model's confidence in the identified question block. |
| responseConfidence | Model's confidence in the identified response. |
| source | Source of the question-response pair (e.g. `ai_generated`). |
| speaker | ID of the speaker whose voice is identified in the given time frame. Returns the string "unknown" if the speaker could not be identified or speaker separation is disabled (set to -1). |
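As a minimal sketch of consuming the fields above, the helper below walks a `data` object shaped like the example response and pairs each detected question with its response and overall duration. The function `extract_qa_pairs` is hypothetical, not part of any Marsview SDK; note that `responseEndTime` arrives as a string in the example, while the start times are integers:

```python
# Hypothetical helper: flatten the questionResponse list into
# (question, response, duration_ms) tuples.

def extract_qa_pairs(data):
    """Return (question, response, duration_ms) tuples from a data object."""
    pairs = []
    for item in data.get("questionResponse", []):
        # responseEndTime is a string in the example payload; coerce to float.
        duration_ms = float(item["responseEndTime"]) - item["questionStartTime"]
        for block in item["questionBlock"]:
            pairs.append((block["question"], item["response"], duration_ms))
    return pairs
```

Running this on the example response yields one tuple covering the "Why should we choose you?" question and its roughly 16.5-second question-to-answer span.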