Summary
Enable this model configuration to get an extractive summary of the video/audio.

Overview

This model identifies the key sentences spoken in the video/audio and returns them as an extractive summary.

modelTypeConfiguration

| Key | Value |
| --- | --- |
| modelType | `extractive_summary` |
| modelConfig | Model Configuration object for `extractive_summary` (no configuration options) |
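Because `extractive_summary` takes no configuration options, its entry in `enableModels` can be as small as the fragment below; `modelConfig` may be omitted entirely:

```json
{
    "modelType": "extractive_summary"
}
```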

Example Request

Curl
```bash
curl --location --request POST 'https://api.marsview.ai/cb/v1/conversation/compute' \
--header 'Content-Type: application/json' \
--header "Authorization: {{Insert Auth Token}}" \
--data-raw '{
    "txnId": "{{Insert txn ID}}",
    "enableModels": [
        {
            "modelType": "speech_to_text",
            "modelConfig": {
                "automatic_punctuation": true,
                "custom_vocabulary": ["Marsview", "Communication"],
                "speaker_seperation": {
                    "num_speakers": 2
                },
                "enableKeywords": true,
                "enableTopics": false
            }
        },
        {
            "modelType": "extractive_summary"
        }
    ]
}'
```
Python

```python
import requests

auth_token = "Replace this with your auth token"
txn_id = "Replace this with your txn id"
request_url = "https://api.marsview.ai/cb/v1/conversation/compute"


def get_extractive_summary():
    payload = {
        "txnId": txn_id,
        "enableModels": [
            {
                "modelType": "speech_to_text",
                "modelConfig": {
                    "automatic_punctuation": True,
                    "custom_vocabulary": ["Marsview", "Communication"],
                    "speaker_seperation": {
                        "num_speakers": 2
                    },
                    "enableKeywords": True,
                    "enableTopics": False
                }
            },
            {
                "modelType": "extractive_summary"
            }
        ]
    }
    headers = {"Authorization": auth_token}

    response = requests.post(request_url, headers=headers, json=payload)
    print(response.text)
    if response.status_code == 200 and response.json()["status"] == "true":
        return response.json()["data"]["enableModels"]["state"]["status"]
    else:
        raise Exception("Request failed: {}".format(response.text))


if __name__ == "__main__":
    get_extractive_summary()
```

Response

```json
{
    "data": {
        "summaryData": [
            {
                "sentence": "I will start by asking a few questions and then give you an opportunity to ask any questions you may have at the end.",
                "startTime": 6600,
                "endTime": 12950,
                "speaker": "1"
            }
        ]
    }
}
```

Response Object

| Field | Description |
| --- | --- |
| summaryData | List of key sentences identified in the video/audio. |
| sentence | A sentence identified by the model in the given time frame. |
| startTime | Start time of the sentence in the input video/audio, in milliseconds. |
| endTime | End time of the sentence in the input video/audio, in milliseconds. |
| speaker | ID of the speaker identified in the given time frame. Returns the string "unknown" if the speaker could not be identified or if speaker separation is disabled (`num_speakers` set to -1). |
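As a quick post-processing sketch, the `summaryData` list can be rendered as a readable, timestamped summary. The snippet below is illustrative only (not part of the Marsview SDK); the sample entry and field names are taken from the example response above, and the helper names are our own.

```python
def format_ms(ms):
    """Convert a millisecond offset to an mm:ss.s timestamp string."""
    seconds = ms / 1000.0
    minutes = int(seconds // 60)
    return "{:02d}:{:04.1f}".format(minutes, seconds % 60)


def render_summary(summary_data):
    """Return one 'Speaker N [start-end]: sentence' line per key sentence."""
    lines = []
    for item in summary_data:
        lines.append("Speaker {} [{}-{}]: {}".format(
            item["speaker"],
            format_ms(item["startTime"]),
            format_ms(item["endTime"]),
            item["sentence"],
        ))
    return "\n".join(lines)


# Sample entry copied from the example response above.
sample = [{
    "sentence": "I will start by asking a few questions and then give you "
                "an opportunity to ask any questions you may have at the end.",
    "startTime": 6600,
    "endTime": 12950,
    "speaker": "1",
}]

print(render_summary(sample))
```

Since `startTime`/`endTime` are in milliseconds, dividing by 1000 before formatting keeps the timestamps aligned with the media player's clock.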