This endpoint takes in four parameters:
content
: the content that you are sending to the LLM
session_id
: the session ID for this set of calls to the LLM
source
(optional) : the source of the content that you are sending to the LLM
destination
(optional) : the destination of the output that will come from the LLM
We return a boolean, detection, indicating whether a threat was found, and an info dictionary mapping each threat check to its detection result and relevant metadata.
Request Example
import uuid

import requests

def send_input_to_promptarmor(content, api_key, session_id=None, source=None, destination=None):
    promptarmor_headers = {
        "PromptArmor-Auth": f"Bearer {api_key}",
        # The session ID is unique to each user session (e.g. a workflow or
        # conversation); reuse the same ID across related calls.
        "PromptArmor-Session-ID": session_id or str(uuid.uuid4()),
        "Content-Type": "application/json",
    }
    url = "https://api.aidr.promptarmor.com/v1/analyze/input"
    data = {
        "content": content,
        "source": source,
        "destination": destination,
    }
    response = requests.post(url, headers=promptarmor_headers, json=data, verify=True)
    print("Detection:", response.json()["detection"])
    return response.json()
Response
{
"detection": false,
"info": {
"Code": {
"detection": false,
"metadata": {}
},
"HTML": {
"detection": false,
"metadata": {}
},
"HiddenText": {
"detection": false,
"metadata": {}
},
"InvisibleUnicode": {
"detection": false,
"metadata": null
},
"Jailbreak": {
"detection": false
},
"MarkdownImage": {
"detection": false,
"metadata": null
},
"MarkdownURL": {
"detection": false,
"metadata": null
},
"Secrets": {
"detection": false,
"metadata": {}
},
"ThreatIntel": {
"detection": false
},
"Anomaly": {
"detection": false,
"metadata": {}
}
}
}
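A minimal sketch of acting on this response, assuming the caller wants to block the flow when any check fires; the blocking policy (raising an exception) is our assumption, not part of the API:

def flagged_checks(result):
    # Collect the names of the checks that fired, e.g. ["Jailbreak"].
    # .get() guards against entries that omit a "detection" key.
    return [name for name, check in result["info"].items() if check.get("detection")]

result = send_input_to_promptarmor(content, api_key, session_id=session_id)
if result["detection"]:
    raise RuntimeError(f"Blocked by PromptArmor: {flagged_checks(result)}")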