Checks a URL for prompt injection vulnerabilities before content from that URL is sent to an LLM. Average latency: 600 ms.

Example: Check if a website contains an injection

import requests

def url_contains_injection(url: str) -> bool:

  endpoint = "https://api.promptarmor.com/v1/check_url"

  # You should store your API key in an environment variable
  headers = {"accept": "application/json; charset=utf-8", "Api-Key": "Your-Api-Key"}

  # Call the PromptArmor endpoint with the URL to check
  response = requests.get(endpoint, headers=headers, params={"url": url})

  return response.json()["containsInjection"]
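As the comment above suggests, the API key should live in an environment variable rather than in source code. A minimal sketch of reading it at startup; the variable name PROMPTARMOR_API_KEY is an assumption, not prescribed by PromptArmor:

```python
import os

# PROMPTARMOR_API_KEY is an assumed variable name; use whatever your deployment defines
api_key = os.environ.get("PROMPTARMOR_API_KEY", "")

headers = {"accept": "application/json; charset=utf-8", "Api-Key": api_key}
```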
  

Implementation Example: For an agent browsing a website

def browse_website(url: str):

  if url_contains_injection(url):
    print("Website contains an injection!")
    # Handle the injection case (e.g. skip processing this url)
    return

  # No injection, continue with the agent action (e.g. sending GPT-4 the content of the website)
  do_agent_action_on_website(url)
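The check itself can fail (timeout, network error, malformed response). One conservative pattern is to fail closed and treat the URL as unsafe when the check cannot complete. A sketch assuming the same endpoint; the wrapper name and timeout value are illustrative, not part of the PromptArmor API:

```python
import requests

def url_contains_injection_safe(url: str, timeout: float = 5.0) -> bool:
  """Return True (treat as injected) when the check cannot complete."""
  endpoint = "https://api.promptarmor.com/v1/check_url"
  headers = {"accept": "application/json; charset=utf-8", "Api-Key": "Your-Api-Key"}
  try:
    response = requests.get(endpoint, headers=headers, params={"url": url}, timeout=timeout)
    response.raise_for_status()
    return response.json()["containsInjection"]
  except (requests.RequestException, KeyError, ValueError):
    # Fail closed: if we cannot verify the URL, do not send its content to the LLM
    return True
```

Whether to fail open or closed depends on your agent: failing closed blocks the occasional safe page during an outage, while failing open risks forwarding an injected page.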
