PromptArmor is the end-to-end security platform for securing all interactions with LLMs: inputs, outputs, and actions.

Interactions can be inside your organization or customer facing. Our cutting-edge threat intel keeps our detection engine up to date so you don't have to worry about security when building powerful LLM functionality.

PromptArmor is primarily charged with mitigating the risks of:

  • Data exfiltration: when data in the context window that is sent to an LLM is exposed and/or sent to an attacker.
  • Phishing: when phishing attacks are sent to end users of LLM tools (whether internal or external users) in the same window where trusted interactions take place.
  • System manipulation: when LLM tools have the ability to take actions, and attackers can manipulate the LLM into kicking off a malicious workflow

Although PromptArmor does allow for more custom functionality, we are primarily a security platform and that is what makes us the best at what we do.

Getting Started

PromptArmor is made for easy integration. You can get set up with the simplest version in less than 15 minutes.

To get started, sign up for an API Key.

After that move on to Setting up a config.

Security and Data Privacy

As a security company, we of course take security and data privacy seriously.

We do not store any content you send us by default, except for detections. You have the option to make sure we don't store detections either, but that would mean you have to manage mapping between the content IDs we flag and the actual content of the detection.

Our cloud offering is SOC2 Compliant. We can also deploy within your VPC, if you so require.

Only aggregate and anonymized data (e.g. ID #s) are sent back to the dashboard, unless you opt in to sending more data to see more context in your dashboard.

Current Benchmarks

As of Feb 1, running 7 days.

Duration (Seconds)0.00080.00150.00810.14
Largest request processed at or below that duration (bytes)*3.3k12.1k52.2K668.7K

*Indicates the largest request processed for the corresponding duration benchmark. E.g. the largest request processed in 0.14 seconds or less was 668.7K