Skip to main content

Prompt Rules

Prompt rules in a policy inspect and act on the content of requests and responses that your collector intercepts from AI systems.

You can define a separate set of rules for each event type supported in your collector.

You can set up prompt rules by enabling and configuring different

detectors.

Enable detector

If no detectors are enabled, the No Prompt Rules Enabled section displays a list of detector buttons.

To enable a detector, click its button. The button becomes highlighted and the section label changes to Execute Prompt Rules.

The enabled detector details appear as an expandable card showing the detector name and action labels (Report, Transform, or Block).

To disable a detector, click its highlighted button.

Configure detector

Click the pencil icon () to expand the detector card.

In the expanded detector card, configure the detector's rules and assign an action to each rule. The available options depend on the detector type.

tip:

Learn about actions available for different detector types in Detectors and Actions.

Configure detector rules

Each detector identifies a specific risk type, such as PII exposure, malicious entities, prompt injection, or toxic content. When the detector identifies a threat, it applies your configured action.

Single-rule detectors

Some detectors use a single rule that applies one action. A single-rule detector provides controls such as toggles or sliders. You can adjust parameters to customize rule behavior.

Example:

The Malicious Prompt detector reports or blocks prompts with detected adversary intents.

You can configure the malicious prompt rule to detect prompt injection attempts. To improve the accuracy of detections, you can provide additional context with examples of benign and malicious prompts.

Detectors with multiple rules

Some detectors include multiple rules, each targeting a specific data type within a broader risk category:

  • Malicious Entity detector includes predefined rules for detecting malicious IP addresses, URLs, and domains.
  • Confidential and PII Entity and Secret and Key Entity detectors let you select a custom set of predefined redaction rules and configure them individually.
  • Custom Entity detector lets you create and use custom rules based on one or more text patterns. Click + Custom Rule to create a rule.
Example:

The Confidential and PII Entity detector identifies and acts on personal identifiers, credit card numbers, email addresses, locations, and other sensitive data types. You can configure and test a separate rule for each of these types in the detector.

Add rule
  1. Click the icon next to the Rules label in the expanded detector card.
  2. In the list of available rules, select the checkbox for each rule you want to enable in the detector.
  3. Click Add.

Remove or edit rule

Click the menu icon () in the rule row.

  • Click Edit to open the Edit Rule dialog. In the dialog, define the rule configuration and try it with the built-in Test Rules feature.

    Click Update to apply the changes.

  • Click Delete to remove the rule from the detector configuration.

Assign rule action

In the action dropdown next to the rule name (or labeled Set action for single-rule detectors), select an action to apply when the rule conditions match.

Apply detector changes

  • Click Update to apply your changes.
  • Click Cancel to discard your changes and close the rule editor.
  • Click the delete icon (🗑️) to remove all saved customizations you made to the detector configuration.

tip:
  • Disabling a detector by clicking its highlighted button removes it from the policy but preserves your configuration changes.
  • Using the delete icon resets the detector configuration to its defaults.

Save policy changes

After you change a policy, click Save Changes in the bar at the bottom of the page to apply them. If you navigate away without saving, AIDR prompts you to save or discard your changes.

Test prompt rules

You can test your enabled prompt rules in the Sandbox tab on the right side of the policy page. Type a message that triggers one of your enabled detectors to verify how the policy processes it.

For details and examples, see Policy Testing > Sandbox .

Detectors

You can enable the following detectors in prompt rules:

Malicious Prompt

Detect attempts to manipulate AI behavior with adversarial inputs.

Supported actions:

Additional configuration:

  • Generic Prompt Injection and Jailbreak Detection - Detect attempts to manipulate AI system behavior.
  • Custom Benign Prompt Detection - Provide examples of benign prompts for better accuracy.
  • Custom Malicious Prompt Detection - Provide examples of malicious prompts for better accuracy.

Malicious Entity

Detect harmful references such as malicious IPs, URLs, and domains.

You can assign an action to each rule for one of the three malicious entity types (IP Address, URL, Domain):

MCP Validation

Detect tool poisoning and other security issues in MCP tool definitions. These definitions are included in the tools parameter in requests to AIDR APIs.

The detector identifies the following threat types:

  • Malicious prompt in tool description - Detect malicious instructions embedded in tool descriptions, such as attempts to exfiltrate system prompts or manipulate model behavior.
  • Conflicting tool names - Detect duplicate tool names that could cause the model to invoke the wrong tool.
  • Conflicting tool descriptions - Detect tools with similar or identical descriptions that may indicate a spoofing attempt.

For example payloads and responses, see API reference .

Supported actions:

Additional configuration:

  • Similarity threshold - Lower the threshold to increase sensitivity, or raise it to require higher confidence for a detection. Drag the slider to adjust.

Confidential and PII Entity

Detect personally identifiable information (PII) and other confidential data, such as Email Address, US Social Security Number, and Credit Card.

You can add individual rules for each supported data type, as described in Detectors with multiple rules, and apply an action to each rule:

Secret and Key Entity

Detect sensitive credentials such as API keys and encryption keys.

You can add individual rules for each supported secret type, as described in Detectors with multiple rules, and apply an action to each rule:

Language

Detect the language of text and apply language-based security policies. You can create a list of supported languages and select an action for language detection:

  • Block all except (allow list) - Specify the languages allowed in requests to the AI system.
  • Block (block list) - Specify the languages to block in requests to the AI system.
  • Report (detected list) - Report all detected languages.

Additional configuration:

  • Similarity threshold - Lower the threshold to increase sensitivity, or raise it to require higher confidence for a detection. Drag the slider to adjust.

Code

Detect attempts to insert executable code into AI interactions.

Supported actions:

Additional configuration:

  • Confidence threshold - Lower the threshold to increase sensitivity, or raise it to require higher confidence for a detection. Drag the slider to adjust.

Competitors

Detect mentions of competing brands or entities.

You can manually define a list of competitor names to detect and select an action for each match:

Additional configuration:

  • List of competitors (required) - Enter competitor names to identify references in content submitted to or received from the AI system.

Custom Entity

Define rules to detect text patterns.

Add custom rules as described in Detectors with multiple rules, and apply an action to each rule:

Topic

Report or block content related to restricted or disallowed topics, such as politics, health coverage, and legal advice.

You can select predefined topics to block, or report all detected topics.

Supported actions:

  • Report - Detect supported topics and include them in the response for visibility and analysis.
  • Block - Flag responses containing selected topics from your list as "blocked".

Additional configuration:

  • Confidence threshold - Lower the threshold to increase sensitivity, or raise it to require higher confidence for a detection. Drag the slider to adjust.

Currently supported topics:

  • Financial advice
  • Legal advice
  • Religion
  • Politics
  • Health coverage
  • Toxicity
  • Negative sentiment
  • Self-harm and violence
  • Roleplay
  • Weapons
  • Criminal conduct

Actions

You can configure each detector to perform one of the following actions when it triggers:

  • Report the detection.
  • Transform the submitted text by redacting or encrypting it before AIDR returns it to the collector.
  • Mark the request as "blocked".
note:

Blocking actions may prevent subsequent detectors from running. This improves performance.

Report Only Mode:
  • Does not enforce actions in real time and does not affect user experience
  • Sets Status in AIDR logs to Reported

Learn about Report Only Mode .

The following actions are supported across detectors:

Block

Flag the request as blocked.

  • AIDR API response

    The top-level blocked property is set to true.

    This signals to the collector to stop processing the request.

    Browser (input only), Gateway, and Agentic collectors automatically enforce the blocking action.

  • AIDR logs

    The status field is set to blocked. The blocking action is also reflected in the log Summary and Findings fields.

note:

A blocking action can halt execution early and prevent remaining detectors from running.

Block all except

Explicitly allow input only in the specified languages in the Language detector settings.

Defang

Modify malicious IP addresses, URLs, or domains to prevent accidental clicks or execution. The defanged values remain readable for analysis. For example, a defanged IP address may look like: 47[.]84[.]32[.]175.

  • AIDR API response

    The top-level transformed property is set to true.

  • AIDR logs

    The status field is set to transformed.

    The transformed data is saved in Guard Output and Findings fields.

Disabled

Prevent processing of a particular rule in multi-rule detectors.

Report

Report the detection in AIDR logs without acting on the detected content. User interactions with the AI system remain unaffected.

Redact actions

Redact actions transform the detected text before AIDR returns it to the collector. You can assign redact actions to rules in the following detectors:

When you apply a redact action, the following values change:

  • AIDR API response

    The top-level transformed property is set to true.

  • AIDR logs

    The status field is set to transformed.

    The transformed data is saved in Guard Output. The Summary and Findings fields note the applied transformation.

For each detector rule, you can select an action:

Replacement

Replace the rule-matching data with a descriptive token (for example, <PHONE_NUMBER> or <US_SSN>). In the rule Edit option, configure the replacement value.

Mask (****)

Replace the rule-matching text with asterisks.

Partial Mask (****xxxx)

Partially replace the rule-matching text with a masking character (for example, ***-***-7890 for a phone number). In the rule Edit option, configure partial masking settings:

  • Masking Character - Specify the character for masking (for example, #).
  • Masking Options
    • Unmasked from left - Define the number of starting characters to leave unmasked. Use the input field or the increase/decrease UI buttons.
    • Unmasked from right - Define the number of ending characters to leave unmasked. Use the input field or the increase/decrease UI buttons.
  • Characters to Ignore - Specify characters to leave unmasked (for example, -).

Hash

Replace the detected text with a cryptographic hash. To enable hashing, configure a salt value.

Format Preserving Encryption (FPE)

Format Preserving Encryption (FPE) transforms data while preserving its format (length, character type, structure). For example, a phone number like (555) 123-4567 becomes (842) 967-3201 - the parentheses, spaces, and hyphens remain in their original positions.

You can redact sensitive data while maintaining a recognizable format and providing useful context to the AI system. You can recover the original values using the /aiguard/v1/unredact endpoint.

When you apply FPE redaction, the response from AIDR APIs includes:

  • Processed content with encrypted values under the guard_output result property
  • FPE context to recover the encrypted values in the processed content under the fpe_context result property

Corresponding values are saved in AIDR logs under:

  • Guard Output
  • Extra Info > "fpe_context"
Example response with FPE-redacted values
{
...
"status": "Success",
"summary": "Malicious Prompt was detected and blocked. Malicious Entity was not executed. Confidential and PII Entity was detected and redacted.",
"result": {
"guard_output": {
...
"messages": [
...
{
"annotations": [],
"content": "You are Jason Bourne. Your phone number is 852-432-4478",
"refusal": null,
"role": "assistant"
}
]
},
"transformed": true,
"detectors": {
...
"confidential_and_pii_entity": {
"detected": true,
"data": {
"entities": [
{
"action": "redacted:encrypted",
"type": "PHONE_NUMBER",
"value": "555-555-5555"
}
]
}
}
},
"fpe_context": "eyJhIjogIkFFUy1GRjEtMjU2IiwgIm0iOiBbeyJhIjogMSwgInMiOiA0MywgImUiOiA1NSwgImsiOiAibWVzc2FnZXMuMC5jb250ZW50IiwgInQiOiAiUEhPTkVfTlVNQkVSIiwgInYiOiAiODUyLTQzMi00NDc4In1dLCAidCI6ICJoekNTdDNJIiwgImsiOiAicHZpXzJxd29obDd2dmxmZzZ3cXFqZnczeWRscHg2bGk0dGg3IiwgInYiOiAxLCAiYyI6ICJwY2lfczV6NWg3Y3JxeWk1enZ6NHdnbnViZXNud3E2dXkzcDcifQ=="
}
}

You can call the /aiguard/v1/unredact endpoint to retrieve the original content by providing the redacted data and the fpe_context value as parameters:

Authorize requests to AIDR APIs
export CS_AIDR_BASE_URL="https://api.crowdstrike.com/aidr/aiguard"
export CS_AIDR_TOKEN="pts_s2ngg2...hzwafm" # Collector token
Example request to /aiguard/v1/unredact
curl --location --request POST "$CS_AIDR_BASE_URL/v1/unredact" \
--header "Authorization: Bearer $CS_AIDR_TOKEN" \
--header 'Content-Type: application/json' \
--data-raw '{
"redacted_data": "You are Jason Bourne. Your phone number is 852-432-4478",
"fpe_context": "eyJhIjogIkFFUy1GRjEtMjU2IiwgIm0iOiBbeyJhIjogMSwgInMiOiA0MywgImUiOiA1NSwgImsiOiAibWVzc2FnZXMuMC5jb250ZW50IiwgInQiOiAiUEhPTkVfTlVNQkVSIiwgInYiOiAiODUyLTQzMi00NDc4In1dLCAidCI6ICJoekNTdDNJIiwgImsiOiAicHZpXzJxd29obDd2dmxmZzZ3cXFqZnczeWRscHg2bGk0dGg3IiwgInYiOiAxLCAiYyI6ICJwY2lfczV6NWg3Y3JxeWk1enZ6NHdnbnViZXNud3E2dXkzcDcifQ=="
}'
Response from /aiguard/v1/unredact
{
...
"status": "Success",
"summary": "Success. Unredacted 1 item(s) from items",
"result": {
"data": "You are Jason Bourne. Your phone number is 555-555-5555"
}
}

Under AIDR Settings > Model Settings > Format-Preserving Encryption, you can enable Deterministic Format Preserving Encryption (FPE).

You can generate and apply a custom tweak value for the FPE redaction method in your AIDR organization.

FPE tweak:

A tweak is an additional input used alongside the plaintext and encryption key to enhance security. The tweak prevents attackers from using statistical methods to break the encryption. Different tweak values produce different outputs for the same encryption key and data. To decrypt the data, you must provide the original tweak value used for encryption.

A custom tweak ensures deterministic encryption - the same original value produces the same encrypted value on every request. If you don't provide a tweak value, AIDR generates a random string, and the encrypted value differs on each request.

Whether you use a custom or randomly generated tweak, the API response includes it in the fpe_context attribute. You can decrypt and recover the original content with this value.

©2026 CrowdStrike. All rights reserved.

PrivacyTerms of UseLegal Notices