Companies shipping AI security posture tools for enterprises in 2026$10 spent
$10.00 spent
company
product
primary_category
deployment_model
enterprise_features
ai_specific
source_url
verification_notes
Fiddler AI
Fiddler for AI Governance, Risk Management, and Compliance (GRC)
AI Governance
SaaS
["centralized governance and oversight","audit trails and audit evidence","policy enforcement and approval workflows","real-time alerts","human-in-the-loop approvals","fairness and bias tracking"]
Fiddler positions this named GRC module for enterprise AI with centralized governance and oversight for agents and predictive models. The page cites audit trails, policy enforcement, approval workflows, fairness/bias tracking, and compliance support for EU AI Act, SR 11-7, HIPAA, and NAIC.
Monitaur
Monitaur Platform
AI Governance
SaaS
["complete inventory","controls library","collaborative workflows","vendor governance","pre-deployment simulation","continuous production validation","decision logging","evidence automation"]
true
https://www.monitaur.ai/platform
Monitaur markets a dedicated AI governance platform spanning GenAI, Agentic AI, and traditional models. The product page documents centralized inventory, controls, vendor governance, pre-deployment simulation via FlightSim, and production monitoring/decision logging mapped to NIST AI RMF, ISO 42001, and the EU AI Act.
Zenity
Zenity AI Security Posture Management
AI-SPM
SaaS
["automated agent discovery","configuration and permission risk evaluation","secure-by-design guardrails","shadow agent discovery","continuous compliance monitoring","policy enforcement mapped to OWASP/MITRE/NIST","memory and tool access visibility"]
Zenity has a named AISPM offering focused on governing AI agent configuration before runtime. Its product page shows discovery/inventory across SaaS, custom frameworks, and endpoints plus permission risk evaluation, shadow-agent discovery, and policy enforcement aligned to OWASP LLM Top 10, MITRE ATLAS, and NIST AI RMF.
ComplianceAgent is a named Cranium governance module for enterprise AI. The platform page describes intelligent automation for compliance framework completion, document-aware responses, transparent oversight controls, and alignment to EU AI Act, NIST AI RMF, and ISO-backed compliance scoring.
Cranium Detect AI is a named AI-governance/posture module that scans repositories, labels AI code, and generates AIBOMs to inventory datasets, models, and libraries. The product page explicitly ties it to AI inventory, compliance tracking, security posture management, vulnerability reporting, and attack-surface exposure.
CloudSensor is a named Cranium module for AI-governance monitoring. The platform page says it integrates with cloud environments to discover security alerts, monitor unauthorized changes, and assess RBAC as part of visibility and governance for enterprise AI systems.
Protecto
Secure RAG
AI Governance
Hybrid
["role-based access control","mask before ingestion","prompt and response filtering","secure logged unmask workflow","audit logs","data residency and air-gapped deployments"]
true
https://www.protecto.ai/product/secure-rag/
Protecto’s Secure RAG is a named enterprise product for protecting sensitive data in RAG pipelines. Its product page documents RBAC, pre-ingestion masking, prompt/response filtering, logged unmask workflows, audit logs, and on-prem or SaaS deployment for HIPAA/GDPR-style compliance-sensitive use cases.
Protecto
Protecto Privacy Vault
AI Access Control
Hybrid
["sensitive data scanning","intelligent tokenization","controlled de-tokenization","context-based access control","immutable audit trail","policy-based masking","OAuth/SAML/Okta/Azure AD integration"]
true
https://www.protecto.ai/product/privacy-vault/
Protecto Privacy Vault is an AI-specific privacy/access-control product for LLM and agent pipelines. The page documents controlled de-tokenization for authorized users, CBAC for AI agents, policy-based masking, immutable audit trails, and enterprise identity integrations, with on-prem and air-gapped deployment options.
Lakera Guard is a named runtime security product for GenAI applications. Lakera’s product page and docs describe a real-time protective firewall that blocks prompt injections, jailbreaks, data leakage, malicious links, and policy violations, with SaaS and self-hosted deployment options.
Prompt Security includes a named module for securing internally built LLM applications. Its official platform page says Prompt for Homegrown AI Apps eliminates prompt injection, data leaks, and harmful responses, and supports SaaS or on-prem deployment as part of its enterprise AI security platform.
Prompt Security markets a named Agentic AI security module focused on MCP and autonomous-agent risk. The official platform page describes real-time, machine-level visibility, risk assessment, and enforcement for agentic AI, positioning it as a purpose-built enterprise control layer rather than generic app security.
Protect AI
Layer
Runtime Security
Hybrid
["27 turnkey policies","tool and function-call tracking","auto-discovery of AI apps","end-to-end visibility","security framework alignment","SIEM/SOC integrations"]
true
https://protectai.com/layer
Protect AI’s Layer is a named runtime security product for AI applications, including RAG systems and agents. The official page says it stops threats instantly at runtime, tracks full conversation context beyond prompts/outputs, and offers 27 turnkey policies mapped to NIST, MITRE, and OWASP.
Protect AI
Guardian
AI-SPM
Hybrid
["35+ model format scanning","deserialization and backdoor detection","granular security policies","local CI/CD scanning","Hugging Face integration","audit trail of evaluations"]
true
https://protectai.com/guardian
Guardian is Protect AI’s named model-security product for screening first- and third-party AI models before use. The official page documents 35+ model-format scanners, policy controls, CI/CD and local scanning, and Hugging Face integration to detect deserialization attacks, backdoors, and other model threats.
Cisco
Cisco AI Defense: AI Runtime Protection
Runtime Security
SaaS
["automatic guardrail configuration","input attack blocking","output scanning","model-specific policies","network-level visibility and enforcement","RAG and agent protection"]
Cisco exposes AI Runtime Protection as a named AI Defense component for production AI apps. The official page says it automatically configures guardrails per model, blocks prompt injection, prompt extraction, DoS, and command execution, and scans outputs for sensitive data, hallucinations, and harmful content.
Cisco
Cisco AI Defense: AI Model and Application Validation
AI Red Teaming
SaaS
["automated algorithmic assessment","open-source model and file scanning","200-category prompt testing","model-specific guardrail generation","automated reporting","lifecycle automation"]
Cisco’s AI Model and Application Validation is a named AI Defense component for automated AI red teaming and model risk assessment. The official page says it scans open-source models/files for supply-chain threats, tests models with algorithmically generated prompts across 200 categories, and generates model-specific guardrails and compliance reports.
Palo Alto Networks
Prisma AIRS
Runtime Security
SaaS
["AI agent and app discovery","continuous risk assessment","runtime governance","multi-agent red teaming","threat reporting","Microsoft Foundry integration"]
Prisma AIRS is Palo Alto Networks’ named AI security platform for agents, apps, models, and data. The 2026 product page highlights end-to-end AI lifecycle security with discovery, continuous testing, AI-specific controls, runtime governance, and specific 2026 updates for multi-agent red teaming and agentic target profiling.
Orca Security
AI Security Posture Management (AI-SPM)
AI-SPM
SaaS
["AI and ML inventory and BOM","AI misconfiguration detection","sensitive data detection","exposed AI key detection","shadow AI discovery","guided remediation"]
Orca exposes a named AI-SPM offering within its platform. The official page documents AI/ML inventory and BOMs, shadow-AI discovery, AI misconfiguration detection, sensitive-data scanning in models/training data, exposed AI-key detection, and guided remediation across major cloud AI services and packages.
Wiz
Wiz AI-SPM
AI-SPM
SaaS
["AI discovery and inventory","AI-BOM","AI tool identification","AI security rules","exposed AI endpoint detection","AI attack path analysis","AI runtime protection"]
true
https://www.wiz.io/solutions/ai-spm
Wiz markets AI-SPM as a named AI-security capability for securing AI pipelines from code to runtime. The official page documents AI discovery/inventory, AI-BOM, tool identification, AI security rules, exposed-endpoint detection, attack-path analysis, and AI runtime protection for prompt injection and rogue agents.
SPLX
AI Runtime Threat Inspection
Model Monitoring
SaaS
["LLM log upload and scanning","25+ AI threat detectors","drill-down into malicious prompts","agentic workflow transparency","near-real-time risk triage","red-team-intel-informed detection"]
true
https://splx.ai/platform/monitoring
SPLX’s AI Runtime Threat Inspection is a named monitoring product for near-real-time analysis of LLM interaction logs. The official page documents 25+ AI threat detectors, drill-down into malicious prompts, agentic workflow transparency, and red-team-intel-informed detectors for ongoing risk triage and policy-violation discovery.
SPLX
Probe
AI Red Teaming
SaaS
["25+ predefined probes","CI/CD integration","custom probes","custom dataset uploads","multi-modal testing","policy mapping to AI frameworks"]
true
https://splx.ai/platform/probe
Probe is SPLX’s named adversarial testing module for commercial AI systems. The official page documents continuous automated risk assessments, 25+ prebuilt probes, CI/CD integration, custom datasets, multi-modal testing, and policy mapping to MITRE ATLAS, NIST AI RMF, OWASP LLM Top 10, EU AI Act, and ISO 42001.
SPLX
AI Runtime Protection
Runtime Security
SaaS
["real-time guardrails","input and output filtering","custom AI policies","blocked-prompt telemetry","near-zero-latency enforcement","threshold tuning"]
true
https://splx.ai/platform/ai-runtime-protection
SPLX AI Runtime Protection is a named module for production AI defenses. The official page says it detects and blocks jailbreaks, prompt injections, data leaks, and off-topic behavior in real time, with custom AI policies, telemetry, blocked-prompt drill-down, and near-zero-latency enforcement.
SPLX
AI Governance & Compliance
AI Governance
SaaS
["automated compliance mapping","framework coverage across major AI standards","continuous monitoring","custom policy creation","JSON policy import","audit-readiness gap identification"]
true
https://splx.ai/platform/ai-governance-compliance
SPLX offers a named AI Governance & Compliance module for mapping AI risk to policy frameworks. The product page highlights continuous compliance mapping and monitoring, framework coverage including EU AI Act, NIST AI RMF, OWASP LLM Top 10, ISO/IEC 42001, MITRE ATLAS, HIPAA and NIS2, plus custom/imported policy creation.
HiddenLayer
AI Red Teaming
AI Red Teaming
SaaS
["testing across LLMs, agents, and predictive models","prompt injection and jailbreak testing","vulnerability metrics and tracking","regular and ad hoc scans","continuous coverage","risk readiness visibility"]
true
https://www.hiddenlayer.com/solutions/red-teaming
HiddenLayer markets AI Red Teaming as a named solution for continuously stress-testing enterprise AI systems. The official page documents scalable testing across LLMs, agents, and predictive models, prompt-injection and jailbreak testing, vulnerability metrics, scheduled/ad hoc scans, and continuous coverage without extra staff.
HiddenLayer
AI Runtime Security
Runtime Security
SaaS
["behavioral analytics","MITRE-aligned threat detection","automated response","AI guardrails and firewall","agentic runtime visibility","agentic detection and enforcement"]
HiddenLayer’s AI Runtime Security is a named runtime-defense offering for AI agents and applications. The official page highlights behavioral analytics, MITRE-aligned threat detection, automated response, AI guardrails/firewalling, and agentic runtime visibility plus detection/enforcement for prompt injection, unsafe tool use, and data exposure.
HiddenLayer
AI Guardrails
Prompt Injection & DLP
SaaS
["output safety and content controls","policy-driven response shaping","sensitive data protection","MCP and framework traffic inspection","compliant response enforcement","enterprise-wide policy standardization"]
HiddenLayer markets AI Guardrails as a named enterprise AI-safety/security solution. The official page documents policy-driven response shaping, prompt-injection and data-leakage protection, PII/PHI/proprietary-data protection, MCP/framework traffic inspection, and enforcement of legal, regulatory, and brand requirements across AI outputs.
F5
F5 AI Red Team
AI Red Teaming
SaaS
["swarm-of-agents adversarial testing","continuous testing with clear security scores","expansive attack prompt database","agentic fingerprints and audit trails","explainable audit-ready reports","integration with F5 AI Guardrails"]
true
https://www.f5.com/products/ai-red-team
F5 AI Red Team is a named AI-security product, not generic app security. The official page says it continuously red-teams AI models, applications, and agents with a swarm of agents, prompt/jailbreak attack simulation, security scores, detailed logs, and explainable audit-ready reports.
Lakera
Lakera Red
AI Red Teaming
SaaS
["continuous workflow for evaluation and scanning","adversarial and misuse scenario simulation","application-specific risk surfacing","security weakness testing for prompt injection and jailbreaks","regression and drift detection","coverage across models, applications, and agents"]
true
https://www.lakera.ai/lakera-red
Lakera Red is a named commercial AI red-teaming product. Lakera’s 2026 page says it continuously evaluates, scans, and red-teams AI apps and agents for safety, security, responsible-AI, prompt-injection, jailbreak, data-leakage, and drift risks.
Giskard
Continuous Red Teaming
AI Red Teaming
SaaS
["dynamic multi-turn attacks","context-aware attacks using internal business context","OWASP and open-source threat coverage","pre/post-deployment continuous testing","collaborative red-teaming playground","on-prem option for sensitive workloads"]
Giskard sells a named Continuous Red Teaming product for AI agents. The official page describes dynamic multi-turn and context-aware attacks, threat coverage using OWASP/open-source datasets, API-based black-box testing, and both pre- and post-deployment continuous testing.
NeuralTrust
Red Teaming
AI Red Teaming
SaaS
["continuously updated adversarial attack catalog","OWASP and MITRE ATLAS coverage","knowledge-base aware custom tests","automatic reruns when model or KB changes","custom evaluators for format tone style completeness","multi-language testing"]
true
https://neuraltrust.ai/red-teaming
NeuralTrust sells a named Red Teaming product for GenAI applications. The official page lists continuously updated adversarial testing across prompt injection, indirect injection, agentic behavior limits, system-prompt disclosure, unsafe outputs, plus custom knowledge-base-aware tests and OWASP/MITRE ATLAS coverage.
Straiker
Ascend AI
AI Red Teaming
API
["AI-powered attack agents","CI/CD hooks","OWASP MITRE NIST and EU AI Act mapping","continuous scheduled or on-demand testing","agent stack testing across models identities and data","remediation playbooks"]
true
https://www.straiker.ai/products/ascend-ai
Straiker’s Ascend AI is a named adversarial-testing/red-teaming product for agentic AI apps. The 2026 product page describes AI-powered attack agents, CI/CD-integrated assessments, testing across web apps, workflows, models, identities, and data, and compliance mapping to OWASP, MITRE ATLAS, NIST, and the EU AI Act.
F5
F5 AI Guardrails
LLM Firewall
SaaS
["runtime security for deployed AI models and agents","prompt injection and jailbreak protection","data leakage prevention","audit-ready observability and logging","policy enforcement for model and agent privileges","GDPR HIPAA and EU AI Act templates"]
true
https://www.f5.com/products/ai-guardrails
F5 AI Guardrails is a named AI-security product for runtime governance of models, apps, and agents. The official page describes prompt-injection/jailbreak defense, data-leak and policy-violation prevention, content moderation, and audit-ready compliance tooling tied to GDPR, HIPAA, and EU AI Act presets.
NeuralTrust
Generative Application Firewall (GAF)
LLM Firewall
Hybrid
["prompt injection prevention","sensitive data masking","behavioral threat detection","bot traffic detection and mitigation","custom governance policies","rate limiting and DoS mitigation"]
NeuralTrust markets GAF as a named firewall/security layer for LLM apps and AI agents. The official page documents prompt-injection prevention, DLP-style sensitive-data masking, moderation and policy enforcement, behavioral threat detection, bot mitigation, and cloud/on-prem/hybrid deployment.
Straiker
Defend AI
AI Agent Security
API
["prompt injection blocking","data exfiltration detection","MCP and tool security","agent observability and forensics","multimodal threat detection","audit logs and alerts"]
true
https://www.straiker.ai/products/defend-ai
Straiker’s Defend AI is a named runtime security product for AI agents. The official page says it monitors prompts, reasoning steps, and tool calls to block prompt injection, data exfiltration, malicious MCP connections, unsafe tool use, and agent manipulation with low-latency enforcement and forensics.
Wallarm
Protect Agentic AI
AI Agent Security
Hybrid
["prompt and code injection blocking","data leakage prevention","outbound API protection","real-time blocking","custom protection policies","monitoring dashboards"]
Wallarm’s Protect Agentic AI is a named AI-security offering for agents, AI proxies, and APIs with AI features. The official page documents prompt/code-injection and data-leak prevention, protection of outbound agent APIs, real-time blocking, cost-abuse controls, and hybrid deployment.
NeuralTrust
Guardian Agent
AI Agent Security
SaaS
["real-time oversight","policy and compliance enforcement","prompt injection and jailbreak defense","inter-agent safety with identity verification","auditable reports","tool-call monitoring and blocking"]
true
https://neuraltrust.ai/guardian-agent
NeuralTrust’s Guardian Agent is a named product for securing multi-agent systems and tool-calling workflows. The official page highlights real-time oversight, prompt-injection/jailbreak/data-leak defense, tool-call blocking, inter-agent trust controls, and auditable compliance reporting.
NeuralTrust’s MCP Gateway is a named access-control product for AI agents. The official page says it governs agent interactions with tools and data using granular permissions, RBAC-style controls, trust boundaries, and full auditability for each tool access and operation.
Credo AI
Credo AI Platform
AI Governance
SaaS
["AI registry and discovery","shadow AI discovery and classification","risk management and control library","pre-built regulatory policy packs","governance workflows with approval gates","runtime governance and compliance monitoring"]
true
https://www.credo.ai/product
Credo AI sells a named AI governance platform purpose-built for enterprise AI. The product page describes discovery, registry, risk management, compliance policy packs, runtime governance, shadow-AI discovery, and monitoring/reporting across agents, models, and apps in 2026.
Holistic AI
Guardian Agents
AI Governance
Hybrid
["real-time oversight of AI agents","cross-cloud policy enforcement","anomaly detection and risk scoring","kill switch and automatic blocking","privilege revocation and quarantine","PII detection and redaction"]
true
https://www.holisticai.com/guardian-agents
Holistic AI’s Guardian Agents are a named enterprise AI governance/runtime-oversight product. The page says Sentinel and Operative agents monitor, evaluate, and intervene in real time across clouds to enforce policy, block prompt injection, quarantine unregistered agents, revoke privileges, and redact PII in outputs.
IBM watsonx.governance is an active commercial AI-governance product with 2026 agentic additions. IBM’s announcement and docs describe AI agent onboarding risk questionnaires, Guardrail Manager, prompt-template governance, runtime monitoring, red-teaming metrics, and planned agentic red teaming and multi-turn guardrails.
Asenion
Asenion Platform
AI Governance
SaaS
["risk assessment for AI models and data practices","oversight and control across AI lifecycle","testing agents for fairness privacy and security","EU AI Act NIST and ISO 42001 alignment","support for predictive generative and agentic AI","end-to-end governance risk and compliance"]
Asenion is an active AI-governance platform launched from Fairly AI and Anch.AI. The launch post says the platform assesses and mitigates risk across AI models and data practices, provides end-to-end governance/risk/compliance for predictive, generative, and agentic AI, and maps to EU AI Act, NIST AI RMF, and ISO 42001.
WhyLabs
WhyLabs Secure
Model Monitoring
Hybrid
["trace dashboards and detail views","policy dashboard with 1-click rulesets","MITRE ATLAS and OWASP-aligned guardrails","hallucination and misuse monitoring","blocked and flagged interaction metrics","containerized agent in customer VPC"]
true
https://docs.whylabs.ai/docs/secure/intro/
WhyLabs Secure is a named LLM security/guardrails capability within the WhyLabs AI Control Center. The docs describe hybrid deployment, trace visualization, policy management, MITRE/OWASP-aligned guardrails, and protection against hallucinations, data leakage, misuse, and inappropriate content for production AI applications.
Arthur
Agent Discovery & Governance Platform
AI Governance
Hybrid
["agent discovery and living inventory","automated governance with policy-based controls","continuous behavior analysis","centralized dashboards and alerts","RBAC and SSO","local eval engine with no sensitive data leaving"]
true
https://www.arthur.ai/aws
Arthur markets a named Agent Discovery & Governance Platform for enterprise agentic AI. The official page describes agent discovery/cataloging, automated governance with policy-based controls, continuous behavior analysis, observability, RBAC/SSO, and an eval engine that runs next to workloads while keeping sensitive data local.
ModelOp
ModelOp Center
AI Governance
SaaS
["AI system of record","use case intake and registration","policy-based risk assessment and tiering","approval workflows","continuous testing and evidence collection","production monitoring and review"]
true
https://www.modelop.com/ai-governance-software
ModelOp Center is a named enterprise AI lifecycle management and governance product. The official page emphasizes a single AI system of record, intake/registration, policy-based risk tiering, approval workflows, continuous testing and monitoring, and enforceable governance across ML, GenAI, and agents.