AICyberNav

AI Use Policy

Effective Date: April 28, 2026 · AICyberNav, a KCENAV property

Security Disclaimer: Security guidance provided by AICyberNav is informational only; it is not legal counsel and carries no regulatory guarantee. Findings labeled [SEEK EXPERT ADVICE] require review by a qualified security professional, and any binding insurance coverage decision should be made in consultation with a licensed insurance broker.

AICyberNav is an AI-powered cybersecurity assessment platform. Artificial intelligence is central to our Service. This AI Use Policy explains how we use AI, what safeguards we have in place, how we handle your data in the context of AI processing, and our commitments to transparency and responsible AI use.

1. How We Use AI

AI is integrated into the following core functions of the AICyberNav platform:

1.1 Conversational Intake

Our AI-powered intake system conducts interactive conversations to gather information about your organization's technology stack, security practices, business operations, and risk profile. The AI asks targeted follow-up questions to build a comprehensive understanding of your security posture.

1.2 Assessment Generation

Based on the information collected during intake, our AI generates security assessments that include threat identification, vulnerability analysis, risk scoring, and prioritized recommendations. The AI cross-references your inputs against known threat databases, the OWASP LLM Top 10, and industry-standard security frameworks.

1.3 Report Creation

The AI compiles assessment findings into structured, actionable reports. These reports include executive summaries, detailed findings, remediation roadmaps, vendor evaluations, and insurance readiness analyses. Each report is generated specifically for your organization based on the data you provide.

1.4 Insurance Readiness Analysis

The AI evaluates your security posture in the context of cyber insurance requirements, identifying coverage gaps and readiness factors that may affect your insurability and premium rates.

2. AI Models and Infrastructure

Model Provider: AICyberNav currently uses OpenAI models accessed through a secure proxy infrastructure. We do not run AI models directly on our servers.

Secure Proxy: All communications between our platform and the AI model provider are routed through an encrypted proxy that ensures data isolation, request logging, and access control. Your data never flows directly from your browser to the model provider.

Model Selection: We select and configure models specifically for cybersecurity assessment tasks. Model versions are tested and validated before deployment to ensure assessment quality and consistency.

3. Data Handling in AI Processing

We take the following measures to protect your data during AI processing:

  • No Model Training: Your assessment inputs, conversation data, and generated reports are NOT used to train, fine-tune, or improve the underlying AI models. We have contractual agreements with our model provider that prohibit the use of customer data for model training.
  • Transient Processing: Your data is transmitted to the AI model for processing and the response is returned. The model provider does not retain your data beyond the processing window (typically seconds).
  • Data Minimization: We send only the data necessary for the specific assessment task to the AI model. Internal system prompts, configuration data, and other users' data are never included in your processing requests.
  • Encryption: All data transmitted to and from the AI model provider is encrypted using TLS 1.2 or higher.
  • Audit Logging: All AI processing requests are logged (without including the full content of your inputs) for security monitoring and incident response purposes.

4. Output Labeling System

To help you assess the reliability and confidence of AI-generated content, all assessment outputs use the following labeling system:

[VERIFIED] Information that has been cross-referenced against authoritative sources such as the National Vulnerability Database (NVD), OWASP publications, vendor security advisories, or established security frameworks. Verified items include specific CVE references, published vulnerability data, and facts from recognized industry standards.

[ESTIMATE] Analysis, scores, or projections generated by the AI based on the information you provided and general cybersecurity knowledge. Estimates include risk scores, likelihood assessments, cost projections, and prioritization rankings. These are informed approximations, not precise measurements.

[SEEK EXPERT ADVICE] Recommendations or findings where the AI has identified significant complexity, legal implications, regulatory requirements, or potential for high-impact consequences. These items require review by a qualified human professional (security consultant, attorney, compliance officer) before action is taken.

These labels are themselves generated by the AI and should be treated as guidance, not guarantees. When in doubt about the reliability of any output, seek independent professional verification.

5. Anti-Hallucination Measures

AI models can generate plausible-sounding but factually incorrect information ("hallucinations"). We implement the following measures to minimize this risk:

  • No Fabricated Metrics: The AI is instructed not to invent specific statistics, percentages, dollar amounts, or quantitative claims without citing a source. If a specific metric appears in an assessment, it is either derived from your inputs, sourced from a referenced database, or clearly labeled as an [ESTIMATE].
  • CVE Citation Requirements: When the AI references specific Common Vulnerabilities and Exposures (CVEs), it must include the CVE identifier (e.g., CVE-2024-XXXX), and each identifier is validated against the National Vulnerability Database. CVE references that cannot be validated are excluded from final reports.
  • Source Attribution: When the AI references specific frameworks, standards, or guidelines (e.g., NIST CSF, ISO 27001, OWASP Top 10), it attributes the source and version. General cybersecurity knowledge is presented as such, without false attribution to specific sources.
  • Confidence Calibration: The system is configured to express uncertainty appropriately. Rather than presenting uncertain information as fact, the AI uses qualifying language and appropriate output labels.
  • Structured Output Validation: Generated reports pass through validation checks that flag potential inconsistencies, unreferenced claims, and outputs that fall outside expected parameters.

6. Human Oversight

While our assessments are AI-generated, human oversight is built into our process:

  1. System Design: Our assessment frameworks, prompting strategies, and output templates are designed and maintained by cybersecurity professionals.
  2. Quality Monitoring: We regularly review a sample of generated assessments for accuracy, completeness, and appropriate labeling.
  3. Expert Review Option: Users can request that their assessment be reviewed by a qualified security professional. This optional review adds human verification to the AI-generated findings.
  4. Feedback Loop: Users can report inaccuracies or concerns about AI outputs, which are reviewed by our team and used to improve system prompts and validation rules.
  5. Incident Response: If a systematic accuracy issue is identified in AI outputs, we have procedures to notify affected users and issue corrected assessments.

7. Transparency Commitment

We are committed to transparency about our use of AI:

  • AI Attribution: All AI-generated content on our platform is clearly identified as AI-generated. Reports and assessments include a visible attribution statement indicating they were produced by AI.
  • No Hidden AI: We will not use AI in ways that are hidden from users. If a feature is AI-powered, it is disclosed.
  • Model Changes: When we make significant changes to our AI models or processing pipeline that could affect assessment quality, we will disclose these changes to users.
  • Limitations Disclosure: We clearly communicate the limitations of AI-generated assessments in our Disclaimer and throughout the assessment process.
  • Data Use Clarity: We clearly explain how your data is used in AI processing (see section 3 above) and do not use your data in ways beyond what is disclosed.

8. AI Attribution on Outputs

All reports and assessments generated by AICyberNav include the following attribution:

This assessment was generated by AICyberNav using artificial intelligence. AI-generated content should be independently verified before use in critical security, legal, or regulatory decisions. See aicybernav.ai/ai-policy for details.

9. Changes to This Policy

We may update this AI Use Policy as our AI capabilities evolve and as industry best practices develop. We will notify you of material changes by updating the effective date and posting the revised policy on this page. For significant changes affecting data handling or AI model providers, we will provide advance notice via email.

10. Contact Us

If you have questions about our AI practices, want to report an inaccuracy in an AI-generated output, or would like to request a human review of your assessment, please contact us:

AICyberNav (a KCENAV property)

Email: ai@aicybernav.ai

© 2026 AICyberNav. All rights reserved.
A KCENAV.AI Company