Responsible AI Policy
Proven Software, Inc. develops artificial intelligence technologies designed to support healthcare organizations.
Human Oversight
AI functionality is provided as a decision-support tool and must not be relied upon as the sole basis for clinical or operational decisions. AI outputs support healthcare workflows but do not replace clinical judgment; healthcare professionals remain responsible for all decisions affecting patient care.
Data Usage
Proven does not train AI models using customer or patient data.
Privacy Protection
AI systems operate within the same security and privacy controls as the Proven platform.
Transparency
Users are expected to review AI-generated outputs before relying on them in operational or clinical workflows.
Commitment
Proven is committed to developing AI responsibly with a focus on privacy, transparency, and patient safety.