DeepEval
product · ai_evaluation
Overview
Developed by: Confident AI
License: Apache 2.0
Open source: ✓
Primary language: Python
Use case: LLM evaluation framework with 14+ metrics for unit testing AI outputs in CI/CD
Knowledge graph stats
Claims: 7
Avg confidence: 97%
Avg freshness: 99%
Last updated: yesterday
Trust distribution: 100% unverified


Open-source LLM evaluation framework with 14+ metrics for unit testing AI outputs
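The core idea behind such a framework is to treat each LLM output as a unit-test case: score it with a metric and fail the test (and the CI run) when the score falls below a threshold. Below is a minimal self-contained sketch of that pattern. The toy token-overlap scorer here is an illustration only, not DeepEval's actual metric implementation, and the function names (`relevancy_score`, `assert_metric`) are hypothetical.

```python
# Sketch of the "metric + threshold" unit-testing pattern that tools
# like DeepEval apply to LLM outputs in CI/CD pipelines.
# NOTE: the scoring heuristic below is a toy stand-in, not DeepEval's API.

def relevancy_score(question: str, answer: str) -> float:
    """Fraction of question tokens that also appear in the answer (toy metric)."""
    q_tokens = set(question.lower().split())
    a_tokens = set(answer.lower().split())
    if not q_tokens:
        return 0.0
    return len(q_tokens & a_tokens) / len(q_tokens)

def assert_metric(question: str, answer: str, threshold: float = 0.5) -> None:
    """Fail the test (and hence the CI run) when the score is below threshold."""
    score = relevancy_score(question, answer)
    assert score >= threshold, f"relevancy {score:.2f} below threshold {threshold}"

# Usage, pytest-style: each generated answer becomes one test case.
assert_metric(
    "What is the capital of France?",
    "The capital of France is Paris.",
    threshold=0.5,
)
```

In a real setup, the heuristic scorer would be replaced by one of the framework's metrics (relevancy, faithfulness, hallucination, etc.), and the assertion would run under pytest so any regression in output quality blocks the pipeline.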


alternative to

Value: Ragas · Trust: Unverified · Confidence: High · Freshness: Fresh · Sources: 1

integrates with

Value: LangChain · Trust: Unverified · Confidence: High · Freshness: Fresh · Sources: 1

programming language

Value: Python · Trust: Unverified · Confidence: High · Freshness: Fresh · Sources: 1

primary use case

Value: LLM evaluation framework with 14+ metrics for unit testing AI outputs in CI/CD · Trust: Unverified · Confidence: High · Freshness: Fresh · Sources: 1

developed by

Value: Confident AI · Trust: Unverified · Confidence: High · Freshness: Fresh · Sources: 1

open source

Value: true · Trust: Unverified · Confidence: High · Freshness: Fresh · Sources: 1

license type

Value: Apache 2.0 · Trust: Unverified · Confidence: High · Freshness: Fresh · Sources: 1


Claim count: 7 · Last updated: 4/9/2026