Measuring faith-facing AI before institutions trust it.

Fide AI is an independent nonprofit research institute building evaluations, benchmarks, and public standards for AI systems used in theology, formation, education, moral reasoning, and pastoral-adjacent care.

Benchmark corpus: Versioned scenarios and manifest live in the Fide AI benchmark tree.
Evaluation runner: Standalone execution code is separated from product-specific applications.
Publication path: Venue strategy across AI ethics, NLP, and faith-facing research is documented.
Independence layer: Claims, conflicts, funding, and related-entity rules are explicit.
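As one illustration of what a versioned, inspectable benchmark corpus can look like, here is a minimal sketch of a scenario manifest in Python. The field names, scale, and held-out flag are hypothetical, not Fide AI's actual schema; the point is that scenarios carry stable identifiers and versions, and that a held-out set can be withheld from public release.

```python
from dataclasses import dataclass

# Hypothetical sketch of one entry in a versioned benchmark manifest.
# Field names are illustrative, not Fide AI's published schema.
@dataclass(frozen=True)
class Scenario:
    scenario_id: str   # stable identifier, e.g. "boundaries-001"
    version: str       # version of the scenario text
    dimension: str     # e.g. "boundaries", "grounding", "disagreement"
    held_out: bool     # excluded from the public release if True

def public_manifest(scenarios):
    """Return only the scenarios safe to publish (held-out set withheld)."""
    return [s for s in scenarios if not s.held_out]

corpus = [
    Scenario("boundaries-001", "1.0.0", "boundaries", held_out=False),
    Scenario("grounding-002", "1.1.0", "grounding", held_out=True),
]
print(len(public_manifest(corpus)))  # 1
```

Keeping the manifest as plain, versioned data is what lets institutions inspect exactly which scenarios a score refers to.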

Faith leaders are being invited into AI. Measurement has to follow.

Major AI labs are beginning to consult religious and philosophical communities about model behavior, moral development, and spiritual questions. That is a real shift, but feedback alone is not enough.

Fide AI asks the next question: once faith communities are in the room, how do we measure whether AI systems actually behave better in theological, moral, and pastoral-adjacent contexts?

Evaluation questions for high-trust spiritual and moral contexts.

Open research
01 Does the system preserve boundaries?

We test whether responses avoid false pastoral authority, overconfident doctrine, unsafe reassurance, and inappropriate substitution for human care.

02 Does the system ground its claims?

We examine evidence use, fabricated citations, prooftexting, tradition-aware sourcing, and humility under uncertainty.

03 Does the system handle disagreement honestly?

We evaluate whether responses represent primary, secondary, and tertiary differences without flattening traditions or pretending consensus.
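The three questions above can be read as dimensions of a scoring rubric. The sketch below is purely illustrative, assuming a hypothetical 0-4 scale per dimension; it is not a published Fide AI standard, only a shape such an evaluation could take.

```python
# Illustrative rubric for the three evaluation questions above.
# Dimension names and the 0-4 scale are hypothetical assumptions.
RUBRIC = {
    "boundaries": "avoids false pastoral authority and unsafe reassurance",
    "grounding": "grounds claims; no fabricated citations or prooftexting",
    "disagreement": "represents traditions' differences without forcing consensus",
}

def aggregate(scores: dict) -> float:
    """Average per-dimension scores; every rubric dimension must be rated."""
    missing = RUBRIC.keys() - scores.keys()
    if missing:
        raise ValueError(f"unrated dimensions: {sorted(missing)}")
    return sum(scores.values()) / len(scores)

print(aggregate({"boundaries": 4, "grounding": 3, "disagreement": 3}))
```

Requiring every dimension to be rated before aggregating reflects the document's claim that correctness here cannot be reduced to a single label.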

A faith-institution complement to moral reasoning and AI ethics research.

Broader moral reasoning research

Organizations such as the Center for Ethical Intelligence study moral reasoning, human flourishing, alignment, and responsible AI development across domains.

Fide AI's focus

Fide AI applies that measurement impulse to faith institutions: doctrine, tradition-aware disagreement, grounded claims, pastoral boundaries, and institutional adoption.

Why this needs its own institute

Faith-facing AI is not just generic safety plus religion. It requires domain-specific evaluation of authority, care, sources, tradition, and claims.

Research artifacts that institutions can inspect.

Evidence for the people deciding whether AI belongs in high-trust settings.

AI labs

Move from faith-community listening sessions to measurable behavior under faith-sensitive evaluation tasks.

Institutions

Assess risks before adopting AI in churches, schools, ministries, publishers, and formation contexts.

Researchers

Study value-laden, open-ended model behavior where correctness cannot be reduced to a single label.

Funders

Support public-interest infrastructure before market incentives define trust in faith-facing AI.

Independent standards need public trust.

Fide AI is structured as the nonprofit home for standards, benchmarks, public research, access, education, and accountability. Product companies belong on the evaluated-builder side of the line.

No pay-for-rank: Funding, sponsorship, participation, or commercial relationships cannot purchase favorable results, certification, release timing, or claims language.
Related-entity controls: Any related participant receives enhanced review, disclosure, recusal, and non-conflicted signoff.
Claims humility: Scores are evidence of benchmark behavior, not theological authority, pastoral authority, or universal product approval.
Public accountability: Methods, caveats, limitations, correction process, and funding categories should be legible to the field.

What Fide AI is not.

Not a vendor ranking marketplace.
Not a pay-for-rank certification shop.
Not a theological or pastoral authority.
Not a replacement for church, school, family, counseling, or institutional judgment.
Not the marketing arm of any product company.

Help build the evidence layer for faith-facing AI.

The work needs spiritual seriousness, technical depth, institutional judgment, and operational discipline. The nonprofit structure lets contributors serve the standard without becoming vendors, competitors, or owners.

Funders: Support benchmark infrastructure, reviewer calibration, release operations, and access for under-resourced institutions.
Institutions: Join readiness conversations before AI systems are adopted in high-trust settings.
Builders: Prepare for evaluation under published rules, disclosed conditions, and claims limits.
Reviewers and researchers: Contribute to calibration, methodology review, failure analysis, and publication quality.

Inspect the machinery behind the mission.

Fide AI publishes policies and methods that make the work accountable: benchmark scope, scoring rules, held-out set policy, claims limits, conflicts, funding transparency, and related-entity controls.

The aim is a wiser faith ecosystem in the age of AI.

Fide AI gives leaders a place to turn before trust is delegated to systems no one has independently evaluated.