Evaluation Criteria & Process

AI Oracle Prompting Standards

Seedify queries multiple AI oracles (Claude, ChatGPT, Gemini, Grok) with a standardized prompt so their evaluations can be compared directly.

Standard Prompt Structure:

You are evaluating a blockchain project milestone for completion.

PROJECT: [Name]
MILESTONE: [Title]
DEADLINE: [Date]

COMPLETION CRITERIA:
[List criteria with targets]

EVIDENCE SUBMITTED:
[Evidence package with links]

EVALUATION TASKS:
1. Verify each criterion is met using provided evidence
2. Check for evidence manipulation (wash trading, Sybil attacks, fake metrics)
3. Assess evidence quality and credibility
4. Flag any ambiguous or borderline cases
5. Provide binary recommendation: PASS or FAIL

OUTPUT FORMAT:
{
  "recommendation": "PASS" | "FAIL",
  "criteria_results": [
    {
      "criterion": "[text]",
      "met": true | false,
      "confidence": "high" | "medium" | "low",
      "evidence_quality": "strong" | "adequate" | "weak",
      "notes": "[explanation]"
    }
  ],
  "red_flags": ["list of concerns"],
  "overall_assessment": "[2-3 sentences]"
}
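The prompt template and output schema above can be exercised programmatically. The sketch below is illustrative only, not Seedify's actual tooling: `build_prompt` and `validate_oracle_response` are hypothetical helper names, and the field checks simply mirror the documented JSON schema.

```python
import json

# Hypothetical helper: fill the standard prompt template shown above.
PROMPT_TEMPLATE = """You are evaluating a blockchain project milestone for completion.

PROJECT: {project}
MILESTONE: {milestone}
DEADLINE: {deadline}

COMPLETION CRITERIA:
{criteria}

EVIDENCE SUBMITTED:
{evidence}
"""

def build_prompt(project, milestone, deadline, criteria, evidence):
    return PROMPT_TEMPLATE.format(
        project=project,
        milestone=milestone,
        deadline=deadline,
        criteria="\n".join(f"- {c}" for c in criteria),
        evidence="\n".join(f"- {e}" for e in evidence),
    )

def validate_oracle_response(raw: str) -> dict:
    """Parse an oracle reply and check it against the documented schema."""
    data = json.loads(raw)
    assert data["recommendation"] in ("PASS", "FAIL")
    for result in data["criteria_results"]:
        assert isinstance(result["met"], bool)
        assert result["confidence"] in ("high", "medium", "low")
        assert result["evidence_quality"] in ("strong", "adequate", "weak")
    assert isinstance(data["red_flags"], list)
    assert isinstance(data["overall_assessment"], str)
    return data
```

Validating every oracle reply against the schema before it reaches manual review catches malformed or truncated model output early, so reviewers only ever see well-formed recommendations.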

Seedify Manual Review

After the AI Oracle evaluation, the Seedify team conducts a manual review.

Review Checklist:

Review Outcomes:

  1. Agree with Oracle (Pass): Recommend approval to Community

  2. Agree with Oracle (Fail): Recommend denial to Community

  3. Disagree with Oracle (False Positive): Override with disclaimer, recommend denial

  4. Disagree with Oracle (False Negative): Override with disclaimer, recommend approval
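The four review outcomes above reduce to a small decision function. This is an illustrative sketch, not Seedify code; the inputs `oracle_pass` and `reviewer_agrees` and the output fields are assumed names.

```python
def review_outcome(oracle_pass: bool, reviewer_agrees: bool) -> dict:
    """Map the Oracle recommendation and the manual reviewer's verdict
    to the recommendation forwarded to the Community vote."""
    if reviewer_agrees:
        # Outcomes 1 and 2: Seedify echoes the Oracle's recommendation.
        return {"recommend_approval": oracle_pass, "override_disclaimer": False}
    # Outcomes 3 and 4 (false positive / false negative): Seedify
    # overrides the Oracle and must attach a disclaimer.
    return {"recommend_approval": not oracle_pass, "override_disclaimer": True}
```

Note that a disclaimer is required exactly when the reviewer disagrees, regardless of which direction the override goes.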

Seedify Disclaimer Format (when overriding Oracle):

Community Vote Presentation

Every Community vote is accompanied by a standardized report.

Report Structure:
