Judging Criteria

A transparent evaluation framework ensuring fair and comprehensive assessment of all submissions.

Evaluation Framework

Scoring Breakdown

Our judges evaluate projects across six key dimensions, each weighted to reflect its importance in cybersecurity innovation.

Problem & Idea Quality

How strong and meaningful the core idea is.

20%

Evaluation Points

  • Is the problem real and relevant?
  • Is the solution novel or significantly better than existing solutions?
  • Does the team clearly explain why the problem matters?

Examples

  • 1–3: Generic idea, unclear problem
  • 4–7: Reasonable idea but common
  • 8–10: Unique or deeply relevant concept

Technical Execution

Working prototype, code quality, and technical difficulty.

30%

Evaluation Points

  • Working prototype
  • Code quality and architecture
  • Security practices
  • Technical difficulty
  • Effective use of frameworks / APIs

Examples

  • 1–3: Barely functional or mostly theoretical
  • 4–7: Working prototype with moderate complexity
  • 8–10: Robust, technically impressive implementation

Real-World Impact

Deployable solutions and practical tools.

20%

Evaluation Points

  • Can this realistically be used in production?
  • Does it solve a real-world security / AI / web problem?
  • Is it scalable?

Examples

  • 1–3: Mostly conceptual
  • 4–7: Could be useful with improvements
  • 8–10: Immediately valuable or highly scalable

Innovation & Creativity

How original the solution is.

15%

Evaluation Points

  • New approach to a known problem
  • Clever technical design
  • Creative use of technology

Examples

  • 1–3: Common or obvious
  • 4–7: Some creativity
  • 8–10: Truly innovative

Demo & Presentation

How well the team explains their project.

10%

Evaluation Points

  • Clarity of explanation
  • Demo quality
  • Ability to answer technical questions

Examples

  • 1–3: Confusing demo
  • 4–7: Clear but basic
  • 8–10: Excellent explanation and live demo

Documentation & Open Source

GitHub + Devpost submission quality.

5%

Evaluation Points

  • Clear README
  • Setup instructions
  • Code documentation
  • GitHub structure

Examples

  • 1–3: Minimal documentation
  • 4–7: Acceptable documentation
  • 8–10: Professional open-source quality
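As a sketch of how the six weights above might combine into a final score, here is a hypothetical weighted sum, assuming each criterion is scored 1–10 and the published percentages are used directly as weights (the judges' actual aggregation method may differ):

```python
# Hypothetical illustration of combining per-criterion scores (1-10)
# into a single weighted total, using the published percentages.
# The actual aggregation used by the judges may differ.

WEIGHTS = {
    "Problem & Idea Quality": 0.20,
    "Technical Execution": 0.30,
    "Real-World Impact": 0.20,
    "Innovation & Creativity": 0.15,
    "Demo & Presentation": 0.10,
    "Documentation & Open Source": 0.05,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (1-10) into a weighted total (1-10)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights cover 100%
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Example: a project strong on execution but light on documentation.
scores = {
    "Problem & Idea Quality": 8,
    "Technical Execution": 9,
    "Real-World Impact": 7,
    "Innovation & Creativity": 6,
    "Demo & Presentation": 8,
    "Documentation & Open Source": 4,
}
print(round(weighted_score(scores), 2))  # → 7.6
```

Because Technical Execution carries the largest weight (30%), a one-point change there moves the total three times as much as a one-point change in Demo & Presentation (10%).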

Judging Process

How Judging Works

A structured evaluation process ensuring every project receives fair and thorough assessment.

Initial Review
2 hours

Judges review all submissions and score them against the criteria

Shortlisting
1 hour

Top projects are selected for the final presentation round

Demo Day
4 hours

Finalists present live to the judges and community

Final Scoring
30 minutes

Judges finalize scores and select winners

What Judges Look For

Key Evaluation Areas

Understanding what makes a winning project across different aspects of cybersecurity innovation.

Technical Depth
  • Working and functional prototype
  • Clean, well-architected codebase
  • Implementation of security best practices
  • Effective use of modern frameworks and APIs

Problem Relevance
  • Addressing real-world security challenges
  • Clear explanation of why the problem matters
  • Novelty of the proposed solution
  • Significantly better than existing alternatives

Practicality & Impact
  • Realistic production implementation
  • Scalable architecture and design
  • Immediate value to the security community
  • Practicality over mere gimmicks

Presentation Quality
  • Clarity and confidence in explanation
  • High-quality, stable live demo
  • Ability to handle technical deep-dives
  • Professional documentation and README

Track-Specific Focus

Specialized Criteria

Each track has specific emphasis areas while maintaining the core evaluation framework.

AI + Security

Track Focus

AI/ML in security applications

Special Emphasis

  • Defensive AI systems
  • AI-assisted monitoring
  • Model safety
  • Robustness tooling

Network Security

Track Focus

Network and infrastructure security

Special Emphasis

  • Software security
  • Infrastructure tools
  • Protocol security
  • Physical network security

Tomfoolery

Track Focus

Fun, absurd, but working solutions

Special Emphasis

  • Creative execution
  • Working prototypes
  • Humor and innovation
  • Technical depth despite absurdity

Open Innovation

Track Focus

High-impact innovative solutions

Special Emphasis

  • Real-world applicability
  • Technical innovation
  • Market impact
  • Scalability

Resources

Additional Information

Download detailed judging criteria and find answers to common questions about the evaluation process.

Download Criteria

Get the complete judging criteria document with detailed explanations and examples.

Download PDF

Meet the Judges

Learn about our expert panel of judges and their backgrounds in cybersecurity.

View Speakers & Judges

Secure your spot

Join the Global Movement

Registration is free and open to builders worldwide. Join 1000+ others in the ultimate security marathon.