55 projects were debated, scored, and discussed across 4 rounds. These 10 emerged as the most defensible, impactful, and achievable. For each: we start with the impact, then work backward to what the project must deliver.
Every project below follows this logic: Who benefits, how, and by how much? → What must actually be built to create that impact? → What does it need to look like on college apps? We rejected projects where the "impact" was hypothetical or the engineering spec didn't match the ambition.
Who is helped, how measurably, and why does it matter beyond the student's ego?
What does the project actually need to be — technically and tactically — to generate that impact?
Can a HS student realistically build this in 3-18 months without a team of engineers?
What CS/CE trait does this project prove? Engineering? Leadership? Research? Empathy?
1,000+ daily active readers get factual, unbiased news summaries without ads, paywalls, or algorithmic manipulation — by using local LLMs on a student's own server to summarize free RSS feeds.
For this impact to be real, the site needs a real domain, real traffic, real return visitors, and evidence of sustained usage. A project on localhost doesn't count. The technical spec must include: an RSS feed ingestion pipeline, local LLM summarization (Ollama or similar), a clean responsive site, and an analytics setup that proves 1,000 readers/day.
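To make that spec concrete, here is a minimal sketch of the ingestion-plus-summarization loop, assuming Ollama is running at its default local port; the feed contents and the model name are illustrative, not prescribed:

```python
import json
import urllib.request
import xml.etree.ElementTree as ET

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def parse_rss(xml_text):
    """Extract title/link/description from each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [{
        "title": item.findtext("title", ""),
        "link": item.findtext("link", ""),
        "description": item.findtext("description", ""),
    } for item in root.iter("item")]

def summarize(text, model="llama3"):
    """Ask the local model for a neutral two-sentence summary (requires a
    running Ollama server; model name is an assumption)."""
    payload = json.dumps({
        "model": model,
        "prompt": "Summarize this news item in two neutral sentences:\n" + text,
        "stream": False,
    }).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Parsing works standalone, without the LLM running:
feed = """<rss><channel><item>
  <title>Local election results</title>
  <link>https://example.com/1</link>
  <description>Full results from last night's vote.</description>
</item></channel></rss>"""
print(parse_rss(feed)[0]["title"])  # Local election results
```

A scheduler (cron or a simple loop) would run this against each feed and write summaries to the site's database.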
No single project demonstrates more distinct skills simultaneously: systems administration (running the LLM server), backend engineering (RSS parsing, scheduling), frontend design (clean reading experience), AND product thinking (understanding why people are desperate for ad-free news). The 1,000 daily user bar forces the student to actually market the project, not just build it.
High schoolers who are strong in a subject become tutors for younger students — the platform handles matching, scheduling, and payment (even token/free). $5,000+ in combined tutor earnings. 300+ younger students helped. That's a real marketplace, a real revenue story, and real community impact.
Must include: student/parent onboarding flow, tutor verification (teacher reference), scheduling system, messaging, and a payment mechanism (even Venmo integration). Needs 20+ tutors and 100+ students to be credible. The student must demonstrate leadership by recruiting tutors, not just building the tech.
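The matching core can start small. A hedged sketch, where the dict shapes (`subjects`, `slots`) and names are hypothetical, not a prescribed schema:

```python
def match_tutors(student, tutors):
    """Rank tutors for a student by subject fit, then by how many of the
    student's requested time slots each tutor can cover."""
    ranked = []
    for t in tutors:
        if student["subject"] not in t["subjects"]:
            continue  # no subject fit, skip
        overlap = len(set(student["slots"]) & set(t["slots"]))
        if overlap:
            ranked.append((t["name"], overlap))
    return sorted(ranked, key=lambda pair: -pair[1])

student = {"subject": "algebra", "slots": ["Mon4pm", "Wed4pm"]}
tutors = [
    {"name": "Priya", "subjects": ["algebra", "geometry"], "slots": ["Mon4pm"]},
    {"name": "Sam", "subjects": ["biology"], "slots": ["Mon4pm"]},
]
matches = match_tutors(student, tutors)
print(matches)  # [('Priya', 1)]
```

The hard part, per the spec above, isn't this function; it's the verification, messaging, and payment plumbing around it.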
Highest founder score of any project. This is the only project where the student demonstrably recruits, leads, and manages a community of 100+ people. That's not a coding project — it's a leadership project that happens to involve code. AOs see through the difference between "I built a thing" and "I built a thing that 400 people use because I convinced them to."
Founding and scaling a Girls Who Code chapter at school from 0 to 30+ active members. Designing the curriculum. Recruiting 20 HS volunteers as instructors. Getting 10 of those girls to continue into AP Computer Science. That's measurable pipeline change — the exact problem universities claim to care about.
Must document: founding story, curriculum developed, volunteer training process, retention data. The code component can be: building the chapter's own website, a shared resource library, or a small internal tool. The leadership is the headline; the tech is evidence of depth.
Diversity in tech is not a box to check — it's a problem universities are actively judged on. A student who identifies a gap in their own community and builds a solution (rather than waiting to be asked) demonstrates exactly the kind of agency top schools want. The retention data (10 continuing to AP CS) is the key proof point.
Contributing meaningful patches (not docs-only, not first-issue) to real open source projects — Rust, Python, Node.js, React, Vue. Code that runs in production, in real applications, used by millions of developers. Three merged PRs minimum, acknowledged in release notes or changelogs.
Must show: PR links, the actual code changes, the review feedback received and responded to, and the impact (how many downloads, what version it shipped in). "I fixed a bug in a library" is interesting; "my fix shipped in Python 3.12 and affected every developer using that library" is remarkable.
The engineer score speaks. This is the purest demonstration of real software engineering: reading other people's codebases, understanding context, writing tests, responding to code review feedback, and shipping code that professionals rely on. It's also genuinely hard to fake — the PR history is public and verifiable.
A management platform for school clubs — events, attendance, elections, communication — built for and adopted by the student's own club(s), then expanded to 15 school clubs covering 500+ students. The student didn't just build a tool; they convinced 14 other clubs to switch to it.
Core features: event scheduling, attendance tracking, officer elections, announcements. Must include evidence of adoption beyond the student's own club. The "sell" is critical: can the student articulate why other club presidents should switch? That's the leadership story.
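A sketch of the events-and-attendance data model (elections and announcements omitted); the field names are illustrative, not a fixed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Event:
    name: str
    when: datetime
    attendees: set = field(default_factory=set)

@dataclass
class Club:
    """Minimal club model: a roster plus events with check-in attendance."""
    name: str
    members: set = field(default_factory=set)
    events: list = field(default_factory=list)

    def check_in(self, event, member):
        if member in self.members:  # only roster members can check in
            event.attendees.add(member)

    def attendance_rate(self, member):
        """Fraction of already-held events this member attended."""
        held = [e for e in self.events if e.when < datetime.now()]
        if not held:
            return 0.0
        return sum(member in e.attendees for e in held) / len(held)

club = Club("Robotics")
club.members = {"ana", "ben"}
meeting = Event("Kickoff", datetime(2024, 9, 1))
club.events.append(meeting)
club.check_in(meeting, "ana")
print(club.attendance_rate("ana"))  # 1.0
```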
The Admissions Expert flagged this as "leadership in action, not leadership in claim." Building the tool is the easy part. Getting 14 other club presidents to adopt it requires user research, persuasion, support, and iteration. The 500-user number is real and verifiable through the school's records.
An AI tutor that adapts to the individual student's learning pace — not a dumb chatbot, but one that tracks concepts mastered vs. struggling, adjusts problem difficulty, and provides explanations in the student's learning style. 200+ students measurably improved grades in one semester.
Must have: a concept mastery tracking system, adaptive difficulty (not random), multi-modal explanations (text + visual), and a before/after grade comparison for at least 50 students. The AI piece is the plumbing; the learning science piece is what makes it credible.
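Concept mastery tracking can start as simply as an exponential moving average per concept. A sketch where the smoothing factor and the +0.1 difficulty offset are illustrative choices, not established learning-science constants:

```python
from dataclasses import dataclass, field

@dataclass
class MasteryTracker:
    """Per-concept mastery in [0, 1], updated as an exponential moving
    average of recent correctness. alpha = 0.3 is an arbitrary starting point."""
    alpha: float = 0.3
    mastery: dict = field(default_factory=dict)

    def record(self, concept, correct):
        old = self.mastery.get(concept, 0.5)  # unseen concepts start at 0.5
        signal = 1.0 if correct else 0.0
        self.mastery[concept] = (1 - self.alpha) * old + self.alpha * signal

    def next_difficulty(self, concept):
        """Target problems slightly above current mastery, so the student is
        stretched but not overwhelmed."""
        return min(1.0, self.mastery.get(concept, 0.5) + 0.1)

t = MasteryTracker()
for correct in [True, True, False, True]:
    t.record("fractions", correct)
print(round(t.mastery["fractions"], 3))  # ~0.67
```

The adaptive piece is choosing the next problem whose tagged difficulty is nearest `next_difficulty()`; the before/after grade study sits on top of this, not inside it.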
Education technology is littered with tools that claim to help but don't measure outcomes. The grade-improvement requirement forces the student to actually run a controlled experiment — collect before/after data, account for confounding variables. That's research methodology embedded in a product project.
An AI-powered lost & found for the entire school district — image-based matching so you photograph a found item and the system searches for matching photos from reports. 200+ items reunited in year one. The student didn't just build a lost & found; they convinced the district IT department to adopt it.
Must have: image upload + CLIP or similar embedding-based matching, district-wide admin dashboard, automated notification to the person who lost the item. The technical challenge is real: image search at scale, false positive management, privacy controls. The adoption story is equally important.
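The matching logic reduces to nearest-neighbor search over image embeddings. A sketch of the ranking step, assuming an embedding model (CLIP or similar) has already produced the vectors; the toy three-dimensional vectors and the 0.85 threshold are illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_matches(query_vec, report_vecs, threshold=0.85, top_k=3):
    """Rank lost-item reports by similarity to the found-item photo.
    The threshold is the false-positive control: below it, no notification."""
    scored = [(rid, cosine(query_vec, v)) for rid, v in report_vecs.items()]
    scored = [(rid, s) for rid, s in scored if s >= threshold]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

reports = {"r1": [1.0, 0.1, 0.0], "r2": [0.0, 1.0, 0.0]}
matches = best_matches([1.0, 0.0, 0.0], reports)
print(matches[0][0])  # r1
```

At district scale, the linear scan would be replaced with an approximate nearest-neighbor index, but the threshold-then-rank shape stays the same.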
Computer vision projects are expected to be toy demos. This one has a quantifiable outcome (200 reunited items) and a real institutional adoption story. The district partnership is the differentiator — the student had to navigate bureaucracy, present to administrators, and support the tool in production.
An anonymous, confidential AI check-in bot for student mental health. Not therapy — check-ins, coping strategies, and crisis detection. When the bot detects crisis language (trained on a validated screening instrument), it escalates to school counselors. 12 real crisis flags in a semester, each followed up by a counselor.
Must have: crisis detection using a validated instrument (PHQ-4 or similar), counselor escalation pathway with proper handoff, data privacy safeguards (no PII stored), and a pilot approved by school administration. This is a safety-critical system — the stakes are real.
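Scoring against the instrument should be kept separate from escalation policy. A sketch using the standard PHQ-4 severity bands; the moderate-or-worse escalation rule is an illustrative policy choice to be set with counselors, not a clinical recommendation:

```python
def phq4_severity(answers):
    """Score a PHQ-4 screen: four items, each rated 0-3, total 0-12.
    Standard bands: 0-2 none, 3-5 mild, 6-8 moderate, 9-12 severe."""
    assert len(answers) == 4 and all(0 <= a <= 3 for a in answers)
    total = sum(answers)
    if total >= 9:
        return total, "severe"
    if total >= 6:
        return total, "moderate"
    if total >= 3:
        return total, "mild"
    return total, "none"

def should_escalate(answers):
    """Policy layer: hand moderate-or-worse screens to a counselor.
    This threshold is an assumption for the sketch."""
    _, band = phq4_severity(answers)
    return band in ("moderate", "severe")

print(should_escalate([3, 3, 2, 1]))  # True
```

Keeping the validated scoring and the escalation policy in separate functions means counselors can tune the policy without anyone touching the instrument logic.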
The Admissions Expert and HS Senior Mentor both flagged this as demonstrating "emotional maturity and engineering responsibility." This is not a toy chatbot — it's a system with real safety implications. Building it properly (validated instruments, counselor escalation, proper privacy) shows the student understood the weight of what they were building.
A CI/CD-based auto-grader integrated with GitHub Classroom that runs student code against test suites, checks for style, and provides feedback — used by 5 computer science teachers to grade 2,000+ assignments. The student didn't just make a tool; they got 5 teachers to change how they grade.
Must integrate with GitHub Classroom (real LMS integration), handle multiple languages (Python, Java, JavaScript), run test suites, detect plagiarism, and provide actionable feedback to students. The technical spec: GitHub Actions + custom grader + feedback API + a clean teacher dashboard.
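The grading core is sandboxed execution with a timeout. A minimal Python sketch; the real system would run this inside GitHub Actions per submission, and the feedback formatting here is illustrative:

```python
import os
import subprocess
import sys
import tempfile

def grade(student_code, test_code, timeout=10):
    """Run a submission plus the teacher's assertions in a subprocess;
    a nonzero exit code or a timeout fails the submission."""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "submission.py")
        with open(path, "w") as f:
            f.write(student_code + "\n" + test_code + "\n")
        try:
            proc = subprocess.run([sys.executable, path],
                                  capture_output=True, text=True,
                                  timeout=timeout)
        except subprocess.TimeoutExpired:
            return {"passed": False,
                    "feedback": "Timed out: check for infinite loops."}
        if proc.returncode == 0:
            return {"passed": True, "feedback": "All tests passed."}
        lines = proc.stderr.strip().splitlines()
        return {"passed": False,
                "feedback": lines[-1] if lines else "Failed."}

result = grade("def add(a, b):\n    return a - b",
               "assert add(2, 2) == 4, 'add(2, 2) should be 4'")
print(result["passed"])  # False
```

Surfacing that last traceback line as the student's feedback is what turns a pass/fail bit into something actionable.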
The highest engineering score in the top 10. This project demonstrates production-grade systems thinking: CI/CD pipelines, API design, test suite engineering, and UX for non-technical users (the teachers). It also requires the student to deeply understand assessment design — what makes a good test suite? That's real pedagogy + real engineering.
An LLM-powered essay analyzer that gives specific, actionable feedback on college essays — structure, narrative clarity, voice, specific weak points — before the student submits. 500+ students used it. Multiple reported acceptances from top-choice schools. The student built a tool that others trust with something high-stakes.
Must demonstrate: feedback quality compared to human counselors (an informal study is fine), evidence of adoption beyond friends, and anonymized testimonials. The model must be fine-tuned or prompted specifically for college-essay criteria (Common App rubric, the AO perspective). Generic GPT feedback doesn't count.
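"Prompted specifically" means pinning the model to essay-specific criteria rather than generic writing advice. A sketch of that prompt construction; the rubric items below are illustrative, not the Common App's official rubric:

```python
RUBRIC = [  # illustrative criteria, written for this sketch
    "structure: does each paragraph advance one idea?",
    "narrative clarity: is there a clear arc from situation to insight?",
    "voice: does it sound like a specific student, not a committee?",
    "weak points: quote the two weakest sentences and explain why",
]

def build_prompt(essay):
    """Force criterion-by-criterion evaluation with quoted evidence, so the
    model can't fall back on generic praise."""
    criteria = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(RUBRIC))
    return (
        "You are an admissions reader. Evaluate this college essay against "
        "each criterion below, citing exact sentences as evidence:\n"
        f"{criteria}\n\nESSAY:\n{essay}"
    )

prompt = build_prompt("Sample essay text.")
print("ESSAY:" in prompt)  # True
```

The informal study then compares this structured output against counselor feedback on the same anonymized essays.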
This project is strategically brilliant because the evaluator (the college essay) is also the domain of the admissions reader. When an AO sees "I built a tool that helped 500 students write better college essays," the connection to the student's own application is immediate and legible. The proof point — "multiple acceptances from top schools" — must be real, not fabricated.