For each of the 10 shortlisted projects: exact tech stack, implementation phases, milestones, success metrics, and the essay angle that ties it together. This is the actionable plan.
All tools are free. Laptops: school Chromebooks + library laptops. No budget required.
Duration: 2 weeks, 3 hours/day
Campers: 20–40 (grade 6–9)
Volunteers: 10 HS students (recruited + trained by you)
Cost: $0 — use school facilities, free tools
School board approval — submit proposal 4 months in advance. Include safety plan, volunteer background check process, curriculum outline.
Pre/post survey — use a validated "CS interest" Likert scale (5 questions). Get IRB exemption: this is educational program evaluation, not human subjects research.
Parental consent — required for photography, data collection, and any communication channel used.
Curriculum ownership — at least 40% of curriculum must be original work (not from GWC standard curriculum). Document what you created.
Sustainability plan — document how the camp continues after you graduate. Identify a successor and create an operations guide.
Start with a specific camper. Describe what they said on Day 1 vs. what they built on Day 10. Then zoom out: why was this a CS problem to solve, not just a teaching problem? What did you learn about your own capacity to lead and build something that lasts?
Commitment: Weekly meetings (1 hour) for the school year
Members: 10–30 active members
Curriculum: 30 weeks (one per meeting)
Goal: Every member builds and ships one project
Official GWC affiliation — apply at girlswhocode.com/clubs. Must have faculty sponsor and complete the affiliation process.
AP CS pipeline documentation — counselor signs off on which members enrolled in AP CS the following year. FERPA releases obtained from families.
Student-led structure — you are the facilitator, not the teacher. Members should eventually lead sessions. Document this progression.
End-of-year showcase — public demo day where all members present projects. Invite school board members, parents, local press.
The most powerful version: "I started this chapter because my friend told me she didn't think girls were good at coding." Or: "I was the only girl in my AP CS class, and I decided to change that for the girls coming after me." Then document the specific moment when a member you thought would drop actually became a leader.
CLIP embedding for image similarity. No facial recognition. FERPA-compliant architecture.
Users: School community (students, staff)
Photos: Item photos submitted by finders and owners
Matching: CLIP cosine similarity threshold > 0.75
Storage: Supabase (free tier); stay within the 1 GB limit
CLIP image matching — extract embeddings for all submitted photos. Match using cosine similarity. Threshold must be tuned: too low → false positives, too high → missed matches (a matching sketch follows this list).
District adoption — pitch to district IT. Must present FERPA compliance documentation, data retention policy, and privacy safeguards. No facial recognition.
Quantified impact — track: items submitted, items matched, match rate, average time-to-match. Before/after comparison with prior year's lost property reports.
No PII — no user accounts with identifying info. Use anonymous nicknames. Items linked only to "finder" and "owner" roles, not individuals.
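A minimal matching sketch, assuming the sentence-transformers CLIP wrapper (model name and helper functions are illustrative, not a prescribed implementation; the 0.75 threshold is the starting point from the spec and must be tuned on real submissions):

```python
# Rank finder-submitted photos against one owner-submitted photo.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # CLIP image encoder

def embed(image_path: str):
    """Return a normalized CLIP embedding for one item photo."""
    return model.encode(Image.open(image_path), convert_to_tensor=True,
                        normalize_embeddings=True)

def find_matches(lost_photo: str, found_photos: list[str],
                 threshold: float = 0.75):
    """Return (photo, score) pairs above the similarity threshold."""
    query = embed(lost_photo)
    results = []
    for path in found_photos:
        score = util.cos_sim(query, embed(path)).item()
        if score > threshold:
            results.append((path, score))
    return sorted(results, key=lambda r: r[1], reverse=True)
```

In practice you would precompute and cache embeddings in Supabase rather than re-encoding every photo per query.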
Start with the specific lost item that mattered most to you — the jacket, the notebook, the phone. Then explain why a database wasn't enough (visual similarity matters). The district pitch story is the leadership anchor: what did you learn about persuading institutions?
Clubs: 15 (starting with your own, then expanding)
Features: Event calendar, attendance, officer elections, announcements, file sharing
Users: Club officers (admin) + members
Adoption beyond your own club — this is non-negotiable. The leadership story is the 14 other clubs. You must conduct user research, demos, and handle objections.
Real usage data — analytics showing attendance rates, event creation, active users per club. Not just installed — actively used.
Officer election module — verifiable, tamper-resistant voting for club leadership transitions (one possible design is sketched after this list). This is the feature that makes it institutionally valuable.
Data portability — clubs can export their data. This is required for school district compliance and builds trust.
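One way to make the election module tamper-evident (an assumption, not the only design that satisfies the milestone): an append-only ledger where each ballot record chains the hash of the previous record, so any retroactive edit or deletion breaks verification. A minimal sketch, assuming anonymous one-time voter tokens:

```python
# Tamper-evident ballot ledger: editing any past record invalidates
# every hash after it, which verify() will catch.
import hashlib
import json

def record_hash(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_ballot(ledger: list[dict], voter_token: str, choice: str) -> None:
    prev = ledger[-1]["hash"] if ledger else "genesis"
    body = {"prev": prev, "voter": voter_token, "choice": choice}
    ledger.append({**body, "hash": record_hash(body)})

def verify(ledger: list[dict]) -> bool:
    prev = "genesis"
    for rec in ledger:
        body = {"prev": rec["prev"], "voter": rec["voter"], "choice": rec["choice"]}
        if rec["prev"] != prev or rec["hash"] != record_hash(body):
            return False
        prev = rec["hash"]
    return True
```

The ledger also exports cleanly as JSON, which fits the data portability requirement above.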
The hardest part wasn't the code — it was convincing 14 other people to change. Start with the most resistant club president. Describe what they said, what you did, what changed. The technical work is table stakes; the adoption challenge is the essay.
Stripe Connect handles split payments between platform and tutors. Essential for legitimacy.
Tutors: HS students with teacher recommendation
Students: K–12, focus on elementary/middle
Subjects: Math, Science, English (expandable)
Revenue: Small platform fee (10%) + Stripe fees
Tutor verification — each tutor requires a teacher recommendation letter (uploaded, reviewed by you). This is the quality signal that separates the platform from random internet strangers.
Real payments — Stripe Connect for tutor payouts (see the payment sketch after this list). Even $10 tutored sessions count. The economic transaction is the proof of genuine value exchange.
Recruitment — recruiting 100 tutors is as much work as building the platform. Plan: school announcements, HS counselor referrals, and flyers at local middle schools.
Outcome tracking — ask tutoring pairs to report grade improvements at end of semester. Document with parent/student permission.
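A sketch of the payment flow, assuming Stripe Connect destination charges (API key and account IDs are placeholders): the student pays the platform, Stripe routes the tutor's share to their connected account, and the platform keeps the 10% cut as an application fee.

```python
# Connect "destination charge": one payment, automatic split.
import stripe

stripe.api_key = "sk_test_..."  # test-mode key placeholder

def charge_session(amount_cents: int, tutor_account_id: str):
    platform_fee = amount_cents // 10  # the 10% platform fee from the spec
    return stripe.PaymentIntent.create(
        amount=amount_cents,
        currency="usd",
        application_fee_amount=platform_fee,   # kept by the platform
        transfer_data={"destination": tutor_account_id},  # tutor's share
    )

# e.g. a $15 session: charge_session(1500, "acct_tutor_placeholder")
```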
"I charged $15/hour to help a 4th grader learn fractions. It was the hardest money I ever earned — because it required actually understanding how she thought, not just showing her the algorithm." The marketplace framing is secondary; the tutoring relationship is primary.
Target: Python, Rust, Node.js, React, Vue. Projects with active maintainers who respond to issues.
Projects: 1–3 major open source repos
PRs: 3 merged, non-trivial
Code review: Document all feedback received
Cost: $0
Non-trivial PRs — "good first issue" labeled bugs that require understanding the codebase. Not: docs fixes, typos, formatting. Must touch core logic.
Code review response — maintainers will request changes. How you respond to feedback is the evidence of professional communication and growth mindset.
Ships in production — PR must be merged AND shipped in a release. Include the release version in your documentation.
Individual contribution — if contributing to a project with other contributors, your specific contribution must be clearly delineated.
Start with a specific code review comment that made you realize you were wrong — and what you learned from it. The best version: "A senior maintainer told me my solution was technically correct but stylistically catastrophic, and they were right." That's intellectual humility, which AOs value more than right answers.
Frontend: React/Vite with dark theme. Backend: Go collects metrics from system APIs. InfluxDB stores the time-series data (the collection loop is sketched below).
Platforms: Linux, macOS, Docker
Features: System stats, calendar, weather, todos, service status
Target users: Homelab enthusiasts with server setups
300 GitHub stars — requires: beautiful UI, comprehensive README with screenshots, installation guide, active maintenance. No abandoned repos.
Organic community discovery — post to r/homelab, not spam. Respond to every issue and PR. Real community engagement required for credibility.
YouTube references — reach out to homelab YouTubers with a clear value proposition. Don't ask for coverage as a favor; show them how it solves their specific use case.
Documentation — full README, architecture diagram, Docker compose examples, and troubleshooting guide. Documentation quality is the differentiator.
"I wanted a dashboard that looked the way my server felt — precise, clean, and exactly right." The combination of engineering and aesthetic sensibility is rare in CS applicants. The essay should be about caring about something most people wouldn't think to care about, and making it beautiful anyway.
Docker sandbox for secure code execution. GitHub Actions for CI/CD pipeline. React dashboard for teachers.
Languages: Python, JavaScript, Java
Test frameworks: pytest, Jest, JUnit
Teachers: 5 (pilot group)
Feedback: Line-level diff + test results
GitHub Classroom integration — teachers must not change their existing workflow. Auto-grading should trigger automatically on student submissions via GitHub Classroom webhooks.
Docker sandbox security — student code must be executed in isolated Docker containers with resource limits (CPU, memory, time); a sandbox sketch follows this list. Security is non-negotiable.
Line-level feedback — not just "test failed." Show the specific line, the expected vs. actual output, and a hint pointing to the bug location.
Plagiarism detection — integrate MOSS or similar for similarity detection across submissions. Flag potential cheating for teacher review.
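A minimal sandbox sketch using the Docker SDK for Python (image name and limit values are illustrative; per-language images would be used in practice). The point is that student code never runs outside a resource-capped, network-less container:

```python
# Run one submission in an isolated, resource-limited container.
import docker

client = docker.from_env()

def run_submission(code_dir: str, timeout_s: int = 10) -> str:
    container = client.containers.run(
        image="python:3.12-slim",            # per-language images in practice
        command=["python", "/work/main.py"],
        volumes={code_dir: {"bind": "/work", "mode": "ro"}},  # read-only code
        mem_limit="128m",                    # memory cap
        nano_cpus=500_000_000,               # 0.5 CPU
        network_disabled=True,               # no network access
        detach=True,
    )
    try:
        container.wait(timeout=timeout_s)    # raises if the time limit expires
        return container.logs().decode()     # stdout/stderr for feedback
    finally:
        container.remove(force=True)         # always kill and clean up
```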
The most powerful version: "I spent 3 hours grading my classmates' code by hand and made three errors. I knew a computer could do it better — and then I made it happen." Start with the personal frustration, end with institutional change. The 5-teacher adoption is the proof of scalability.
No PII stored. All conversation data encrypted at rest. Anonymous by design. PHQ-4 screening integrated as a clinical instrument.
Protocol: PHQ-4 screening instrument
Escalation: Counselor notification with handoff script
Users: Anonymous, school-provided access
Privacy: FERPA compliant, no PII
PHQ-4 validated instrument — use the actual PHQ-4 (4-question anxiety/depression screen) as the clinical backbone; a scoring sketch follows this list. Not a chatbot with personality — a structured clinical tool.
School board approval — submit comprehensive proposal including: privacy architecture, counselor escalation protocol, data retention policy, risk assessment. This is non-negotiable for launch.
Counselor escalation protocol — written script for handoff. Documented cases where escalation was triggered. Counselor sign-off on every escalation.
No PII, no storage of conversations — conversations are ephemeral. Only session outcome (screen result + escalation flag) is stored. This is the privacy architecture.
Regular safety audits — counselor reviews bot outputs monthly for appropriateness. Document audit findings.
Start with the moment you realized a friend was struggling and you didn't know how to help — and then ask: could a machine do what I couldn't? The essay should grapple with the genuine ethical complexity: when should AI step in, and when should it step back? AOs want to see that you thought hard about the hard parts.
Prompt engineering + RAG for AO-style feedback. Optional: fine-tune on a corpus of successful essays and AO evaluations.
Users: HS students writing college essays
Feedback: Structure, voice, specificity, cliché detection, impact
Validation: 20-essay blind study vs. counselors
AO-caliber feedback — feedback must be grounded in actual AO perspectives. Build a RAG knowledge base from: what top AOs say about essays, common mistakes, what "voice" actually means (a retrieval sketch follows this list).
Validation study — 20 essays rated by both AI and a professional college counselor. Document correlation on each dimension. Publish methodology. This is the credibility document.
Usage analytics — track: unique users, essays processed, feedback rounds per essay, user satisfaction rating. Not just installed — actively used.
Acceptance documentation — with permission, document: which schools, what essays received AI feedback, admission outcome. This is the ultimate validation.
The ultimate meta-essay: "I built a machine to evaluate the most human thing a student writes — their college essay — and then I fed my own essay through it." The validation study story is also the research methodology story, which demonstrates exactly the kind of intellectual rigor top schools want to see.