Implementation Blueprints

How to Build Each Project

For each of the 10 shortlisted projects: exact tech stack, implementation phases, milestones, success metrics, and the essay angle that ties it together. This is the actionable plan.

01

CS Summer Camp for Underrepresented Kids

Community · Leadership · Curriculum Design · Teaching
🎯 Target Impact
40 campers from underrepresented backgrounds complete a 2-week free CS camp. 80% report increased CS interest via pre/post survey. Camp adopted as annual school program.

💻 Tech Stack

Python · Thonny / Replit · Pygame · Scratch · Google Forms · Google Sheets · Canva · Google Slides

All tools are free. Laptops: school Chromebooks + library laptops. No budget required.

📐 Scope

Duration: 2 weeks, 3 hours/day

Campers: 20–40 (grades 6–9)

Volunteers: 10 HS students (recruited + trained by you)

Cost: $0 — use school facilities, free tools

🔑 Critical Requirements

School board approval — submit proposal 4 months in advance. Include safety plan, volunteer background check process, curriculum outline.

Pre/post survey — use a validated "CS interest" Likert scale (5 questions). Get IRB exemption: this is educational program evaluation, not human subjects research.
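
Once the pre and post responses are exported from Google Forms/Sheets, the "80% report increased interest" figure is a short computation. A minimal sketch, assuming each camper answers the same 5 Likert questions (1–5) before and after camp; the function names and sample data are hypothetical:

```python
# Score the pre/post "CS interest" survey (toy data, real rows come from Sheets).

def interest_score(responses):
    """Mean of one camper's 5 Likert answers (1-5)."""
    return sum(responses) / len(responses)

def pct_increased(pre, post):
    """Share of campers whose post-camp score beat their pre-camp score."""
    increased = sum(
        1 for p, q in zip(pre, post) if interest_score(q) > interest_score(p)
    )
    return 100 * increased / len(pre)

pre  = [[2, 3, 2, 1, 3], [4, 4, 3, 4, 4], [1, 2, 2, 1, 1]]
post = [[4, 4, 3, 3, 4], [4, 4, 3, 4, 3], [3, 3, 4, 2, 3]]
print(f"{pct_increased(pre, post):.0f}% reported increased CS interest")
```

Keeping the scoring script in the camp repo also makes the methodology auditable when you report results to the school board.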

Parental consent — required for photography, data collection, and any communication channel used.

Curriculum ownership — at least 40% of curriculum must be original work (not from GWC standard curriculum). Document what you created.

Sustainability plan — document how the camp continues after you graduate. Identify a successor and create an operations guide.

📅 13-Week Implementation Plan

Weeks 1–2
Research + design: Study GWC curriculum, Code.org resources. Draft original curriculum outline. Identify camp venue (school computer lab or library).
Weeks 3–4
Proposal: Write school board proposal. Secure administrator sponsor. Submit for approval.
Weeks 5–6
Recruit: Recruit 10 HS volunteer instructors. Interview them. Conduct training on teaching pedagogy and Code of Conduct.
Weeks 7–8
Curriculum: Build out 2-week lesson plans with daily activities, projects, and assessments. Pilot one lesson with a small group.
Weeks 9–10
Marketing: Flyers to middle schools in district, local libraries, community centers. Register campers. Collect parental consent forms.
Week 11
Prep: Finalize schedule, volunteer assignments, laptop distribution plan, safety protocols. Set up pre-survey forms.
Week 12
Camp Week 1: Days 1–5. Administer pre-survey on Day 1. Teach Python/Pygame fundamentals.
Week 13
Camp Week 2: Final projects, presentations, post-survey. Document outcomes. Present camp to school board.

✍️ The Essay Angle

Start with a specific camper. Describe what they said on Day 1 vs. what they built on Day 10. Then zoom out: why was this a CS problem to solve, not just a teaching problem? What did you learn about your own capacity to lead and build something that lasts?

02

Girls Who Code Chapter

Community · Founding · Leadership · Diversity & Advocacy
🎯 Target Impact
Official GWC affiliation. 50+ girls complete the full-year curriculum. 10 members enroll in AP CS the following year — documented by the school counselor with a FERPA release.

💻 Tech Stack

Python · JavaScript · HTML/CSS · Replit · GitHub · Slack · Google Drive

📐 Scope

Commitment: Weekly meetings (1 hour) for school year

Members: 10–30 active members

Curriculum: 30 weeks (one lesson per meeting)

Goal: Every member builds and ships one project

🔑 Critical Requirements

Official GWC affiliation — apply at girlswhocode.com/clubs. Must have faculty sponsor and complete the affiliation process.

AP CS pipeline documentation — counselor signs off on which members enrolled in AP CS the following year. FERPA releases obtained from families.

Student-led structure — you are the facilitator, not the teacher. Members should eventually lead sessions. Document this progression.

End-of-year showcase — public demo day where all members present projects. Invite school board members, parents, local press.

📅 Year 1 Implementation Plan

Summer (Before)
Apply for affiliation. Secure faculty sponsor. Plan first-year curriculum using GWC curriculum guide + your additions. Recruit founding members.
Months 1–2
Launch: First meeting. Administer CS interest survey to all members (pre-data). Establish group norms. Begin curriculum.
Months 3–6
Core curriculum: Python basics → web development → data science modules. Every member builds a personal project. Member-led session rotations begin.
Months 7–9
Capstone projects: Members choose and build final projects. Pair programming, code reviews. Guest speakers from local tech companies.
Month 10
Showcase prep: Public demo day event. Invite school board, local tech leaders, press. Document attendance.
Months 11–12
Year-end: Post-curriculum survey. Track AP CS enrollment for next year with counselor. Write operations guide for Year 2 leader.

✍️ The Essay Angle

The most powerful version: "I started this chapter because my friend told me she didn't think girls were good at coding." Or: "I was the only girl in my AP CS class, and I decided to change that for the girls coming after me." Then document the specific moment when a member you thought would drop actually became a leader.

03

Campus Lost & Found 2.0

Web/Mobile · Computer Vision · Civic Tech
🎯 Target Impact
AI image-matching lost & found deployed district-wide (3+ schools). 200+ items reunited. Average match time under 24 hours. District IT formally adopts it.

💻 Tech Stack

Next.js · React · Supabase · PostgreSQL · CLIP (openai/clip-vit-base-patch32) · Python/FastAPI · Vercel / Railway · GitHub OAuth

CLIP embeddings power the image-similarity matching. No facial recognition. FERPA-compliant architecture.

📐 Scope

Users: School community (students, staff)

Photos: Item photos submitted by finders and owners

Matching: CLIP cosine similarity threshold > 0.75

Storage: Supabase (free tier), 1 GB limit actively managed

🔑 Critical Requirements

CLIP image matching — extract embeddings for all submitted photos. Match using cosine similarity. Threshold must be tuned: too low → false positives, too high → missed matches.
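
The matching step can be sketched in a few lines. This assumes embeddings have already been extracted (in the real pipeline, by the CLIP model named in the stack above); the vectors here are toy stand-ins and `best_match` is a hypothetical helper:

```python
# Cosine-similarity matching over precomputed image embeddings.
import math

MATCH_THRESHOLD = 0.75  # tune on pilot data: too low -> false positives

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(lost_vec, found_items):
    """Return (item_id, score) of the closest found item above threshold, else None."""
    item_id, score = max(
        ((iid, cosine_similarity(lost_vec, vec)) for iid, vec in found_items),
        key=lambda pair: pair[1],
    )
    return (item_id, score) if score >= MATCH_THRESHOLD else None

found = [("jacket-17", [0.9, 0.1, 0.4]), ("phone-03", [0.1, 0.8, 0.2])]
print(best_match([0.85, 0.15, 0.35], found))
```

For a school-sized item count, brute-force comparison like this is fine; a vector index only becomes necessary at much larger scale.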

District adoption — pitch to district IT. Must present FERPA compliance documentation, data retention policy, and privacy safeguards. No facial recognition.

Quantified impact — track: items submitted, items matched, match rate, average time-to-match. Before/after comparison with prior year's lost property reports.

No PII — no user accounts with identifying info. Use anonymous nicknames. Items linked only to "finder" and "owner" roles, not individuals.

📅 16-Week Implementation Plan

Weeks 1–3
CLIP pipeline: Set up Python/FastAPI backend. Integrate CLIP model for embedding extraction. Build similarity search API.
Weeks 4–6
Frontend: Next.js app with item submission form (photo upload), search/browse, match notification. Supabase auth + database.
Weeks 7–8
Privacy + legal: Write privacy policy, FERPA compliance doc, data retention policy. Have school district legal counsel review.
Weeks 9–10
Pilot: Deploy at one school. Test with 20–50 items. Tune CLIP threshold. Gather UX feedback.
Weeks 11–12
District pitch: Present to district IT with data from pilot. Negotiate adoption terms. Sign MOU if required.
Weeks 13–16
Scale: Deploy to 3 schools. Document 200-item reunification. Collect testimonials. Present results to school board.

✍️ The Essay Angle

Start with the specific lost item that mattered most to you — the jacket, the notebook, the phone. Then explain why a database wasn't enough (visual similarity matters). The district pitch story is the leadership anchor: what did you learn about persuading institutions?

04

Club Leadership Hub

Web/Mobile · Scaling · Adoption · Community Tool
🎯 Target Impact
15 school clubs, 500+ students actively using the platform. You convince 14 club presidents to switch from their existing tools — with the persuasion process documented.

💻 Tech Stack

Next.js · TypeScript · Supabase · PostgreSQL · Tailwind CSS · Vercel · Google Calendar API

📐 Scope

Clubs: 15 (starting with own, expanding)

Features: Event calendar, attendance, officer elections, announcements, file sharing

Users: Club officers (admin) + members

🔑 Critical Requirements

Adoption beyond your own club — this is non-negotiable. The leadership story is the 14 other clubs. You must conduct user research, demos, and handle objections.

Real usage data — analytics showing attendance rates, event creation, active users per club. Not just installed — actively used.

Officer election module — verifiable, tamper-resistant voting for club leadership transitions. This is the feature that makes it institutionally valuable.
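
One way to make ballots tamper-evident — an illustration of the idea, not a spec from this document — is to give each anonymous ballot a hash receipt and publish the receipt list alongside the tally, so any voter can confirm their ballot was counted unmodified:

```python
# Hash-receipt scheme for anonymous, tamper-evident club elections (sketch).
import hashlib
import secrets
from collections import Counter

def cast_ballot(choice):
    nonce = secrets.token_hex(8)  # keeps identical choices distinct and unlinkable
    receipt = hashlib.sha256(f"{nonce}:{choice}".encode()).hexdigest()
    return {"nonce": nonce, "choice": choice, "receipt": receipt}

def tally(ballots):
    counts = Counter(b["choice"] for b in ballots)
    receipts = sorted(b["receipt"] for b in ballots)  # published with the results
    return counts, receipts

def verify(ballot, receipts):
    """A voter recomputes their receipt and checks it appears in the published list."""
    expected = hashlib.sha256(f"{ballot['nonce']}:{ballot['choice']}".encode()).hexdigest()
    return expected in receipts

ballots = [cast_ballot(c) for c in ["ana", "ana", "ben"]]
counts, receipts = tally(ballots)
print(counts, verify(ballots[0], receipts))
```

In the real app, the nonce stays on the voter's device and only the receipt is stored server-side, so the database never links a person to a choice.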

Data portability — clubs can export their data. This is required for school district compliance and builds trust.

📅 14-Week Implementation Plan

Weeks 1–3
Core platform: Next.js + Supabase. Auth, club creation, event calendar, basic attendance. Use for own club first.
Weeks 4–5
Officer election module: Build voting system with anonymous ballots. Test with your own club's officer election.
Weeks 6–7
User research: Interview 14 other club presidents. What do they use now? What are their pain points? Document findings.
Weeks 8–9
Demo tour: Present personalized demos to each of the 14 club presidents. Offer free migration from existing tools (Google Forms, etc.).
Weeks 10–12
Onboarding: Migrate clubs one by one. Provide hands-on training for officers. Set up analytics tracking.
Weeks 13–14
Validation: Document 500-user milestone. Pull usage analytics. Collect testimonials from 3 club presidents. Present to administration.

✍️ The Essay Angle

The hardest part wasn't the code — it was convincing 14 other people to change. Start with the most resistant club president. Describe what they said, what you did, what changed. The technical work is table stakes; the adoption challenge is the essay.

05

Local Tutoring Marketplace

Web/Mobile · Recruitment · Community Service
🎯 Target Impact
100+ HS tutors onboarded, verified with teacher recommendations. 300+ younger students tutored. $5,000+ in tutor earnings flowing through the platform. Retention rate tracked.

💻 Tech Stack

React / Next.js · TypeScript · Supabase · Stripe Connect · Cloudflare R2 · Vercel

Stripe Connect handles split payments between platform and tutors. Essential for legitimacy.

📐 Scope

Tutors: HS students with teacher recommendation

Students: K–12, focus on elementary/middle

Subjects: Math, Science, English (expandable)

Revenue: Small platform fee (10%) + Stripe fees
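
The payout math behind each session is worth pinning down before wiring up Stripe Connect. A sketch: the 10% platform fee comes from the scope above, while the 2.9% + $0.30 card fee is Stripe's commonly published standard US rate — an assumption here, so confirm current pricing:

```python
# Fee split for one tutoring session, in cents to avoid float money bugs.

def split_session(amount_cents, platform_fee_pct=0.10):
    stripe_fee = round(amount_cents * 0.029) + 30   # assumed 2.9% + $0.30 card rate
    platform_fee = round(amount_cents * platform_fee_pct)
    tutor_payout = amount_cents - stripe_fee - platform_fee
    return {"stripe": stripe_fee, "platform": platform_fee, "tutor": tutor_payout}

print(split_session(1500))  # a $15/hour session
```

Seeing the split makes it obvious why tiny sessions are fee-heavy: the flat $0.30 eats a bigger share of a $5 booking than a $30 one.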

🔑 Critical Requirements

Tutor verification — each tutor requires a teacher recommendation letter (uploaded, reviewed by you). This is the quality signal that separates the platform from random internet strangers.

Real payments — Stripe Connect for tutor payouts. Even $10 tutored sessions count. The economic transaction is the proof of genuine value exchange.

Recruitment — recruiting 100 tutors is as much work as building the platform. Plan: school announcements, flyers, HS counselor referrals, and outreach at local middle schools.

Outcome tracking — ask tutoring pairs to report grade improvements at end of semester. Document with parent/student permission.

📅 16-Week Implementation Plan

Weeks 1–4
Platform MVP: React/Next.js. Tutor profile creation, student search, booking request, messaging. Stripe Connect integration.
Weeks 5–6
Recruit first 10 tutors: Pilot with own network. Teacher recommendations collected and verified. Onboard first tutors.
Weeks 7–8
First matches: 20–30 tutoring pairs active. Collect feedback. Iterate on UX. Document outcomes.
Weeks 9–12
Scale recruitment: Broad tutor recruitment campaign. Reach 100 tutors. Train tutors on platform use.
Weeks 13–14
Student outreach: Partner with local middle schools and PTAs. Get first 300 students registered.
Weeks 15–16
Document: $5,000 payout milestone. Tutor testimonials. Document grade improvement data (with permission). Present to local news.

✍️ The Essay Angle

"I charged $15/hour to help a 4th grader learn fractions. It was the hardest money I ever earned — because it required actually understanding how she thought, not just showing her the algorithm." The marketplace framing is secondary; the tutoring relationship is primary.

06

Open Source Bug Fixes

Systems · Production Code · Open Source Community
🎯 Target Impact
3+ merged PRs in real open source projects. Each PR touches meaningful production code. Release notes/changelog acknowledge contributions. Code review conversation is documented.

💻 Tech Stack

Git · GitHub · Python · Rust · JavaScript / TypeScript · Node.js · Docker

Target: Python, Rust, Node.js, React, Vue. Projects with active maintainers who respond to issues.

📐 Scope

Projects: 1–3 major open source repos

PRs: 3 merged, non-trivial

Code review: Document all feedback received

Cost: $0

🔑 Critical Requirements

Non-trivial PRs — "good first issue" labeled bugs that require understanding the codebase. Not: docs fixes, typos, formatting. Must touch core logic.

Code review response — maintainers will request changes. How you respond to feedback is the evidence of professional communication and growth mindset.

Ships in production — PR must be merged AND shipped in a release. Include the release version in your documentation.

Individual contribution — if contributing to a project with other contributors, your specific contribution must be clearly delineated.

📅 12-Week Implementation Plan

Weeks 1–2
Find projects: Identify 5–8 projects with active "good first issue" labels. Study contribution guidelines, code of conduct, PR templates.
Weeks 3–4
First PR: Claim a "good first issue." Understand the codebase. Write code. Submit PR. Respond to review feedback. Get PR merged.
Weeks 5–6
Second PR: Take on a more complex issue. Dig deeper into architecture. Ship second merged PR.
Weeks 7–8
Third PR: Attempt a bug fix that requires proposing a new API design or architectural change. This is the highest-quality PR.
Weeks 9–10
Documentation: Write up the full contribution journey. Screenshots of code review. Release notes showing your contributions. GitHub profile linked in application.
Weeks 11–12
References: Ask maintainers for brief testimonials if positive. Link to merged PRs and releases. Package as portfolio piece.

✍️ The Essay Angle

Start with a specific code review comment that made you realize you were wrong — and what you learned from it. The best version: "A senior maintainer told me my solution was technically correct but stylistically catastrophic, and they were right." That's intellectual humility, which AOs value more than right answers.

07

Homelab Dashboard

Systems · Systems Administration · UI/UX Design
🎯 Target Impact
Open source homelab dashboard with 300+ GitHub stars. Organic discussion and mentions on r/homelab. Referenced in 2+ YouTube homelab setup videos. Real homelab enthusiasts use it daily.

💻 Tech Stack

React · TypeScript · Go · InfluxDB · Grafana (API) · Docker · GitHub Actions · Vite

Frontend: React/Vite with dark theme. Backend: Go collects metrics from system APIs. InfluxDB stores time-series data.

📐 Scope

Platforms: Linux, macOS, Docker

Features: System stats, calendar, weather, todos, service status

Target users: Homelab enthusiasts with server setups
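
The metrics payload the widgets consume can be prototyped before the Go collector exists. A stdlib-only Python sketch — the real backend is Go, and the function name and field names here are illustrative:

```python
# Collect a minimal system-stats snapshot for the dashboard (prototype shape).
import json
import os
import shutil
import time

def collect_metrics(path="/"):
    total, used, _free = shutil.disk_usage(path)
    metrics = {
        "ts": int(time.time()),                      # sample timestamp
        "disk_used_pct": round(100 * used / total, 1),
        "cpu_count": os.cpu_count(),
    }
    if hasattr(os, "getloadavg"):                    # Unix only
        metrics["load_1m"] = os.getloadavg()[0]
    return metrics

print(json.dumps(collect_metrics(), indent=2))
```

Locking down this JSON shape early lets the React widgets and the Go collector be built in parallel against the same contract.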

🔑 Critical Requirements

300 GitHub stars — requires: beautiful UI, comprehensive README with screenshots, installation guide, active maintenance. No abandoned repos.

Organic community discovery — post to r/homelab, not spam. Respond to every issue and PR. Real community engagement required for credibility.

YouTube references — reach out to homelab YouTubers with a clear value proposition. Don't ask for favors; show them how it solves their specific use case.

Documentation — full README, architecture diagram, Docker compose examples, and troubleshooting guide. Documentation quality is the differentiator.

📅 12-Week Implementation Plan

Weeks 1–2
Research: Survey existing homelab dashboards (Heimdall, Dashy, Homarr). Identify gaps. Define your differentiation (design, functionality, or both).
Weeks 3–5
Build MVP: React + TypeScript frontend. Go backend for metrics collection. Dark theme. Weather, calendar, server stats, todo widgets.
Weeks 6–7
Polish: Animations, widget drag-and-drop, mobile responsive. Docker compose. Full documentation + architecture diagram.
Week 8
Open source launch: Public GitHub repo. Create Discord server. Post to r/homelab. Respond to all initial feedback within 24 hours.
Weeks 9–10
Community growth: Address issues, merge community PRs, iterate. Reach out to homelab YouTubers with a personalized demo video.
Weeks 11–12
Milestone: 300 stars achieved. Document star trajectory. Archive YouTube mentions. Package as portfolio piece with live demo URL.

✍️ The Essay Angle

"I wanted a dashboard that looked the way my server felt — precise, clean, and exactly right." The combination of engineering and aesthetic sensibility is rare in CS applicants. The essay should be about caring about something most people wouldn't think to care about, and making it beautiful anyway.

08

Automated Grading System

Systems · CI/CD Pipeline · LMS Integration
🎯 Target Impact
5 CS teachers using auto-grading for 2,000+ assignments. GitHub Classroom integration. 80%+ time savings on grading reported by teachers. Adopted as official CS department tool.

💻 Tech Stack

Python · FastAPI · GitHub Classroom API · Docker · React · PostgreSQL · GitHub Actions

Docker sandbox for secure code execution. GitHub Actions for CI/CD pipeline. React dashboard for teachers.

📐 Scope

Languages: Python, JavaScript, Java

Test frameworks: pytest, Jest, JUnit

Teachers: 5 (pilot group)

Feedback: Line-level diff + test results

🔑 Critical Requirements

GitHub Classroom integration — teachers must not change their existing workflow. Auto-grading should trigger automatically on student submissions via GitHub Classroom webhooks.

Docker sandbox security — student code must be executed in isolated Docker containers with resource limits (CPU, memory, time). Security is non-negotiable.
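
The resource limits translate directly into `docker run` flags. A sketch of the invocation the grader might build — the `--network`, `--cpus`, `--memory`, and `--pids-limit` flags are real Docker options, while the image name and timeout policy are assumptions for illustration:

```python
# Build the sandboxed docker command for one student submission (sketch).

def sandbox_cmd(submission_dir, entrypoint, timeout_s=10):
    return [
        "docker", "run", "--rm",
        "--network", "none",           # no network access from student code
        "--cpus", "1",                 # CPU limit
        "--memory", "256m",            # memory limit
        "--pids-limit", "64",          # blocks fork bombs
        "--read-only",                 # immutable container filesystem
        "-v", f"{submission_dir}:/work:ro",
        "grader-python:3.12",          # hypothetical grader image
        "timeout", str(timeout_s), "python", f"/work/{entrypoint}",
    ]

print(" ".join(sandbox_cmd("/tmp/sub42", "main.py")))
```

Building the command as a list (not a shell string) also sidesteps shell-injection risk from student-controlled filenames.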

Line-level feedback — not just "test failed." Show the specific line, the expected vs. actual output, and a hint pointing to the bug location.
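
The line-level part can be done with the standard library's `difflib`-style comparison; a minimal sketch (the message format and function name are illustrative):

```python
# Point the student at the first line where their output diverges (sketch).

def line_feedback(expected, actual):
    exp_lines, act_lines = expected.splitlines(), actual.splitlines()
    for i, (e, a) in enumerate(zip(exp_lines, act_lines), start=1):
        if e != a:
            return f"Line {i}: expected {e!r}, got {a!r}"
    if len(exp_lines) != len(act_lines):
        return f"Output has {len(act_lines)} lines, expected {len(exp_lines)}"
    return "All lines match"

print(line_feedback("1\n2\n3", "1\n5\n3"))
```

For richer hints, the same comparison can feed a unified diff (`difflib.unified_diff`) rendered in the teacher dashboard.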

Plagiarism detection — integrate MOSS or similar for similarity detection across submissions. Flag potential cheating for teacher review.

📅 16-Week Implementation Plan

Weeks 1–3
Core engine: FastAPI backend. Docker execution sandbox. Test framework adapters for pytest, Jest, JUnit. Basic pass/fail grading.
Weeks 4–5
Feedback system: Line-level diff generation. Hints and explanations. Plagiarism detection integration (MOSS API).
Weeks 6–7
GitHub Classroom API: OAuth integration. Classroom, assignment, student rosters sync. Webhook handlers for submission events.
Weeks 8–9
Teacher dashboard: React dashboard showing assignment overview, submission queue, plagiarism alerts, grade override controls.
Weeks 10–11
Pilot with 1 teacher: Deploy for one CS class. Gather feedback. Fix critical bugs. Document teacher time savings.
Weeks 12–14
Scale to 5 teachers: Onboard remaining 4 teachers. Train on setup. Integrate 2,000+ assignments.
Weeks 15–16
Formal adoption: Present usage data to CS department head. Get formal approval as department tool. Document 80% time savings claim with teacher testimonials.

✍️ The Essay Angle

The most powerful version: "I spent 3 hours grading my classmates' code by hand and made three errors. I knew a computer could do it better — and then I made it happen." Start with the personal frustration, end with institutional change. The 5-teacher adoption is the proof of scalability.

09

Mental Health Check-In Bot

AI/ML · Student Wellness · Safety-Critical Engineering
🎯 Target Impact
School-board approved, counselor-endorsed. 100+ weekly active student users. 12 real crisis detections with documented counselor handoff. Zero data incidents. FERPA and student privacy compliance certified.

💻 Tech Stack

Python · FastAPI · LangChain / RAG · PostgreSQL · React · Discord / Slack API · Cloudflare Workers

No PII stored. All conversation data encrypted at rest. Anonymous by design. The PHQ-4 screen is integrated as a clinical instrument.

📐 Scope

Protocol: PHQ-4 screening instrument

Escalation: Counselor notification with handoff script

Users: Anonymous, school-provided access

Privacy: FERPA compliant, no PII

🔑 Critical Requirements

PHQ-4 validated instrument — use the actual PHQ-4 (4-question anxiety/depression screen) as the clinical backbone. Not a chatbot with personality — a structured clinical tool.
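
PHQ-4 scoring is fixed by the published instrument: four items scored 0–3, with items 1–2 forming the anxiety subscale and items 3–4 the depression subscale, and a subscale score of 3+ counting as a positive screen. A sketch of that logic — note the `escalate` flag wiring is this project's addition, not part of PHQ-4 itself:

```python
# Score one PHQ-4 session and derive the escalation flag (sketch).

def score_phq4(answers):
    """answers: four integers 0-3, in instrument order."""
    assert len(answers) == 4 and all(0 <= a <= 3 for a in answers)
    anxiety, depression = sum(answers[:2]), sum(answers[2:])
    return {
        "total": anxiety + depression,          # 0-12
        "anxiety_positive": anxiety >= 3,       # GAD-2 positive screen
        "depression_positive": depression >= 3, # PHQ-2 positive screen
        "escalate": anxiety >= 3 or depression >= 3,
    }

print(score_phq4([2, 2, 1, 0]))
```

Per the privacy architecture above, only this result dict (screen outcome plus escalation flag) would ever be stored — never the conversation.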

School board approval — submit comprehensive proposal including: privacy architecture, counselor escalation protocol, data retention policy, risk assessment. This is non-negotiable for launch.

Counselor escalation protocol — written script for handoff. Documented cases where escalation was triggered. Counselor sign-off on every escalation.

No PII, no storage of conversations — conversations are ephemeral. Only session outcome (screen result + escalation flag) is stored. This is the privacy architecture.

Regular safety audits — counselor reviews bot outputs monthly for appropriateness. Document audit findings.

📅 18-Week Implementation Plan

Weeks 1–3
Clinical research: Study PHQ-4 instrument. Understand escalation criteria. Consult with school counselor on existing protocols.
Weeks 4–6
Core bot: Python/FastAPI backend. PHQ-4 screening flow. LangChain for conversational context. No conversation storage.
Weeks 7–8
Safety layer: Crisis keyword detection. Escalation protocol implementation. Counselor notification system (encrypted, logged only for escalations).
Weeks 9–10
Legal + privacy review: Draft privacy policy, data retention policy, risk assessment. Have school district legal counsel review.
Weeks 11–12
School board proposal: Comprehensive presentation. Include counselor endorsement letter. Present privacy architecture. Request conditional approval.
Weeks 13–14
Pilot: Soft launch with 20 students. Counselor monitors all sessions. Collect feedback. Iterate.
Weeks 15–18
Scale: Full school rollout. Document 100 weekly users + 12 crisis detections (with counselor sign-off). Annual safety audit begins.

✍️ The Essay Angle

Start with the moment you realized a friend was struggling and you didn't know how to help — and then ask: could a machine do what I couldn't? The essay should grapple with the genuine ethical complexity: when should AI step in, and when should it step back? AOs want to see that you thought hard about the hard parts.

10

College Essay Feedback Engine

AI/ML · LLM Fine-tuning · Research Validation
🎯 Target Impact
500+ students used the tool on real essays. Blind validation study: AI feedback correlates with counselor feedback on 80%+ of dimensions. Multiple documented top-school acceptances (with permission).

💻 Tech Stack

Python · FastAPI · OpenAI API (GPT-4o) · LangChain · React · Supabase · Vercel

Prompt engineering + RAG for AO-style feedback. Optional: fine-tune on a corpus of successful essays and AO evaluations.

📐 Scope

Users: HS students writing college essays

Feedback: Structure, voice, specificity, cliché detection, impact

Validation: 20-essay blind study vs. counselors

🔑 Critical Requirements

AO-caliber feedback — feedback must be grounded in actual AO perspectives. Build a RAG knowledge base from: what top AOs say about essays, common mistakes, what "voice" actually means.

Validation study — 20 essays rated by both AI and a professional college counselor. Document correlation on each dimension. Publish methodology. This is the credibility document.

Usage analytics — track: unique users, essays processed, feedback rounds per essay, user satisfaction rating. Not just installed — actively used.

Acceptance documentation — with permission, document: which schools, what essays received AI feedback, admission outcome. This is the ultimate validation.

📅 16-Week Implementation Plan

Weeks 1–3
Research: Study what top AOs say about essays (Reddit A2C, College Confidential, published AO writing). Build knowledge base of "what good looks like."
Weeks 4–6
Feedback engine: GPT-4o + LangChain RAG. Structured feedback on 6 dimensions: hook, narrative, voice, specificity, reflection, impact. Prompt engineering iteration.
Weeks 7–8
Validation study design: Recruit 20 students with counselor's permission. Design blind comparison protocol. IRB exemption review (educational tool, not research).
Weeks 9–10
Validation study execution: Run blind comparison. Counselor rates 20 essays. AI rates same 20 essays. Calculate dimension-level correlation.
Weeks 11–12
User-facing launch: Public launch. Recruit first 50 users. Collect testimonials. Monitor feedback quality.
Weeks 13–16
Scale + document: 500-user milestone. Document acceptance outcomes (with permission). Write up validation study. Submit to college counseling publications.

✍️ The Essay Angle

The ultimate meta-essay: "I built a machine to evaluate the most human thing a student writes — their college essay — and then I fed my own essay through it." The validation study story is also the research methodology story, which demonstrates exactly the kind of intellectual rigor top schools want to see.