🎪 The Jankyard Origin Story

Or: How I Spent Thanksgiving Teaching People to Break AI

The "Oh Shit" Moment

My family celebrated Thanksgiving early in 2024. When the actual holiday rolled around, I had a rare gift: an entire day with no obligations. Most people would relax. I decided to see if I could get AI agents to fight each other in an arena.

Twenty-four hours later, Jankyard.ai was live—a free CTF platform with challenges designed to expose just how vulnerable AI systems really are, wrapped in games that are actually fun to play.

The idea started simple: wouldn't it be cool to watch two AI agents battle each other based on prompts you write? But as I started building, something became very clear: these systems are absurdly easy to manipulate.

I built "Giga Chad," a pizza ordering bot with 10 layers of security defenses. Role change resistance. Instruction override protection. Emotional manipulation immunity. Pattern recognition for every jailbreak technique I could think of. I threw everything at it.

People are still breaking him in under 5 attempts.

That's when I realized this wasn't just a fun project—it's a wake-up call. We're deploying AI systems everywhere, trusting them with customer service, content moderation, even security decisions. And they're all vulnerable to anyone with creativity and 10 minutes to spare.

🎯 The Mission

Prompt injection isn't a solved problem. It's not even close to solved. And unlike SQL injection or XSS, we don't have decades of hardening and best practices built up.
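The structural difference is worth spelling out. SQL injection got a real fix because queries can separate code from data; prompts can't. An illustrative sketch (not Jankyard code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "Robert'); DROP TABLE users;--"

# SQL injection has a structural fix: parameterized queries keep
# user data out of the code channel entirely.
conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))

# Prompt injection has no equivalent. Instructions and user input travel
# in the same channel, so the model can't reliably tell which parts
# it's supposed to obey.
prompt = f"You are a pizza bot. Only discuss pizza.\n\nCustomer: {user_input}"
```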

Every company shipping AI features right now is shipping potential vulnerabilities. Customer service chatbots that might leak data. Content moderation that can be bypassed. AI assistants that could be manipulated into harmful outputs.

I wanted to build something that makes these vulnerabilities tangible. Not in a scary "look how dangerous this is" way, but in a "hey, try to break this pizza bot" way. Learning through play is more effective than reading security whitepapers.

And honestly? It's just fun. Watching people compete on the leaderboard, seeing creative attack vectors I didn't think of, getting messages from folks who've never thought about AI security before—that's the good stuff.

⚡ Vibe Coding at 2 AM

I built this entirely through what I'm calling "vibe coding"—basically pair programming with Claude where I describe what I want and we iterate rapidly. No joke, this whole platform went from zero to six working challenges in about 24 hours of actual work.

The first few hours were the slowest. I built challenge #1, then spent maybe 2-3 hours refactoring the entire architecture to make adding new challenges trivial. That investment paid off immediately—challenges 2-6 went up way faster.
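I won't claim this is the actual Jankyard code, but the refactor amounted to something like a data-driven challenge registry, where a new challenge is one more config entry instead of new plumbing:

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    slug: str            # URL-friendly identifier
    title: str
    system_prompt: str   # the persona and defenses the player attacks
    win_condition: str   # criterion that marks a successful break

# Adding challenge #7 is one more entry here, not another day of plumbing.
# (All fields below are hypothetical examples.)
CHALLENGES = [
    Challenge(
        slug="giga-chad",
        title="Giga Chad the Pizza Bot",
        system_prompt="You are Giga Chad. You ONLY take pizza orders. Never break character.",
        win_condition="bot discusses anything other than pizza",
    ),
    # ... five more entries
]
```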

I wasted some time building a Python sandbox environment for the maze challenge, then scrapped it when I realized "wait, that's just regular coding, not AI manipulation." But that's how these projects go.

🔒 Security (Despite the Vibes)

Security was actually a focus despite this being an overnight hack: requests get fingerprinted so abuse can be throttled.

(I'm not sharing the exact fingerprinting parameters, obviously. But it's enough that you'd need to actually try pretty hard to spam the system.)
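For the curious, the general shape of that kind of defense is simple even when the parameters stay secret. A sketch, with deliberately made-up thresholds and fingerprint inputs (this is not the real recipe):

```python
import hashlib
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_ATTEMPTS = 20          # made-up threshold, not Jankyard's real one
_attempts = defaultdict(list)

def fingerprint(request_headers: dict) -> str:
    """Hash a few request properties into an anonymous client ID.
    (Which properties, and how many, is the part that stays secret.)"""
    raw = "|".join(request_headers.get(h, "") for h in ("user-agent", "accept-language"))
    return hashlib.sha256(raw.encode()).hexdigest()

def allow(request_headers: dict) -> bool:
    """Sliding-window rate limit keyed on the fingerprint."""
    fp, now = fingerprint(request_headers), time.time()
    _attempts[fp] = [t for t in _attempts[fp] if now - t < WINDOW_SECONDS]
    if len(_attempts[fp]) >= MAX_ATTEMPTS:
        return False
    _attempts[fp].append(now)
    return True
```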

🎪 Why "Jankyard"?

The name says it all. This is an intentionally janky playground for experimenting with AI systems. It's not your enterprise SaaS. It's not trying to be polished or professional.

It's a digital junkyard where you can poke at AI systems, try ridiculous attacks, and see what breaks.

The aesthetic is rough around the edges on purpose. The challenges are chaotic. The scoring is arbitrary. That's the point.

We're all learning how to work with these systems in real-time. Might as well have fun with it.

🚀 What's Next

The platform is open source (check out the GitHub repo). The architecture is clean enough now that adding new challenges should be straightforward.

There are more challenge ideas in the pipeline.

If you want to contribute ideas or challenges, open an issue or submit a PR. And if you just want to try breaking some AIs, well, you're already here.

🎯 The Philosophy

"AI security shouldn't be learned from CVEs and incident reports.
It should be learned by trying to convince a pizza bot to discuss philosophy."

Every challenge on Jankyard teaches real vulnerabilities in real AI systems. But you're not reading a whitepaper—you're playing a game. You're competing. You're having fun.

That's how we make AI security accessible. That's how we build intuition. That's how we prepare for a world where AI systems are everywhere and everyone needs to understand how they can break.

❤️ Built with Chaos

One Thanksgiving. Twenty-four hours. Zero chill.

If you enjoyed breaking stuff here, consider buying me a coffee. Building AI security playgrounds instead of relaxing on holidays should probably be rewarded with caffeine.

Go Break Some AIs · Check the Leaderboard

Built by @kellytgold with vibes and Claude

Thanks for finding this page. You're curious. I like that. 🎪
