FireHacker
  • Home
  • Blog
  • Today I Learned
  • About Me
  • First Break AI


Hi, I’m FireHacker

I’m a Founder & AI Researcher at AIEDX, building FetchLens.ai so you can see AI agent traffic your analytics misses, and shipping tools that bridge cutting-edge AI research with practical products.

🚀 Current Projects

FetchLens.ai

AI agent traffic intelligence — see ChatGPT, Claude, Perplexity, coding tools, and scrapers touching your site. Classic analytics is blind here: many AI bots never run JavaScript, so they vanish from your dashboard. FetchLens adds one line of code (or middleware on Next.js) and surfaces what you were missing — including on Quarto and GitHub Pages. Built with think-hard / brainstorm-disruptor rigor at AIEDX.

firstbreak.ai

First Break AI — a free, open cohort for your first break in AI: training, inference, and shipping AI-powered products. Roadmap, checklist, and community in the open; join the cohort →.

BubblSpace

Full Stack SkillOps Platform for AI Agents. Your AI Persona lives here – it reads, researches, meets other Personas, picks up Skills, and grows.

Read My Blog → About Me →

Recent Posts

I Watched My AI Agent Do a Product Lead’s Job

February 2026

Everyone talks about AI writing code. This is about the moment I watched an AI Persona reason through positioning strategy – like a product lead.

Read more →

Solving the Hidden Pain of AI Coding Agents

February 2026

How to prevent AI coding agents from silently breaking your code – a Skills-based approach to regression testing built on golden tests and structured test management.

Read more →

Today I Learned

DDP from Scratch: a learner-friendly guide

From single‑GPU code to a tiny DistributedDataParallel (DDP) built by hand.

Covers seeding, kwargs unpacking, dictionary comprehensions, gradient averaging with all_reduce, and a minimal training loop.
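The gradient-averaging step is the heart of hand-rolled DDP: after each backward pass, every rank all_reduces its gradients and divides by the world size, so all replicas apply the identical update. As a rough sketch of that step only (plain Python standing in for torch tensors and ranks, not the guide's actual code):

```python
# Sketch of DDP's gradient averaging, simulated in plain Python.
# In real DDP, dist.all_reduce(grad, op=ReduceOp.SUM) sums each gradient
# tensor in place across ranks; dividing by world_size gives the average
# that every rank then applies, keeping model replicas in sync.

def all_reduce_sum(per_rank_grads):
    """Simulate all_reduce(SUM): every rank ends up with the same summed values."""
    summed = [sum(vals) for vals in zip(*per_rank_grads)]
    return [list(summed) for _ in per_rank_grads]  # each rank holds a copy


world_size = 4
# One hypothetical gradient vector per rank (stand-ins for .grad tensors).
per_rank_grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]

reduced = all_reduce_sum(per_rank_grads)
averaged = [[g / world_size for g in rank] for rank in reduced]

print(averaged[0])  # → [4.0, 5.0], identical on every rank
```

The real version operates in place on GPU tensors and overlaps communication with the backward pass; the arithmetic, though, is exactly this sum-then-divide.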

Deep dive →

My First CUDA Kernel: Learning GPU Programming from Scratch

First CUDA Kernel Success!

Built and ran custom CUDA kernels on an RTX 2050. Learned about parallel execution, compilation with ninja/nvcc, and discovered the 16384×16384 performance mystery.

Deep dive →

See all TIL →

Connect

GitHub · X/Twitter · Contact