AI SCIENTIST · NEURAL ARCHITECT · FULL STACK DEVELOPER
Full-stack engineer who ships products and publishes research. Sometimes both at once.
I've been building on the web for over twenty years. Full stack, frontend to backend, databases to deployment. I ship real products: AI platforms, content systems, developer tools, games. The kind of work where you own it end to end and nobody else is going to fix it if it breaks at 3am.
My interest in AI goes back further than most people expect. I was tinkering with chatbots in Visual Basic in the early 2000s, trying to give them something closer to human memory. That question stuck with me. Now I train and fine-tune LLMs, build inference infrastructure, and publish research on training methods like ASRL and Progressive LoRA Merging.
On the research side, I work on neural architectures that don't follow the standard playbook. I study how biological systems wire themselves and adapt, and try to translate those principles into something you can actually run. I built an analog neural network chip from discrete electronics to get closer to how synaptic computation really works at the hardware level.
I also care about the practical side of AI. I built Toxigon, which ended up ranking #9 globally in ChatGPT citations alongside Wikipedia and Forbes. I built Hitonet, a full AI chat and API platform. I released Hito-2B as an open-source model with structured nested reasoning. These aren't experiments. They're products people use.
Most interested in the problems where the right answer hasn't been written down yet.
A long-running investigation into biologically inspired computation: how organic systems wire, adapt, and generalize, and what that means for building architectures beyond the transformer paradigm.
Designed and fabricated from discrete electronics, no digital gates. Replicates biological synaptic computation at the hardware level.
Developed reflection-based CoT reasoning for LLMs independently, before it was adopted by leading AI labs.
View Repo →
A training paradigm that alternates between supervised and reinforcement learning phases for more stable, efficient model training.
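A minimal sketch of the alternating schedule, assuming a generic trainer loop; `sft_step` and `rl_step` are hypothetical stand-ins, and the paper linked below spells out the actual method.

# ASRL-style loop sketch: short SFT and RL phases interleaved inside
# each epoch, instead of one long SFT run followed by a separate RL run.
# `sft_step` and `rl_step` are hypothetical stand-ins, not the paper's API.

def sft_step(model, batch):
    ...  # one supervised next-token update on a labeled batch

def rl_step(model, batch):
    ...  # one reward-weighted update on sampled rollouts

def train_asrl(model, sft_loader, rl_loader, epochs=3, phase_len=64):
    for _ in range(epochs):
        sft_iter, rl_iter = iter(sft_loader), iter(rl_loader)
        done = False
        while not done:
            # Alternate a short supervised phase with a short RL phase.
            for step_fn, it in ((sft_step, sft_iter), (rl_step, rl_iter)):
                for _ in range(phase_len):
                    batch = next(it, None)
                    if batch is None:  # either stream exhausted: end the epoch
                        done = True
                        break
                    step_fn(model, batch)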
Read Paper →
Incrementally merges LoRA adapters during training, enabling efficient multi-task fine-tuning without catastrophic forgetting.
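A rough sketch of the merge step, assuming PEFT-style LoRA factors; the cadence and scaling here are illustrative, and the repo linked below has the real implementation.

import torch

def merge_lora_into_base(base_weight, lora_A, lora_B, alpha=16, rank=8):
    # Fold one adapter into the frozen base in place:
    # W <- W + (alpha / rank) * B @ A, the standard LoRA update,
    # applied incrementally during training rather than once at the end.
    with torch.no_grad():
        base_weight += (alpha / rank) * (lora_B @ lora_A)

# Hypothetical outer loop: train an adapter per task, merge it, then
# start a fresh adapter, so earlier tasks are baked into the base.
# for task in tasks:
#     lora_A, lora_B = train_adapter(base_weight, task)  # stand-in
#     merge_lora_into_base(base_weight, lora_A, lora_B)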
View Repo →
100% AI-generated content platform ranked #9 globally in ChatGPT citations, alongside Reddit, Wikipedia, and Forbes.
Visit →
Full-stack AI chat and API platform serving proprietary models with OpenAI-compatible endpoints.
Visit →
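Because the endpoints are OpenAI-compatible, the stock client should work with a swapped base URL; the URL, key, and model name below are placeholders, not documented values.

from openai import OpenAI

# Placeholder base URL, key, and model name; check the platform's docs.
client = OpenAI(base_url="https://api.hitonet.example/v1", api_key="YOUR_KEY")

resp = client.chat.completions.create(
    model="hito-2b",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)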
Open-source 2B reasoning model built on Qwen3.5. Uses structured nested cognitive tags and observable self-correction. +35 points over base on GSM8K.
View on HF →
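To give a flavor of what structured nested tags look like in practice, here is a hypothetical trace; the actual tag vocabulary is defined on the model card, and these names are illustrative only.

# Hypothetical trace of nested cognitive tags with observable
# self-correction; the real tag names are defined on the model card.
example_output = """\
<think>
  <plan>Compute 17 * 24 as 17*20 + 17*4.</plan>
  <step>17 * 20 = 340</step>
  <step>17 * 4 = 68</step>
  <check>340 + 68 = 408; the split 20 + 4 = 24 is valid, so keep 408.</check>
</think>
<answer>408</answer>
"""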
Public AI tools suite launched on Product Hunt. 152 AI-powered utilities across 14 categories.
Visit →
In 2025, a study of 10 million ChatGPT prompts found Toxigon at 4.1% of global citations, sitting next to Wikipedia, Reddit, and Forbes.
"We found one of the most influential sources in ChatGPT Search."Daniel Drabo, Peec AI
"Toxigon is the seventh most cited source for best electric cars searches, ranking above Cars.com and Reuters."Debra Williamson, LinkedIn
"I had never even heard of this website."Nate Tower, after finding it at 4.1% alongside Wikipedia and Forbes
"Toxigon appears in ChatGPT's top 10 citations at 4.1%, suggesting specialized relevance for certain query types."Eyeful Media, AI Search Citations Analysis
Open-sourced a 2B model that scores 60% on GSM8K where the base model sits at 25%. The trick isn't scale. It's structured nested reasoning with observable self-correction.
Read article →
LARQL decompiles transformer weights into a queryable graph. SQL over FFN knowledge. Insert facts without retraining. Inspect what the model actually knows.
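To make "SQL over FFN knowledge" concrete, a hypothetical session follows; the module, schema, and method names are illustrative guesses, and the article linked below describes the real interface.

# Hypothetical LARQL-style session; every name here is an illustrative
# guess, not the project's actual API.
from larql import WeightGraph  # assumed import

graph = WeightGraph.from_pretrained("hito-2b")

# Inspect what the FFN layers encode about a subject.
rows = graph.query(
    "SELECT relation, object, layer FROM ffn_facts WHERE subject = 'Paris'"
)

# Insert a fact without retraining by editing the decompiled graph.
graph.insert_fact(subject="Paris", relation="capital_of", object="France")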
Read article →
Full fine-tuning replaces a model's identity but costs $10,000. Progressive LoRA Merging does the same thing on one GPU for under $500.
Read article →
Everyone trains SFT first, then RL. I alternated them inside each epoch. The model converged in 3 epochs instead of 12.
Read article →
AI agents promise to remember. They can't. Here's why context compaction and stateless sessions break every memory system, and how a compaction agent plus QMD fixes it.
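A toy sketch of the compaction-agent half of the fix: distill the oldest turns into a durable memo instead of silently truncating them. `summarize` stands in for an LLM call, and the QMD side is covered in the article linked below.

def compact_history(history, budget, summarize):
    # Toy compaction agent: fold the oldest turns into a memo until the
    # history fits the budget. `summarize` is a stand-in for an LLM call.
    def size(msgs):
        return sum(len(m["content"].split()) for m in msgs)  # crude token proxy

    while size(history) > budget and len(history) > 2:
        oldest, history = history[:2], history[2:]
        memo = summarize(oldest)  # the memo survives across sessions
        history = [{"role": "system", "content": f"Memo: {memo}"}] + history
    return history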
Read article →
A solo experiment to prove SEO is about novelty, not rules. How Toxigon ended up at 4.1% of ChatGPT global citations alongside Wikipedia, Reddit, and Forbes.
Read article →
If you're working on something interesting, reach out.