Ouissam Drissi

AI SCIENTIST · NEURAL ARCHITECT · FULL STACK DEVELOPER

Full stack engineer who ships products and publishes research. Sometimes both at once.

About

I've been building on the web for over twenty years. Full stack, frontend to backend, databases to deployment. I ship real products: AI platforms, content systems, developer tools, games. The kind of work where you own it end to end and nobody else is going to fix it if it breaks at 3am.

My interest in AI goes back further than most people expect. I was tinkering with chatbots in Visual Basic in the early 2000s, trying to give them something closer to human memory. That question stuck with me. Now I train and fine-tune LLMs, build inference infrastructure, and publish research on training methods like ASRL and Progressive LoRA Merging.

On the research side, I work on neural architectures that don't follow the standard playbook. I study how biological systems wire themselves and adapt, and try to translate those principles into something you can actually run. I built an analog neural network chip from discrete electronics to get closer to how synaptic computation really works at the hardware level.

I also care about the practical side of AI. I built Toxigon, which ended up ranking #9 globally in ChatGPT citations alongside Wikipedia and Forbes. I built Hitonet, a full AI chat and API platform. I released Hito-2B as an open-source model with structured nested reasoning. These aren't experiments. They're products people use.

I'm most interested in the problems where the right answer hasn't been written down yet.

Research & Publications

Organic Neural Architectures

A long-running investigation into biologically inspired computation: how organic systems wire, adapt, and generalize, and what that means for building architectures beyond the transformer paradigm.

Ongoing

Analog Neural Network Chip

Designed and fabricated from discrete electronics, no digital gates. Replicates biological synaptic computation at the hardware level.

Hardware

Reflection-Based Chain-of-Thought

Developed reflection-based CoT reasoning for LLMs independently, before it was adopted by leading AI labs.

Sep 2024 View Repo →

ASRL: Alternating Supervised and Reinforcement Learning

A training paradigm that alternates between supervised and reinforcement learning phases for more stable, efficient model training.

Sep 2025 · IJSET Read Paper →
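The alternating schedule is the core idea: instead of running all supervised fine-tuning and then all reinforcement learning, the two phases take turns. A minimal toy sketch of that schedule on a one-parameter model (the loss, the hill-climbing "RL" stand-in, and all constants here are illustrative, not taken from the paper):

```python
import random

random.seed(0)

# Toy one-parameter "model": predict y = w * x, with true w = 2.0.
w = 0.0
lr = 0.1
data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]

def sft_step(w):
    # Supervised phase: gradient step on squared error per sample.
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad / len(data)
    return w

def rl_step(w):
    # RL phase stand-in: perturb the weight and keep the perturbation
    # only if reward (negative total error) improves.
    def reward(v):
        return -sum((v * x - y) ** 2 for x, y in data)
    candidate = w + random.gauss(0, 0.05)
    return candidate if reward(candidate) > reward(w) else w

# ASRL-style schedule: alternate the two phases inside each epoch,
# rather than exhausting one before starting the other.
for epoch in range(10):
    w = sft_step(w)
    w = rl_step(w)

print(round(w, 2))
```

The point of the sketch is only the interleaving in the final loop; the real method operates on LLM weights with actual SFT losses and RL objectives.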

Progressive LoRA Merging

Incrementally merges LoRA adapters during training, enabling efficient multi-task fine-tuning without catastrophic forgetting.

Dec 2025 View Repo →
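The merge itself is a low-rank update folded into the base weights, repeated once per task so each new adapter trains on top of the previously merged result. A minimal numpy sketch (the shapes, scaling term, and random "trained" adapters are illustrative assumptions, not values from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)

d, rank = 8, 2                    # hidden size, LoRA rank
W = rng.normal(size=(d, d))       # base weight matrix

def merge(W, A, B, alpha=1.0, rank=2):
    """Fold a low-rank adapter into the base: W + (alpha / rank) * B @ A."""
    return W + (alpha / rank) * (B @ A)

# Progressive schedule: after each task, the trained adapter is merged
# into the base, and the next task's adapter starts fresh on top of
# the merged weights instead of stacking adapters side by side.
for task in range(3):
    A = rng.normal(size=(rank, d))   # stand-in for a trained adapter factor
    B = rng.normal(size=(d, rank))
    W = merge(W, A, B)

print(W.shape)
```

Merging after each task keeps only one set of full-rank weights around, which is where the multi-task efficiency claim comes from.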

More coming.

Active work in progress on novel architectures.

Hugging Face →

Projects

Hitonet.com

Full-stack AI chat and API platform serving proprietary models with OpenAI-compatible endpoints.

Visit →
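OpenAI-compatible means existing tooling can talk to the platform by swapping the base URL. A sketch of the standard chat-completions request shape using only the Python standard library (the `/v1` path and the `hito-2b` model name are assumptions for illustration; check the platform's docs for real values, and note the request is built but not sent):

```python
import json
from urllib.request import Request

# Hypothetical base URL and model name -- assumptions, not documented values.
BASE_URL = "https://hitonet.com/v1"
API_KEY = "YOUR_API_KEY"

payload = {
    "model": "hito-2b",
    "messages": [
        {"role": "user", "content": "Explain LoRA in one sentence."}
    ],
}

# Standard OpenAI-style chat completion request (constructed, not sent).
req = Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

print(req.full_url)
```

Because the wire format matches OpenAI's, official client libraries that accept a custom base URL should work unchanged.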

Hito-2B

Open-source 2B reasoning model built on Qwen3.5. Uses structured nested cognitive tags and observable self-correction. +35 points over base on GSM8K.

View on HF →

Toolena.com

Public AI tools suite launched on Product Hunt. 152 AI-powered utilities across 14 categories.

Visit →

People Noticed

In 2025, a study of 10 million ChatGPT prompts found Toxigon at 4.1% of global citations, sitting next to Wikipedia, Reddit, and Forbes.

"We found one of the most influential sources in ChatGPT Search."
Daniel Drabo, Peec AI
"Toxigon is the seventh most cited source for best electric cars searches, ranking above Cars.com and Reuters."
Debra Williamson, LinkedIn
"Toxigon appears in ChatGPT's top 10 citations at 4.1%, suggesting specialized relevance for certain query types."
Eyeful Media, AI Search Citations Analysis
#9
globally in ChatGPT citations

Blog

AI / Research

Hito 2B: A Small Model That Actually Reasons

Open-sourced a 2B model that scores 60% on GSM8K where the base model sits at 25%. The trick isn't scale. It's structured nested reasoning with observable self-correction.

Read article →
AI / Tools

The Model Is a Database. You Just Couldn't Query It Until Now.

LARQL decompiles transformer weights into a queryable graph. SQL over FFN knowledge. Insert facts without retraining. Inspect what the model actually knows.

Read article →
AI / Research

Body Snatching: Growing a New Model Inside Someone Else's Skeleton

Full fine-tuning replaces a model's identity but costs $10,000. Progressive LoRA Merging does the same thing on one GPU for under $500.

Read article →
AI / Research

ASRL: Why Supervised and Reinforcement Learning Should Take Turns

Everyone trains SFT first, then RL. I alternated them inside each epoch. The model converged in 3 epochs instead of 12.

Read article →
AI / Engineering

OpenClaw's Memory Problem: Why Your Agent Can't Keep Promises

AI agents promise to remember. They can't. Here's why context compaction and stateless sessions break every memory system, and how a compaction agent plus QMD fixes it.

Read article →
SEO / Research

I Ranked Next to Wikipedia in ChatGPT. Here's What Actually Happened.

A solo experiment to prove SEO is about novelty, not rules. How Toxigon ended up at 4.1% of ChatGPT global citations alongside Wikipedia, Reddit, and Forbes.

Read article →
View all posts →

Get in Touch

If you're working on something interesting, reach out.