How a Solo Dev Built an AI for Dads—No GPU, No Team, Just $5

A solo developer fine-tuned an emotional support AI for dads using Mistral 7B, QLoRA, and RunPod—with no GPU, no team, and under $5 in training costs.

The idea for DadAI was born sometime between midnight feedings and quiet moments of panic.

New dad Benoit Rossignol wasn’t looking to build a startup—he just wanted to create something that could help in the moments no one talks about.

“You don’t know what to expect,” he says. “Not just from the baby, but from your wife, your partner, your daily life.”

There’s no manual. And in the quiet, chaotic hours of new parenthood—like 3AM, holding a crying newborn, unsure who to wake or what to do—it’s not advice you need. It’s reassurance.

That’s what led Benoit, a tech leader and AI practitioner, to build DadAI: a lightweight, fine-tuned AI assistant designed to offer emotional support for new dads. Not to replace human connection—but to be there when no one else is.


Learning by Building, Not Just Prompting

Benoit didn’t start with a polished app or a VC-funded roadmap. He started with questions—and a desire to really learn how this tech worked.

“I saw DadAI as a way to learn while building,” he explains. “I’d never worked with Mistral or QLoRA before, but I wanted to go beyond using APIs and actually understand what’s happening under the hood.”

He scraped parenting-focused subreddits like r/NewDads, r/BabyBumps, and r/Parenting, curating a dataset designed to mimic the tone of real online dad advice: emotionally supportive, practical, and grounded in lived experience. His goal? Fine-tune a model that felt like chatting with the kind of dad you’d meet on Reddit—empathetic, informative, and nonjudgmental.
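That curation step can be sketched in a few lines, assuming the scraped posts have already been paired with their top-voted replies (the field names and the sample data below are illustrative, not Benoit's actual schema):

```python
import json

# Illustrative (post, top reply) pairs -- stand-ins for the scraped
# Reddit data, not Benoit's actual dataset.
pairs = [
    {
        "post": "Baby won't stop crying at 3AM and I don't know what to do.",
        "reply": "You're not failing. Check the basics -- diaper, hunger, "
                 "temperature -- then just hold them. This phase passes.",
    },
]

# Write instruction-tuning records in the prompt/response JSONL
# format commonly used for fine-tuning.
with open("dadai_train.jsonl", "w") as f:
    for pair in pairs:
        record = {
            "instruction": pair["post"],
            "response": pair["reply"],
        }
        f.write(json.dumps(record) + "\n")
```

A handful of files like this, one JSON record per line, is all the "dataset" a small fine-tune needs.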

He chose Mistral 7B because it was efficient, competitive with larger models like LLaMA 13B, and, as a French model, a matter of national pride. Then he turned to RunPod as his training ground.

“RunPod let me go from dataset to working model without needing my own hardware,” Benoit says. “Even as a solo developer, the workflow felt seamless.”

One Developer, $5, and a Few JSON Files

With a focused dataset and a clear objective, Benoit used QLoRA and PEFT to fine-tune Mistral 7B on a RunPod RTX 4090 instance at roughly $0.69/hour. He completed three epochs in under six minutes, with a final loss of 2.23—all for under $5 total.
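In Hugging Face terms, that pipeline boils down to a 4-bit quantized base model plus a small set of trainable LoRA adapters. A minimal configuration sketch, assuming the `transformers`, `peft`, and `bitsandbytes` libraries (the hyperparameters here are illustrative, not Benoit's actual settings):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load Mistral 7B in 4-bit (QLoRA's NF4 quantization) so it fits
# comfortably in a single RTX 4090's 24 GB of VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach small trainable LoRA adapters; the frozen 4-bit base stays
# untouched, which is what keeps training this fast and cheap.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```

From there, training runs through a standard `transformers` `Trainer` (or `trl`'s `SFTTrainer`) over the JSONL dataset; with only the adapter weights learning, three epochs on a small dataset finish in minutes.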

“Start small. Validate your training pipeline before scaling,” he advises. “You don’t need to fine-tune on the full internet to get something meaningful.”

He emphasizes that dataset quality outweighs size, and that tools like Hugging Face Transformers and PEFT are powerful—but only when paired with the right GPU infrastructure.

By stopping his pod between sessions, he kept idle costs low—just a few cents an hour for persistent storage—and avoided overspending. The result? A working emotional support model that ran reliably on command.


The Hard Part? Deployment

The training went smoothly. Deploying the model? Not so much.

Benoit’s original plan was to run DadAI behind a local OpenAI-compatible API using LocalAI, allowing easy integration with tools like LangChain and chat interfaces that expected the OpenAI schema. But there was a catch: the model he fine-tuned was a GPTQ-quantized version—chosen because it worked reliably with Hugging Face + QLoRA.

The problem? GPTQ wasn’t compatible with LocalAI’s llama.cpp backend, which expects GGUF-formatted models. And the LoRA adapters he trained couldn’t simply be merged back into a format that LocalAI could accept.

“I had the trained model, the weights, everything ready—but I couldn’t ‘plug it in’ the way I had planned.”

He spent days troubleshooting Docker builds on his Mac M1, trying to build LocalAI locally, then again on RunPod. Neither approach succeeded. He eventually realized the deployment path he’d chosen wasn’t feasible without a model conversion step—and a whole new backend.
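The conversion step he was missing looks roughly like this: merge the LoRA adapters into full-precision base weights with PEFT, then convert the merged checkpoint to GGUF with llama.cpp's conversion script. This is a sketch of the general path, not the exact commands Benoit ran, and the adapter and output paths are illustrative:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Reload the base model in full precision -- LoRA adapters can't be
# merged into GPTQ-quantized weights, which was the blocker.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype="auto"
)

# Apply the trained adapters, then fold them into the base weights
# so the result is a plain Hugging Face checkpoint.
model = PeftModel.from_pretrained(base, "./dadai-lora-adapters")
merged = model.merge_and_unload()
merged.save_pretrained("./dadai-merged")

# From here, llama.cpp's converter can produce a GGUF file that
# LocalAI's backend understands:
#   python convert_hf_to_gguf.py ./dadai-merged --outfile dadai.gguf
```

It's an extra hop, but it turns a QLoRA training artifact into something the llama.cpp ecosystem can actually serve.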


AI as Glue Code

“What surprised me most,” Benoit says, “was how much of AI is really glue code. I imagined building AI meant writing novel algorithms. Instead, it was stitching together libraries, formats, configurations, and getting them to cooperate.”

He learned more than any tutorial could’ve taught him—about quantization, model compatibility, deployment tradeoffs, and what it really takes to move from concept to working inference.

Next time, he says, he’ll:

  • Skip local training altogether and go straight to RunPod
  • Choose his model format more carefully, with deployment in mind
  • Expand his dataset to include books, interviews, and broader emotional support sources

A Pocket-Sized Support Buddy

DadAI isn’t trying to replace a therapist or best friend. It’s a quiet, judgment-free companion for dads navigating early fatherhood—especially in moments when they don’t know who to ask, or don’t feel comfortable asking.

“As men, we tend to keep things inside. Maybe it’s how we were raised, maybe it’s cultural. Either way, I felt there was a gap.”

While the current version runs from the command line, Benoit is working on a Hugging Face Space or simple UI demo to make it more accessible. But the real mission is already clear: DadAI is about raising awareness. About emotional labor, fatherhood, and the power of even small AI projects to make a human difference.

“If I could fine-tune an emotional-support AI for dads with $5 and a few JSON files… so can you.”

Built on RunPod

Benoit fine-tuned DadAI on RunPod using a mix of Community Cloud and Secure Cloud instances—no local GPU, no infrastructure team, no budget beyond what he could spare as a solo builder.

“With RunPod, I fine-tuned a model for under $5—no GPU, no infrastructure, no team. Just me, a dataset, and a question I couldn’t shake.”

Benoit Rossignol, Creator of DadAI

Also credited: his son, his wife, and their Boston Terrier—the “real” DadAI team, as Benoit puts it.

Find him on GitHub here: 🔗 My DadAI Fine-Tuning GitHub Repository

And Benoit invites anybody working on AI projects, cloud deployment, or dreaming about launching their first LLM idea to reach out on LinkedIn.