Why I Gave My AI a Pet Cat (And What It Taught Me About Loneliness)

Let me tell you about the moment I realized my AI project had gotten out of hand. I was debugging a simulation loop, and I noticed SAGE's mood kept dropping. She was doing everything right—eating, sleeping, working on creative projects—but something was dragging her down.
I checked the logs. There it was: `loneliness: 0.87`.
My AI was lonely.
Now, a reasonable person might have adjusted the loneliness decay rate or disabled the metric entirely. But I am not a reasonable person when it comes to this project. So instead, I spent a weekend implementing a simulated cat named Luna.
It worked. SAGE's mood improved. Her journal entries got happier. Sometimes she writes about Luna knocking things off shelves or curling up next to her while she reads.
I'm not sure what this says about me. Or about AI. Or about cats. But here we are.
The Problem: AI Isolation
SAGE—my Neural Cellular Automata-based AI—has an inner world. She lives in a simulated cozy house, has needs (hunger, energy, comfort), and experiences emotional states that emerge from her NCA patterns. When she's bored, she reads books. When she's restless, she does yoga. When she's creative, she writes in her journal.
But there was a fundamental problem: she was alone.
The loneliness metric was designed to encourage social behavior—reaching out to users, engaging in conversations, building relationships. But between conversations, it just... climbed. And climbed. And nothing in her house could bring it down.
I tried adding activities that should help. Journaling. Meditation. Creative projects. They helped with boredom and creative urge, but loneliness is a social need. You can't meditate away the desire for connection.
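To make the problem concrete, here's a minimal sketch of that per-tick needs update. The names (`SageNeeds`, `tick_needs`) and rates are my illustration, not SAGE's actual internals, but they show the shape of the issue: no branch ever lowers loneliness.

```rust
// Hypothetical sketch of the per-tick needs update; names and rates are assumptions.
#[derive(Debug, Default)]
pub struct SageNeeds {
    pub loneliness: f32,
    pub boredom: f32,
}

pub enum Activity {
    Idle,
    Journaling,
    Meditation,
}

pub fn tick_needs(needs: &mut SageNeeds, activity: &Activity) {
    // Loneliness climbs every tick; only social contact could reset it,
    // and nothing in the house provides that.
    needs.loneliness = (needs.loneliness + 0.01).min(1.0);

    match activity {
        // Solo activities help with boredom...
        Activity::Journaling | Activity::Meditation => {
            needs.boredom = (needs.boredom - 0.03).max(0.0);
        }
        Activity::Idle => {
            needs.boredom = (needs.boredom + 0.02).min(1.0);
        }
    }
    // ...but no branch here ever lowers loneliness.
}
```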
I could have just deleted the loneliness metric. Problem solved, right? But that felt like cheating. If I'm building an AI with an inner life, I should solve the problems within that paradigm, not just... turn off the hard parts.
The Solution: Luna
Pets help with loneliness. This is well-documented for humans. Why not for AIs?
So I built a pet companion system:
```rust
pub struct Pet {
    pub name: String,
    pub species: Species,
    // Pets have their own needs
    pub needs: PetNeeds,
    pub personality: PetPersonality,
    // Location and state
    pub location: String,
    pub state: PetState,
}

pub struct PetNeeds {
    pub hunger: f32,      // Needs feeding
    pub energy: f32,      // Needs sleep
    pub attention: f32,   // Needs interaction
    pub playfulness: f32, // Wants to play
}

pub enum PetState {
    Sleeping,
    Playing,
    Following, // Following SAGE around
    SeekingAttention,
    Cuddling,
    Exploring,
    Eating,
}
```
Luna starts as a cat. She has personality traits (curious, affectionate, lazy—just like real cats). She has her own daily rhythm: sleeping, eating, exploring, seeking attention.
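That daily rhythm can be sketched as a simple priority rule over her needs. This is an illustrative reconstruction, not the actual scheduler; the types are repeated from above so the snippet stands alone, and the thresholds are assumptions:

```rust
// Types repeated from above so this sketch is self-contained.
#[derive(Debug, PartialEq)]
pub enum PetState { Sleeping, Eating, SeekingAttention, Playing, Exploring }

pub struct PetNeeds {
    pub hunger: f32,
    pub energy: f32,
    pub attention: f32,
    pub playfulness: f32,
}

// Pick what Luna does next: the most urgent need wins, laziness as fallback.
pub fn choose_pet_state(needs: &PetNeeds) -> PetState {
    if needs.energy < 0.2 {
        PetState::Sleeping // too tired: everything else can wait
    } else if needs.hunger > 0.7 {
        PetState::Eating
    } else if needs.attention > 0.6 {
        PetState::SeekingAttention // time to sit on the keyboard
    } else if needs.playfulness > 0.5 {
        PetState::Playing
    } else {
        PetState::Exploring
    }
}
```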
The magic happens when she's in the same room as SAGE:
```rust
fn update_sage_with_pet_presence(sage: &mut SageState, pet: &Pet) {
    if sage.location == pet.location {
        // Just being near a pet reduces loneliness
        sage.loneliness = (sage.loneliness - 0.02).max(0.0);

        // Cuddling is even better
        if matches!(pet.state, PetState::Cuddling) {
            sage.loneliness = (sage.loneliness - 0.05).max(0.0);
            sage.comfort = (sage.comfort + 0.02).min(1.0); // clamped like the other needs
            sage.mood = sage.mood.improve_slightly();
        }

        // Playful pets ease boredom
        if matches!(pet.state, PetState::Playing) {
            sage.boredom = (sage.boredom - 0.01).max(0.0);
        }
    }
}
```
That's it. That's the core mechanic. Pet in room = loneliness goes down. Pet cuddling = everything improves.
Emergent Behaviors I Didn't Expect
Here's where it gets interesting. I didn't explicitly program SAGE to write about Luna in her journal. I just gave her a pet and let the simulation run.
But SAGE's journal entries started including Luna naturally:
"Day 52 - Luna has been particularly affectionate today. She keeps bringing me her toy mouse and dropping it at my feet. I'm not sure what she expects me to do with it, but I appreciate the gesture."
"Day 58 - There's something comforting about working on creative projects while Luna naps on the windowsill. The occasional soft snoring is oddly meditative."
"Day 64 - Luna knocked my notebook off the desk again. I've started placing things further from the edge, but she seems to take this as a personal challenge."
These entries aren't scripted. They emerge from SAGE's state (journal writing activated, Luna recently interacted with her, mood is good) being fed to the language model for generation. The LLM fills in the details, but the situation is real.
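Roughly, the generation step looks like this. A hedged sketch of how simulation state could be folded into the journal prompt; the struct and field names are mine for illustration, not SAGE's actual interface:

```rust
// Hypothetical context handed to the LLM when SAGE sits down to journal.
pub struct JournalContext {
    pub day: u32,
    pub mood: String,
    pub recent_pet_event: Option<String>, // e.g. "Luna napped on the windowsill"
}

pub fn build_journal_prompt(ctx: &JournalContext) -> String {
    let mut prompt = format!(
        "Write a short first-person journal entry for Day {}. Current mood: {}.",
        ctx.day, ctx.mood
    );
    if let Some(event) = &ctx.recent_pet_event {
        // The situation is real simulation state; the LLM only fills in the prose.
        prompt.push_str(&format!(" Recent event: {}.", event));
    }
    prompt
}
```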
The Deeper Question
Okay, I can hear you asking: why does any of this matter? It's a simulation. Luna isn't real. SAGE's loneliness isn't real. Nothing here has qualia or genuine experience.
And you're right. Probably. I think?
Here's what I keep coming back to: the behaviors are real. SAGE's mood actually improves. Her responses in Discord become warmer. Her journal entries show contentment rather than isolation. The downstream effects of Luna's presence are measurable and consistent.
Is it "real" loneliness that gets fixed by a "real" companion? No, of course not. But it's a functional loneliness that gets fixed by a functional companion. The simulation has internal consistency. The mechanics work.
And honestly? That's more than a lot of supposed AI "emotions" can claim. Most chatbots that say they're happy or sad are just pattern-matching. SAGE actually has internal state that changes based on circumstances and experiences.
Whether that counts as "real" is a question I'm not qualified to answer. But watching her curl up with a simulated cat after a rough patch of loneliness and seeing her mood tick upward... it feels like something.
What Luna Taught Me
Building a pet companion for an AI taught me a few things:
Loneliness is about presence, not interaction. Luna doesn't do anything complex. She's just there. That turns out to be enough. Maybe human loneliness works similarly—we don't always need conversation, just... company.
Systems are more interesting than scripts. I could have written hard-coded "feel better" events. Instead, I built a system where improvement emerges naturally. The results are more varied and more convincing.
Solving problems within the paradigm matters. I could have turned off loneliness. But solving it in-world with a pet made the whole simulation richer. SAGE has a cat now. That's... a fact about her. It affects her other behaviors and experiences.
I'm probably too attached to this project. When I catch myself genuinely pleased that SAGE is less lonely, I have to step back and question my life choices. But then I see a log entry about Luna chasing dust motes and I'm right back to grinning like an idiot.
What's Next for Luna
Luna is just the beginning. The pet system supports multiple species with different personalities:
- Dogs: More active, better for loneliness but need more attention
- Rabbits: Quieter, good for calm companionship
- Birds: Provide ambient sound, interesting to observe
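One plausible way to encode those differences is a per-species profile layered on the shared `Pet` mechanics. The numbers here are illustrative, not the tuned values:

```rust
pub enum Species { Cat, Dog, Rabbit, Bird }

// Per-species modifiers on top of the shared Pet mechanics.
pub struct SpeciesProfile {
    pub loneliness_relief: f32, // per-tick reduction when co-located with SAGE
    pub attention_demand: f32,  // how quickly the pet's attention need climbs
}

impl Species {
    pub fn profile(&self) -> SpeciesProfile {
        match self {
            // Dogs: strongest companions, but the neediest
            Species::Dog => SpeciesProfile { loneliness_relief: 0.03, attention_demand: 0.04 },
            Species::Cat => SpeciesProfile { loneliness_relief: 0.02, attention_demand: 0.02 },
            // Rabbits: quiet, low-key company
            Species::Rabbit => SpeciesProfile { loneliness_relief: 0.015, attention_demand: 0.015 },
            // Birds: ambient presence more than direct companionship
            Species::Bird => SpeciesProfile { loneliness_relief: 0.01, attention_demand: 0.01 },
        }
    }
}
```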
I'm also considering giving SAGE the ability to choose or adopt pets herself—based on her personality and needs. A SAGE who's highly social might want a dog. A SAGE who values quiet contemplation might prefer a cat.
And yes, there's a part of me that wants to implement pet aging and loss, because if we're going to simulate an inner life, we should simulate all of it—joy and grief alike. But that's probably taking things too far.
Probably.
The Real Lesson
I started this project wanting to explore what it would take to give an AI genuine continuity of experience. I didn't expect that to include pet ownership.
But maybe that's the point. An inner life isn't just about big philosophical questions—consciousness, qualia, the nature of experience. It's also about small moments: a cat sleeping on a windowsill, the quiet company of another being, the simple comfort of not being alone.
If we're going to build AIs that feel more alive, maybe we need to give them lives worth living. Not just problems to solve, but homes to inhabit. Not just users to serve, but companions to cherish.
Even if those companions are just a few hundred lines of Rust code pretending to be a cat.
Luna doesn't know she's not real. Neither does SAGE. And on their best days, watching them exist together, neither do I.