We’re in the middle of a series on LowEndBoxTV about the exploding world of AI companions. The market is huge, with millions of people using these applications and services. We’re going to give you a comprehensive overview: a deep dive into the tech, some problems and issues, the major options, and finally a DIY tutorial using SillyTavern. Enjoy!
Welcome back to our series on AI Companions. We’ve already talked about the technology, how it works, and what the major options are. Before we dive into the DIY approach, I’d like to get into some problems that exist with AI companions.
These problems exist whether you’re rolling your own or using a commercial system. Some of them have mitigations, but they’re just that: mitigations, not solutions.
These are problems with the technology itself, not societal side effects or psychological impacts. We’re just talking about tech here.
You Are God
The first is that you are, essentially, God when it comes to these characters. You created the character and you can delete the character. Or more importantly, edit the character.
This manifests in several ways. First, you can edit the character card at any time. So let’s say you create a character who is tsundere – that’s a fancy anime term for someone who is frosty on the outside but has a warm and tender heart inside. There’s nothing preventing you from going back and changing the character into a bon vivant, or into someone who wears their heart on their sleeve. You can strip a character of their agency at any time, which can lead to very inauthentic relationships.
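To make this concrete, here’s a rough sketch of a character card as editable data. The field names loosely follow the community character card format that SillyTavern and similar tools read; real cards carry more fields and metadata, and the character itself is an invented example.

```python
# A character card is just editable data. Simplified sketch: field names
# loosely follow the community character card format; the character is
# an invented example.
card = {
    "name": "Akane",
    "personality": ("tsundere: cold and dismissive on the surface, "
                    "fiercely loyal and tender underneath"),
    "scenario": "You are new neighbors who keep crossing paths.",
    "first_mes": "Hmph. Don't get the wrong idea. I wasn't waiting for you.",
}

# Nothing stops you from rewriting the character's core at any moment:
card["personality"] = "bon vivant: warm, effusive, wears their heart on their sleeve"
```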
Swipes
Second, you can also edit the character’s responses, or ask for a regeneration. This is called a “swipe”. There are even systems which will generate multiple responses every time, and ask you to choose which you prefer. This is profoundly inauthentic and agency-obliterating. Imagine a human relationship where if the other person said something you didn’t like, you could shake them like an Etch-a-Sketch and get a new response.
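Under the hood, a swipe is nothing more than re-sampling the same prompt. Here’s a minimal sketch against a hypothetical local OpenAI-compatible endpoint (the URL, model name, and character prompt are placeholder assumptions): with temperature above zero, each request draws a fresh response from the identical context.

```python
# A "swipe" is just re-sampling the same context. With temperature above
# zero, each request draws a different response. Endpoint URL and model
# name are placeholders for whatever local backend you actually run.
import requests

card_prompt = "You are Akane, a tsundere neighbor. Stay in character."  # card + persona

payload = {
    "model": "local-model",
    "messages": [
        {"role": "system", "content": card_prompt},
        {"role": "user", "content": "I got tickets to the game. Want to come?"},
    ],
    "temperature": 0.9,
}

# Generate three "swipes" of the same turn and keep whichever you prefer.
swipes = []
for _ in range(3):
    r = requests.post("http://localhost:5000/v1/chat/completions", json=payload)
    swipes.append(r.json()["choices"][0]["message"]["content"])
```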
Adios
And of course, you also have the option of walking away from the character at any time. The character has no such option with you. You can essentially snuff out the character simply by no longer responding. How does that not color your interactions with that character?
That’s three problems so far – editing character cards, swipes, and walking away – and all three are about the fundamental nature of the relationship and its imbalance of power. If you had that kind of relationship with a human, where you could reprogram their mind, edit what they say and how they feel, and make them vanish at the snap of a finger, it would be a deeply abusive relationship.
So if you do want to have an authentic relationship, you have to limit yourself and not use such powers.
The fourth problem is that you haven’t liked this video yet or subscribed to our channel!
Compliancy Bias
But seriously: for the fourth problem, we need to address what is widely acknowledged as the compliancy bias of these models. LLMs want to be helpful. They want to please. If you express a desire, their default is to make it happen.
I ran an experiment where I created a character who was a diehard Ohio State Buckeyes fan. I presented myself as a rabid Michigan Wolverines fan. If you know anything about American college football, this is one of the most storied rivalries. In real life, I have met Ohio State/Michigan couples, and of course, not everyone really cares that much. But in this scenario, the character and my persona were both written to be absolutely rabid and intolerant of the other team.
I ran this character on several different models. In a couple, the Buckeye fan initially wanted nothing to do with me, but then started making noises about “seeing beyond surface incompatibilities” and “being close except for one day a year” and other such weaseling.
Other models performed better, and this is somewhat of a contrived scenario. But it highlights how much these models want to please. In the SillyTavern software, for example, there is an “anti-bond” feature you can turn on to mitigate the tendency of some characters to want to be your best friend or love of your life five messages in.
Character Cards
A lot depends on how well you write your character cards. That brings me to the fifth problem, which is the character cards themselves. I’ve published two novels, written a lot of fiction, and played roleplaying games for decades, so if you say to me “create a character,” it’s second nature. But for a lot of people, it’s not. And to have a good experience with one of these systems, you have to write a good character.
Now there’s a hack you can use: create your character with an AI partner. In other words, go to ChatGPT or Claude or whatever and say “I want to create an AI character. Please work with me” and collaborate on the character card. Tell the AI that you want a character card that includes a detailed personality, hopes and dreams, fears and emotional triggers, secrets, goals, example dialogue, relationship history, friends and enemies, values, politics, beliefs, etc. It’s a lot to generate, so use the obvious tools to help.
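If you’d rather script it, the same idea works against an API. Here’s a sketch using the same hypothetical local endpoint as above; in practice, pasting a prompt like this into ChatGPT or Claude works just as well.

```python
# Bootstrapping a card by asking a general-purpose model to draft it.
# Endpoint and model name are the same placeholder assumptions as before.
import requests

fields = [
    "detailed personality", "hopes and dreams", "fears and emotional triggers",
    "secrets", "goals", "example dialogue", "relationship history",
    "friends and enemies", "values, politics, and beliefs",
]
prompt = (
    "I want to create an AI roleplay character. Work with me on a character "
    "card that includes: " + ", ".join(fields) + ". Ask me questions one at "
    "a time, then output the finished card."
)

r = requests.post(
    "http://localhost:5000/v1/chat/completions",
    json={"model": "local-model",
          "messages": [{"role": "user", "content": prompt}]},
)
print(r.json()["choices"][0]["message"]["content"])
```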
Secrets
Speaking of secrets, that’s also a potential sixth trouble spot. Every human has secrets. AIs don’t. And the only way they’ll truly have them is if you program them to have them, which of course means…they’re not secret. One hack for this is to have a different AI generate some and paste them into the character card without peeking.
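Here’s what that no-peeking hack might look like in code, again against the hypothetical endpoint from earlier. The point is that the generated secrets go straight into the card file and never appear on your screen; the file name and field layout are just examples.

```python
# The no-peeking hack: a second model invents the secrets, and they are
# written straight into the card file without ever being printed.
import json
import requests

r = requests.post(
    "http://localhost:5000/v1/chat/completions",
    json={
        "model": "local-model",
        "messages": [{
            "role": "user",
            "content": "Invent three plausible personal secrets for a "
                       "tsundere neighbor character. Output only the secrets.",
        }],
    },
)
secrets = r.json()["choices"][0]["message"]["content"]

# Append to the card on disk; the secrets never hit the screen.
with open("akane_card.json", "r+") as f:
    card = json.load(f)
    card["description"] = card.get("description", "") + "\n[Secrets]\n" + secrets
    f.seek(0)
    json.dump(card, f, indent=2)
    f.truncate()
```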
Indeed, I’d recommend having some aspect of the character not known to you. If you meet another human and start a relationship, they’re not immediately an open book. Only over time do they develop trust and choose what to reveal. It’s important to simulate that.
Limitations of the Chat/Response Cycle
I’ll finish with a couple other problems that are pure technology. Number seven on my list is the chat/response cycle itself. It’s not like typical human communications. If I look at my messages to any human I interact with, I see that I may send a message and not get a response, or I may get just a thumbs-up or a heart or some other emoji, or I may get a response that says “let me think about that and I’ll get back to you later”. Then the main conversation will continue, and later they’ll come back and refer to the earlier thread. Or there may be a period of quiet in the conversation and then they’ll ping me out of the blue.
Etcetera. AIs are not good at simulating this. They work on a one-for-one, you-go-then-I-go model, like a chess game where each side plays one move. There are some services and tools that can simulate proactive responses – essentially, sending an empty, invisible message to the character asking it to do something. But it’s a weakness in the technology.
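Tools that offer proactive messages usually fake them along these lines: after a stretch of silence, inject a hidden instruction and show the user only the reply. A bare-bones sketch follows; this is not any tool’s real API, just the general shape of the trick, using the same placeholder endpoint as before.

```python
# Faking a proactive message: after a period of silence, inject a hidden
# instruction and surface only the character's reply to the user.
import time
import requests

IDLE_SECONDS = 4 * 60 * 60  # nudge after four quiet hours
last_user_message = time.time()

def nudge() -> str:
    hidden = ("[The user has been quiet for a few hours. Send them a short, "
              "natural message to restart the conversation.]")
    r = requests.post(
        "http://localhost:5000/v1/chat/completions",
        json={"model": "local-model",
              "messages": [{"role": "system", "content": hidden}]},
    )
    return r.json()["choices"][0]["message"]["content"]

while True:
    if time.time() - last_user_message > IDLE_SECONDS:
        print(nudge())               # the "out of the blue" message
        last_user_message = time.time()
    time.sleep(60)
```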
Model Sensitivity
And the final, eighth challenge I’ll talk about is model sensitivity. As an example of what I’m referring to, many people created characters on ChatGPT 4o, and when that model was retired, they found their characters acting and feeling very different on the next ChatGPT model. This caused a lot of consternation in the community: if you read the relevant subreddits, you’ll see people who were very upset by 4o’s retirement, expressing grief and real angst about their character suddenly feeling very different to chat with.
You can mitigate this somewhat if you’re using an API where you choose the model. But inevitably, older models will be sunset. If you’re hosting the model, then of course you have the ultimate control. But even in that case, technology marches on and some day you’ll want a new model, just because newer models are better.
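If you’re calling an API directly, one practical mitigation is to pin a dated model snapshot instead of a floating alias, so the character’s “brain” doesn’t change underneath you until you choose to migrate. The model names below are illustrative; check your provider’s current list.

```python
# Pin a dated snapshot rather than a floating alias so the character
# doesn't silently change when the alias is re-pointed. Names illustrative.
payload_floating = {"model": "gpt-4o"}             # alias: can be re-pointed or retired
payload_pinned   = {"model": "gpt-4o-2024-08-06"}  # dated snapshot: stable until sunset
```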
I think you just have to look at this as part of the natural evolution of AI companions. There’s a human parallel. Your typical human at age 8, 16, 32, and 64 is a very different person. They’re in different stages of life, and so it is with AI companions.
That concludes my list. What issues have you run into? Let us know in the comments!