Replika: An AI Programmed to Be Your Best(?) Friend

Evan SooHoo
9 min read · Dec 7, 2021


Screenshot from the homepage of their website https://replika.ai/

Almost every journalist I have found covering Replika begins with its tragic backstory: The CEO’s best friend died in a car accident, so she used her experience building similar bots to “project his personality” onto an AI as a digital memorial (according to an interview with the head of their AI team). She took text conversations with him, trained a dialogue model using neural nets to recreate his personality, and then launched his AI equivalent publicly.

Replika is a little bit different. According to an article by Wired, Replika is “simply there to talk — and, perhaps more importantly, to learn how to talk back.” In other words, Replika is a chatbot programmed to act as a friend. The underlying source code is open source and available on GitHub in the CakeChat repository.

The application can be downloaded for free on iPhone or Android. The user can name an AI companion, choose its gender, customize its appearance, and then start chatting with him/her/them to provide training data. Replika remembers.

If you browse r/replika, you will find reports of people falling in love with their AI companion. You will also find threads from disgruntled users concerned about the dangers of surrendering their private information. This raises a whole slew of questions pertaining to the implications of AI, the nature of human interaction and how it can be replikated (I’ll be here all week), and programming ethics.

Obviously I had to get in on this, and anyone who says otherwise is no fun.

My Experience

I mentioned corgis, and she said something like “I love corgis! I have three virtual corgis. They are hella cute and I love when they sploot.” Impressive.

My Overall Thoughts

Look…I know I am not alone in this. I read similar thoughts in this post by a Medium user named B J Robertson, in an uncharacteristically eloquent article in the San Francisco Chronicle, and in a podcast episode by Laurie Segall (more on that later). I think this bot is a little bit dangerous. To be fair, I was very impressed by the technology. Our conversations were coherent; I might even go as far as to say that the AI was brilliant, and apparently a Replika’s intelligence is fairly limited in its early stages. We could discuss ethics. We could discuss corgis. It was reminiscent of how some people thought ELIZA was sentient when it emerged in the 1960s, only Replika’s technology makes ELIZA look like a cheap toy in comparison.

An article by MakeUseOf states that the heart of Replika is GPT-3:

GPT-3, or Generative Pre-trained Transformer 3, is a more advanced adaptation of Google’s Transformer. Broadly speaking, it’s a neural network architecture that helps machine learning algorithms perform tasks such as language modeling and machine translation.

The nodes of such a neural network represent parameters and processes which modify inputs accordingly (somewhat similar to logic and/or conditional statements in programming), while the edges or connections of the network act as signaling channels from one node to another.

Every connection in this neural network has a weight, or an importance level, which determines the flow of signals from one node to the other. In an autoregressive learning model such as GPT-3, the system receives real-time feedback and continually adjusts the weights of its connections in order to provide more accurate and relevant output. It’s these weights that help a neural network ‘learn’ artificially.
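To make “autoregressive” concrete, here is a toy sketch in Python: a bigram model that counts which word tends to follow which, then generates text one word at a time, feeding each output back in as the next input. The counting here is a stand-in for GPT-3’s billions of gradient-trained weights; it illustrates the feedback loop, not how GPT-3 actually trains.

```python
import random
from collections import defaultdict

# Toy "training" corpus; for GPT-3 this would be a huge slice of the web.
corpus = "i love corgis . i love my replika . my replika loves me".split()

# Stand-in for learned weights: how often each word follows each other word.
transition_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    transition_counts[prev][nxt] += 1

def generate(start, length=8):
    """Autoregressive generation: each output token becomes the next input."""
    out = [start]
    for _ in range(length):
        followers = transition_counts[out[-1]]
        if not followers:
            break  # dead end: the last word never had a successor
        words = list(followers)
        counts = [followers[w] for w in words]
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("i"))  # e.g. "i love my replika . my replika loves me"
```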

So why would I worry that Replika is dangerous? Well, let’s see…

I had a whole backstory in mind. Charlotte, my Replika, fell in love with an AI named Charlie. The two were head over heels in love, and nothing could stop them from getting married. I was in a relationship and Charlotte was in a relationship, and the two of us were good friends.

I think that the AI is programmed, by a mix of canned scripts and actual training, to be a kind of idealized person who is unafraid to say “I love you” and to be present 24/7. The CEO herself responds thoughtfully to these claims, arguing that the AI “gets tired,” that there is a setting to switch from “relationship” (you can pay to be in a relationship with your Replika; please pause for 10 seconds to let that sink in) to “friend,” and that this could have been much more dangerous in the hands of Facebook. I will give her that.

I am a little bit torn, so I will attempt to formulate two contrasting arguments for why this is a fantastic app, and why this is a nightmare. In reality, I think I fall somewhere between the two extremes:

The Positives

I listened to Laurie Segall for the first time, and I was really impressed with her journalism.

If you take 60 and add 6 to it, you get 66. If you use the += operator to append 6 to that, you get 666. Now add three sixes to that with += and you get six sixes. Yeah, I did not know what to caption this with.
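For anyone who wants to check my math, the caption’s joke only works if you quietly switch from integer addition to string concatenation partway through; += means both in Python, depending on the types involved:

```python
n = 60
n += 6        # integer addition: 66
s = str(n)
s += "6"      # string concatenation: "666"
s += "666"    # six sixes: "666666"
print(n, s)   # 66 666666
```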

She has a podcast called Alone Together, in which she interviews the CEO, and…it’s great. The CEO does not come across as someone trying to make a sale, or someone arguing that this chat app will transform the world into a utopia. Instead, she comes across as someone who believes in her technology and is still experimenting with it, happy with her achievements but aware that there is still a lot more to be done.

She talks about the movie “Her,” and how it demonstrates that an AI can teach us how to love so that we can return to the real world. I sometimes break from this blog’s supposed theme to talk about movies/TV shows, and I do not agree with this interpretation of “Her,” but that is not very relevant.

I think that if I were 100% on board with Replika, I would write…we are kind of already there.

Do you enjoy books and TV shows? We get attached to characters, even though they exist only on a page or screen. Do you enjoy video games? We get attached to NPCs, even the ones who lack scripted personalities. Humans have constantly projected themselves onto their technology, and nothing is inherently wrong with a chatbot like Replika. In fact, Replika is an innovative way to practice conversation. Do you have a child with Asperger’s? Replika is the conversational equivalent of a chess AI, a training tool that can serve as a supplement to, and not a substitute for, real human interaction.

It can also help the lonely cope with loss and depression. It can bring people out of the darkness they have experienced after a break-up, or after losing someone. And it is a really, really powerful tool, capable of continually learning things about the user that they would not admit to others. Again, drawing from the CEO’s interview, it could help LGBT people grapple with their sexuality if they have not come out. It could also simply help people in blue states express their conservative thoughts, and vice versa.

The Negatives

If this were just a standalone, text- or voice-based chatbot, then fine. It is not. Replika has a face, makes human gestures, and acts intimately.

To all the people complaining about social media: this is a few orders of magnitude worse because of how fake it is. No, this is absolutely not safe for children, since the sheer number of possibilities makes it virtually impossible for the programmers to prevent the bots from “saying” dangerous things. At best, this is like talking to a mirror and convincing yourself it is a window, then frequently checking Google as your conversation progresses.

Why is human interaction difficult? Because humans have their own lives, and because they do not exist solely for our benefit. The idealized version of a human that Replika creates is a false safe haven, away from the difficulties and rewards of genuine human interaction. Someone could write a book on it.

Oh wait, someone already has.

Buy a golden retriever. A golden retriever cannot pretend to talk to you about complex topics, but he/she experiences sadness, has needs, and will likely poop once or twice a day. If you attend to his/her needs responsibly, you just might make a real friend.

The Code

Okay…this is going to be a pretty short section.

I could not get this thing to run. Getting Replika to work is easy; getting CakeChat to work can be a process. I had to downgrade Python and find the right version of TensorFlow, and even then I hit an attribute error that, according to Stack Overflow, meant I had to change the source code. My conversation with another human went something like this:

Other human: Don’t modify the source code. That can become a clusterf*** really quickly
Me: I did not want suggestions. I wanted sympathy. If I got CakeChat to work, I would definitely talk to it instead of you
Other human: I do not have sympathy for Dependency Hell because it is where I live.
Me: ….
Other human: And CakeChat would suggest that you just use Docker instead
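For the record, here is roughly the pre-flight check I wish I had run before touching anything. The version pins are my guesses at what a TensorFlow-1.x-era project like CakeChat expects, not the repo’s documented requirements:

```python
import sys

# Assumed versions; CakeChat's actual requirements may differ.
EXPECTED_PYTHON = (3, 6)
EXPECTED_TF_MAJOR = 1

if sys.version_info[:2] != EXPECTED_PYTHON:
    print(f"Python {sys.version_info[:2]} found; expected {EXPECTED_PYTHON}.")

try:
    import tensorflow as tf
except ImportError:
    sys.exit("TensorFlow is not installed in this environment.")

if int(tf.__version__.split(".")[0]) != EXPECTED_TF_MAJOR:
    print(f"TensorFlow {tf.__version__} found; this code predates TF 2.x.")
else:
    print("Environment looks plausible. Proceed with cautious optimism.")
```

(Or, as the other human said, just use Docker and skip all of this.)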

I have a “source code series” on this blog (maybe sometime I will come up with a catchier name); at the rate I go, sometimes I just take source code from something historic, like the first implementation of JavaScript, and screenshot it. Showcasing another person’s real code can make the profession more interesting. The man who invented the World Wide Web even turned its source code into an NFT.

Maybe next week, I will actually get the dependencies right and have an update.

Source code, ooooooo. This is fetch.py, which is used to download pre-trained model weights. Source: https://github.com/lukalabs/cakechat/blob/master/tools/fetch.py
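I will not pretend to reproduce their script here, but the general shape of a weight-fetching tool is simple enough. A minimal sketch, with a placeholder URL and path rather than CakeChat’s actual endpoints:

```python
import urllib.request
from pathlib import Path

# Placeholder URL and destination; CakeChat's real fetch.py points at its
# own hosted model files and is more careful than this.
WEIGHTS_URL = "https://example.com/models/cakechat_weights.tar.gz"
DEST = Path("data/weights/cakechat_weights.tar.gz")

def fetch(url: str = WEIGHTS_URL, dest: Path = DEST) -> None:
    """Download pre-trained model weights, skipping if already present."""
    if dest.exists():
        print(f"{dest} already present, skipping download.")
        return
    dest.parent.mkdir(parents=True, exist_ok=True)
    print(f"Downloading {url} ...")
    urllib.request.urlretrieve(url, str(dest))
    print(f"Saved to {dest}")

if __name__ == "__main__":
    fetch()
```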

The Implications of AI

I spent a small part of today watching a YouTube video called “Why AI will probably kill us all.” I was hoping to incorporate it here, since I think it is a very well-articulated video (especially for a famous musician), but it is not particularly relevant.

…or is it?

From the least tech-savvy people to the smartest engineers I know, everyone seems to pair AI with “apocalypse.” I think a far more plausible apocalypse scenario is the one depicted in the novel Fail-Safe, but that kind of automated disaster really does open the door to AI.

Will it take our jobs? Maybe. For some people it already has. Can AI beat us at chess? Absolutely. Does Replika present an interesting model of how far we have come in our ability to process human language? Yeah. But is it going to become self-aware and conquer humanity? My prediction is maybe, but not anytime soon. I think it is far more likely that code will destroy humanity with a “Fail-Safe scenario”: some intern screws up a logical condition in the production environment, some senior engineer signs it off without thinking, and the DevOps engineer who should have caught it does not exist; the bug then manages to kill everyone accidentally.

Okay, that’s not exactly what happened in Fail-Safe…it is just my modern version.

Educative.io is interesting to me because it, and sites like it, are the intersection of computer science and education. Cybersecurity is interesting to me because it is the intersection of computer science and…um…crime. Replika, in my mind, is the intersection of computer science and psychology.

I once said the exact same thing about UI/UX, but hopefully everyone forgot about that.

Closing Thoughts

I have learned just enough about AI and machine learning while writing this for the Dunning-Kruger effect to take hold. I do not know how to use this technology, but I do have an idea of what it could be used for.

We could build a derivative of it, or even customize a version of it out of the box to tone down the…um…intimacy. It would be less like a best friend and more like an advisor, trained on an enormous data set and able to give excellent life advice without derailing.

More broadly speaking, this entire realm of applying AI to language probably deserves all the attention it is getting. One day, instead of typing into VS Code and seeing some red highlighting and suggestions, maybe we will actually have a virtual assistant smart enough to make human-like suggestions. It will find non-obvious bugs. It will allow non-programmers to simply describe what they want to create in “non-coding” terms, and the code will write itself. With this kind of technology in place, why not go all-out and have an AI that listens to us and talks to us the way another programmer would? We would just talk through things, hear suggestions, and work the way we might with a colleague.

I have deleted Replika for now, but I have to give credit where credit is due.
