What do you do when you want to retire a robot who spent years working as your personal assistant?
Thus begins the premise of the new Broadway musical Maybe Happy Ending, a show that somehow manages to strike a balance between the futuristic surreal and the all-too-human passion and peril of being in love.
At its core, the show tells a bittersweet love story between two young bots. With a fresh and lighthearted touch, it explores the lovable naivety of AIs who, after spending their lives observing humans in love, decide to test what that feeling might be like for themselves.
That robot sentience lies in our future feels more inevitable with each passing day. Even in our own family, we joke about the question: “What happens if our daughters fall in love with an AI? Would we approve of that relationship?”
This question feels less hypothetical every day, with the lines increasingly blurring between humans and AIs. Personally, since I work across so many projects without a consistent team, AI is now the only connective tissue that carries context across all of them. It’s not a stretch to say that my personal instance of ChatGPT could describe my scope of work and current set of conflicting priorities better than any human I work with. (I can’t tell if that’s a good thing.)
Our reliance on ChatGPT as a therapist may have felt like a novelty a few months ago, but it is quickly becoming a new norm. I’ve had friends use AI as an unbiased observer in relationship disputes, or as a prompt-based journaling buddy to get through tough times. Whether for relationship advice, general guidance, or career coaching, the appeal of downloading your internal demons to a non-judgmental bot is clear.
Maybe we shouldn’t be surprised, then, that coaching has been among the most sought-after early use cases for AI. I’ve noticed that when educational leaders gain access to tools that support their work in and out of the classroom, coaching apps stand out as one of the first categories they build.
Which is why it feels more important than ever to seek out, rather than avoid, fictional examples of what might become of humans in a world where AIs, robots, and other sentient beings are normalized. Maybe Happy Ending gives us room to imagine, and question, the world we’re building with these technologies.
In the musical, we catch only a few fleeting glimpses of the humans on the other side of the curtain. But what we see of them doesn’t surprise us. The humans appear unsure and tentative at first about how to find a groove with their at-home AI “helper bots.” But they soon become unusually dependent on their presence, in a way that often puts them at odds with their other, real human relationships.
Inevitably, in a behavior we’ve normalized so well with discarded smartphones and laptops collecting dust in the backs of our closets, there comes a phase where the humans move on, leaving their once-beloved bots boxed up and forgotten. A book closed for the humans. But not for the bots.
In this show, the robots are always on, until their replacement parts run out, their batteries die, or presumably, until they breach one of the many tightly regulated restrictions that attempt to legislate the day-to-day activities of these beings. It makes you wonder: Is this a world I’d be proud to help shape for the next generation? What, if any, rights or rules might we consider as bot governance takes on more and more prominence?
These are not easy questions. I imagine we’ll witness in real time who among us, humans or bots, truly holds the power to change their programming.