We all know that friend who isn’t afraid to tell it like it is—the one who’s brutally honest with zero sugarcoating, the one who can really roast you. But here’s the question: could an AI pull off the same thing?
I've noticed that in recent months a few AI apps have had viral moments with people voluntarily dumping in information about themselves and letting the algorithms destroy them. All in good fun, of course.
The first one I saw was about six months ago, when the AI app builder Glif created a "Wojak Meme" generator. The prompt was simple: upload a photo and a link to your Twitter profile. The AI would then crawl your tweets and draw out some biting observations based on wildly exaggerated stereotypes.
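Glif hasn't published what's actually under the hood, but the roast step is probably just a language-model call over your recent posts. Here's a minimal sketch of what that kind of pipeline could look like, assuming the OpenAI Python SDK, a "gpt-4o" model, and tweets you've already pulled as plain strings; the prompt wording and names are my own guesses, not Glif's.

```python
# Sketch of a roast pipeline in the spirit of the Wojak builder.
# Assumptions: the OpenAI Python SDK, a "gpt-4o" model, and a list of the
# user's recent tweets as plain strings. Glif's actual prompt and stack
# are not public; this only shows the general shape of the idea.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def roast_from_tweets(tweets: list[str], handle: str) -> str:
    """Turn a handful of tweets into exaggerated, meme-style roast captions."""
    prompt = (
        f"Here are recent tweets from @{handle}:\n\n"
        + "\n".join(f"- {t}" for t in tweets)
        + "\n\nWrite 4 short, brutally honest roast captions in the style of "
        "an exaggerated internet stereotype. Punch at habits and posturing, "
        "not protected traits. Keep each caption under 15 words."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.9,  # more randomness tends to mean spicier captions
    )
    return response.choices[0].message.content


# Example usage (hypothetical handle and tweet):
# print(roast_from_tweets(["Just rescued a great picture frame from the trash!"], "my_handle"))
```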
I just ran mine again today, but instead of my Twitter feed, I fed it my resume. Here's what it came up with:
(Cuts deep. But also, kind of real talk?)
This Glif micro-app went viral, with thousands of tech folks posting roast memes of themselves on Twitter. (You can see the full list of app runs, and try yours here.)
Seeing how people embraced these brutal yet entertaining insights got me wondering—why are we drawn to these public self-roasts?
Then, a few months later, a friend of mine who runs a popular TikTok channel for teenagers told me they had recently run a livestream of an AI character that publicly roasted anyone who left a comment on the video stream. The kids couldn't get enough of it: they kept joining, kept sharing comments, all with the glee of seeing what the AI would say about them next.
It was a top stream on TikTok for about 15 minutes.
Another month or two went by, and another roast-generation app went mini-viral, this one also based on your Twitter feed (via Wordware).
Naturally, I just had to give it a try. You can see the result on the left.
Here's the line that killed me: "You claim to be in 'emerging tech,' but your most groundbreaking discovery seems to be finding a picture frame in the trash."
Um. Ouch. I laughed for five minutes straight. Who would have thought that a programmatically designed generative algorithm could puncture your bubble of self-perception in seconds?
Like the other two examples (the live TikTok stream and the Glif app), the genius here was that the content was created to be shared.
Wordware shared a bit more on their part of this process here.
That we not only tolerate these deep cut moments and savage burns—but in fact embrace them, then share them publicly—reveals a lot about human psychology and behavior.
Learning Through Roasts
I've been thinking a lot about the power of the roast as a learning opportunity.
If we have the resilience and grit to handle a little bit of no-holds-barred public feedback on social media, could we tap into this power in other learning environments? What if, instead of receiving carefully worded critiques, a student got a “deep cut” roast on their college essay? Could an AI-generated roast serve as a humorous, third-party “NPC” (non-player character) in an evaluation process, providing the kind of blunt feedback that teachers might shy away from?
This idea reminds me of how we use humor to teach and challenge perspectives. Take The Book of Mormon as an example. The show’s creators, also known for South Park, knew they couldn’t just throw punches at an entire religion on Broadway without alienating the audience. So they introduced Elder Cunningham (Josh Gad) as the chaotic, outlandish foil to Elder Price (Andrew Rannells), the straight-laced moral anchor.
This way, it wasn’t the show itself making wild, inappropriate comments—it was Elder Cunningham, the character. Through his outrageous remarks, we all got a few laughs in, and a glimpse at some uncomfortable truths. But just when we really started squirming in our seats, Elder Price was there to bring us all back with a gentle, but firm reminder of morals and ethics. In other words: The chaos agent broke down walls, and the straight character restored a sense of balance.
After I re-ran my Glif Wojak Meme generator with my updated resume, I turned to ChatGPT and fed it both the resulting image and my resume, asking not only for its take on the biting comments but also for how I might respond to some of that criticism in a real hiring process.
Here's what it came up with:
Not bad.
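If you'd rather script that step than paste things into ChatGPT, here's a rough sketch of the same two-input call, assuming the OpenAI Python SDK, a "gpt-4o" model, the roast meme saved locally as meme.png, and the resume already extracted to plain text. The file names and prompt are hypothetical.

```python
# Sketch of feeding both the roast image and the resume to a model and
# asking how to respond to the criticism in a real hiring process.
# Assumptions: OpenAI Python SDK, "gpt-4o", local files meme.png and resume.txt.
import base64

from openai import OpenAI

client = OpenAI()

with open("meme.png", "rb") as f:
    meme_b64 = base64.b64encode(f.read()).decode()

with open("resume.txt") as f:
    resume_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": (
                    "Here is an AI-generated roast of me (image) and my resume "
                    "(text below). Which of the roast's criticisms have some truth "
                    "to them, and how could I address each one honestly in a real "
                    "hiring process?\n\n" + resume_text
                ),
            },
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{meme_b64}"},
            },
        ],
    }],
)
print(response.choices[0].message.content)
```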
That's the thing about a good roast. It might sting at the time. You might get a little pissed off, but then, when you really think about it, you realize there was some truth to the direct real talk. And in fact, that kind of thing is really helpful.
I wonder... rather than using AI as the obedient rule-follower in evaluations, grades, or critiques, what if we flipped the roles? Make the AI the chaos agent, the one who says the thing that no teacher, boss, or mentor would ever dare to say directly. In other words: make the human the good guy (or girl).
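To make that concrete, here's a small thought-experiment sketch of the flipped arrangement, again assuming the OpenAI Python SDK and a "gpt-4o" model; the persona prompt is entirely my own invention, not a real product. The AI plays the blunt NPC, and the human reviewer stays the one who restores balance.

```python
# Thought-experiment sketch: the AI is the no-holds-barred "chaos agent,"
# and the human mentor remains the good guy who delivers the balanced
# follow-up. Model name and persona prompt are assumptions.
from openai import OpenAI

client = OpenAI()

CHAOS_AGENT = (
    "You are the blunt NPC in a feedback process. Roast the submitted work: "
    "name the weak arguments, the cliches, and the padding, with humor and "
    "zero sugarcoating. Do not soften anything; a human mentor will handle "
    "the encouragement and the action plan afterward."
)


def chaos_agent_review(submission: str) -> str:
    """Return an unvarnished roast of a student's or candidate's submission."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": CHAOS_AGENT},
            {"role": "user", "content": submission},
        ],
    )
    return response.choices[0].message.content


# The human then reads the roast alongside the student or candidate and
# plays Elder Price: pointing to what's worth keeping and turning the
# sting into next steps.
```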