Do you even playtest?

TL;DR: This morning I realized that playtesting, traditionally a game development tool, holds untapped value for software development. It can not only improve user experience but also uncover new use cases, leading to a more refined and versatile product, plus a wealth of examples for telling the broader vision of where we could go once we are fully open and permissionless. Sometimes, stepping back and letting users 'play' with your software is all you need, I guess.

Full Story

A long long time ago in a browser tab far away from this one (actually yesterday), I had the joy of participating in a user feedback session for a new generative AI tool. Usually I'm on the other side of these sessions, so I thought it might be worth sharing my learnings. The tool was a generative coding tool I had never used before. I had a rough idea of what it does, but I deliberately approached it with the mindset of not knowing what it truly does. This session was my first in-depth interaction with the tool.

My first observation was that this user testing session was actually a really good way to onboard someone to something new. It felt like I had a companion with me who could nudge me further, which made me think quite a bit about how this kind of personal onboarding and research is something I'll do a lot more of, and something we should all do more of when we work on novel paradigms.

One perk I got out of it was a pile of credits to go wild with the tool later on my own, and this blog post is really about what happened next. As usual, going for a run and taking a shower afterwards is my way of having a conversation with myself, and today we explored what to do with those credits. I wanted to burn them by breaking what they built.

Not in a mean way; I like new tools, and one of my favorite ways to play around is by stress testing them with dumb ideas I have. So I did exactly that this morning.

It suddenly struck me that there's a significant aspect often overlooked in our software development approach, which reminded me of the days when I was working more closely with triple-A game design studios: the importance of playtesting.

Playtesting, a common practice in game development, involves users trying out the game to find bugs or design flaws before the final release. But it's not only about bugs and design fixes; it's about something far more fundamental. It's about learning how people play, how they use it, and what makes it fun. The way they play usually lands somewhere between what we, as builders, intended and a complete misunderstanding. But those misunderstandings are often the most inspiring part, as players come up with much more interesting ways things can turn out.

Observing this is key to making a good game, either because you want to prevent it or because this is actually where the fun lies. So playtesting is not just a late-stage fine-tuning step; it's a regular part of every stage. If it isn't fun, why even do it? This matters even more in sandbox games: you want to see how people create, what they create, and how they completely abuse what you built.

A lot of the tools we're building today that involve generative tooling are de facto sandboxes. They invite players to play, to imagine, to hack, to mod, to alter. The thing is, though, as a builder I know from my own experience that we often end up with some form of tunnel vision, where we don't see beyond our initial use cases. This is not to criticize the teams; I am guilty too. We are so deep in we can't unsee the things we've done. So playtesting should maybe be standard for us too.

The insight here is quite straightforward: involving users in the testing process can lead to the discovery of unexpected use cases and potential improvements. This kind of engagement goes beyond conventional testing; it's about inviting users to push the software to its limits, to 'play' with it in ways we as builders and developers might not have anticipated. This approach can reveal invaluable insights into the product's versatility and user experience, offering a clearer direction for its development.

This also connects to some of the things we create in web3. If we build modularly, we need people to truly mess around with it before we even launch, to misunderstand our pure intentions and take them to the next level. Those playtesters don't just break our things; they expand our playfield. They might come up with far better use cases than we can envision through our tunnel vision, use cases we then happily put on our landing pages. If we learn early on how people use it, we also have more fodder to inspire builders at launch: we already have a sense of, and can anticipate, what people might do, and we can even nudge them by sharing some of the early ways others used it.

So my engagement this morning with the AI tool, fueled by the initial user research session (which doubled as a great onboarding), turned into an exploratory playtest of sorts. I found myself not just using the software but testing its boundaries, inadvertently finding new ways it could be used and areas where it could be enhanced. And obviously, I spammed my friend live with real-time feedback and ideas.

This personal observation has led me to believe that software development could benefit greatly from integrating playtesting into its processes. But, as in game design, I think it's a role, not just a side gig. It needs to be monitored, explored, captured, and synthesized into actionable items for the rest of the team to either ignore or build upon. Yet I have not seen a software team working on modular, interoperable software or generative AI tooling that leans hard into playtesting with its own top-secret group of hobby hackers.

This is not a call to make testing more rigorous (although you probably should, especially in web3; we do not do enough actual user feedback and design crit). It's a call to explore: to have a dedicated team of mercenaries hired to imagine along with you. By doing so, builders and developers gain insights that are not always evident from within the development bubble. And if you set it up well, you end up with a ton of examples to further fuel your VC deck with what your tool unlocks, or a ton of material for teams to latch onto as they figure out how your tool or infrastructure can be used.

Content: Ordered, Structured, and kindly massaged by ChatGPT
