Yesterday at EdTech Week in New York City, I co-led a workshop with the Decoded Futures team called "AI-Driven Transformation: How Educators and Nonprofits are Reimagining Impact in NYC."
Our goal was to share key insights from our work, particularly around what is helping or hindering nonprofit adoption of AI in education and workforce development. Through our research and early partnerships, we've been exploring a central question: What problems are best suited for AI?
What I realized early on was that the main obstacle wasn't understanding AI's technical capabilities (though, to be sure, there is a lot to learn there). Instead, the real challenge was figuring out how to spark people's creativity, curiosity, and problem-solving skills. Without that foundation, it's hard for anyone to imagine how AI could be used to solve problems in new and unconventional ways.
This raises the question: Can creative problem-solving with AI be taught? And if so, how?
What's an AI-shaped problem?
Over the past 6 months, I've noticed that most people tend to view AI in extremes—either that it replaces their entire job, or that it shouldn't be used at all. But the reality is much more nuanced.
One thing I've learned from many tech entrepreneurs I admire is that you can't fully grasp the potential of any new technology without using it firsthand. That's why I've been experimenting with new AI tools and workflows every week, and I've been using this blog to show my work. These experiments have sharpened my sense of what kinds of problems AI is really good at solving: often small, repetitive tasks that build into something larger.
For example, I’ve used AI to:
Achieve dramatic productivity gains, like writing an ebook in a week, then turning it into a paid online course the next day
"Remix" or reframe my own content, like turning personal stories into a comic, or converting blog posts into interactive quizzes and worksheets
Synthesize and communicate technical concepts outside my own domain expertise, such as when I spent 8 hours in the ER or when I worked on a team made up exclusively of open source systems engineers
Offload repetitive tasks, like designing my family's vacation, planning weeknight meals, or building proposals for clients
As you can tell, these are small, incremental problems; none is "earth-shattering" on its own. But over time, they've added up in a significant way for me. The process has also helped me tune my radar for when to bring AI into a workflow. What I've learned is that no problem is too small for AI to help with.
This gradual layering of small AI wins has enabled me to shift from improving my own workflow to tackling larger, team-oriented problems—a process we’re now experimenting with teaching to others.
The progression of AI-shaped problems
At last week's inaugural Decoded Futures workshop, we worked with Playlab, a nonprofit building public AI infrastructure for teaching and learning, to introduce this problem-solving framing.
One thing we’re testing is how to help people "tune their radar" for AI opportunities in their workflows. Here's a slide we used to help define an "AI-shaped problem" at both events:
If you're a startup founder or entrepreneur, you'll probably notice that this resembles how you might scope a minimum viable product (MVP).
Similarly, we are finding that it really helps to first see the end-to-end cycle of a small problem that you can solve with AI before you layer on complexity, additional elements, or multiple stakeholders.
For this reason, we've started to encourage folks to build first for themselves, then for their teams, and finally for either their organization or their external users. Here are some of the questions and prompts we are encouraging people to consider as they progress from individual problems to system-wide problems.
Put another way:
You can't build for your end users until you know how to build something for your team.
You can't build for your team until you know how to build something for yourself.
So let's start by identifying and solving a personal workflow problem using AI.
Technical challenges or adaptive challenges?
During a keynote session at EdTech Week, SUNY Chancellor John King spoke about how people often confuse technical challenges with adaptive challenges, drawing on the book "Leadership Without Easy Answers."
This idea resonated deeply with me in thinking about how we approach AI adoption. When we ask people about their AI use, I notice that most conversations focus on technical questions.
Common Early Questions About AI
"How do we learn more about AI?"
"What are the right tools to use?"
"What policies or resources already exist out there?"
"What are other organizations doing with AI?"
"What kind of team or internal resourcing do we need?"
While these are important, they miss a crucial piece of the puzzle. AI adoption isn't just a technical problem to solve—it's an adaptive challenge. It’s about rethinking how we work and embracing a mindset that lets us integrate AI into our problem-solving processes.
What's nice about an adaptive mindset is that it doesn't just help us get AI-ready. It helps us prepare for any change, whether it's the latest tool, a cutting-edge development, a new policy, or an emerging ethical concern. So, when the next shiny new thing lands in our laps, instead of dropping everything to pivot in that direction, we can take a moment to pause, reflect, come back to basics, and simply ask: "What is a problem we're trying to solve? (And how might AI help?)"
And if you still need a little help shaping that prompt for your team, start by asking yourself that question. You might be surprised to learn that no problem is too small.