AI Learning Summit

This post was originally published on the USV blog and can be found here.

Last week USV and Reach Capital co-hosted the AI Learning Summit. We welcomed over 75 builders, founders, educators, students, and policy makers into our library for a day of demoing fresh projects and discussing the ways in which learning and AI will intersect.

This dialogue was a continuation of one USV has been having for years about the transformation of education, and we think recent advancements in AI will unlock fresh opportunities to drive value and broaden access to learning.

As we hoped, the event raised more questions than answers, so we thought we would lay some of these questions out here in hopes of spurring further dialogue around them.

Should students show their work?
While demoing Pressto, a platform teaching writing composition to students, Danny Stedman and Kieran Sobel pointed out that hand-held calculators first faced scrutiny when brought to the mainstream market in the 1970s. Educators feared calculators would harm math instruction. Today history would show that calculators were a step change in enabling students to engage with advanced mathematics. After some period of ambiguity over how they should be used, the biggest difference in education post-calculator was a focus on showing the work. More white space appeared on worksheets and the expectation grew around students not just providing answers, but showing the steps to get there.

It remains to be seen whether calculators are a fair analogy for AI. If so, it would suggest that students, in the same way that they are required to show their work in the white space of math tests, may also be required to show a record of their interactions with AI in producing an assignment. We are curious what a record like this could look like and how it would engender trust with educators.

What will differentiation look like in AI learning?
Many of the builders we have met are building learning tools on top of existing LLMs like GPT-4. We question whether market power is more likely to sit with the infrastructure layer or the application layer of AI, and therefore what differentiation in AI-based learning will look like.

Given the obvious preference for chat-based learning tools, we wonder whether user experience will be one of the greatest opportunities for differentiation. Duolingo stands out for its gamification and Quizlet for its quick and simple flashcard format. As the space continues to develop, we are looking forward to seeing what new innovations await in making learning more delightful. It makes us think about a return to USV’s thesis 1.0, first tweeted by Brad in 2011: “invest in large networks of engaged users, differentiated by user experience, and defensible through network effects.”

If making mistakes is key to learning, does the same stand for AI hallucinations?
Educators are rightfully concerned that AI hallucinations could be harmful to students. For example, ChatGPT has been shown to occasionally hallucinate false information when prompted about a minor historical event with relatively sparse coverage. If a model does not fully understand the question asked, it risks inventing false material, which is then presented as fact to students. One might counter that, in the same way that we don't have 100% accurate teachers, we should extend a similar grace to models. We question whether we'll have to accept at least some hallucinations if we want to accept AI-based learning. For now, hallucinations seem to provide a great learning opportunity in and of themselves.

What role does emotion play in learning?
During the demo of Amira Learning, a tool that teaches elementary students how to read, Fred raised a question that can be paraphrased as: “In what ways is it helpful for an AI tutor to have more patience than a parent in teaching a student to read, and in what ways could this patience have unintended consequences?” This question gets at a larger point that the relationship between a teacher and a student is an intimate experience.

Students are not just learning math or English, but how to be a human in the world. As a result, we must approach learning as an emotion-laden process and apply AI accordingly.

Can AI-integrated learning exist without a paywall?
USV is particularly excited about companies working to bring down the cost of education. However, a recurring comment during our Summit was the costliness of running LLMs today. This suggests that AI learning tools may require at least partial paywalls to be offered sustainably. We are at an interesting moment where learning is getting far more accessible through scalable technology, but the cost of delivering it this way is, at least temporarily, significant.

Despite the paywalls, these tools are a huge advancement compared to the cost of personal tutors today. However, paywalls still render these tools inaccessible to a significant number of students. How can we create broad access to them so that we do not end up with more inequities in education than we have already?


Please get in touch if you are building in this space, have thoughts on these questions, or are asking questions we have not yet thought of.
