
AMA: Getting Roasted by College Students on Responsible AI

Some reflections on (and updated answers to) an excellent Q&A session during a 2-hour new media lab that I taught at my alma mater yesterday

Bringing AI into my old classroom

Yesterday I went back to my alma mater to teach a 2-hour interactive lab with a mix of journalism, design, and engineering students on how to conceptualize and deploy a custom agent with ChatGPT.

Journalism school, where old meets new (Image source: Gamma.app)

One of my goals for the session was to illustrate how powerful it can be to use AI to augment your personal and professional workflows. Embracing the "show, don't tell" approach, I cracked open my personal ChatGPT instance to demo a collection of micro-apps that I developed, such as a “proposal builder” for scoping client work and a “text message newsletter” that curates and remixes messages from my building’s WhatsApp thread.

Like a lot of groups I visit when talking about AI, this one showed a healthy mix of optimism and skepticism about the promise and pitfalls of AI. But given the journalism lens of the classroom, I fielded much harder-hitting questions than usual, which impressed me.

Some I answered well in the moment; for others, I thought of better answers later on. Given how much I write about AI innovation on this blog, I decided to paraphrase and re-answer some of the questions that stood out the most from yesterday’s course. (Someone please fact-check me on this, to make sure I haven't crossed into Medill F territory!)


Q1: “Aren’t you worried that someone will come along and start marketing their own proposals and content under your own name?”

(OK, getting into ownership and IP issues right out of the gate, starting with the custom GPT proposal builder that I showed off. Let's go.)

What I said: Not really. The things I shared were all my personal private GPTs, which means nobody else can access them. But you’re right that someone could copy all of the content on my blog and start a rival blog in my tone and style, or pretend to be Bethany Crystal on another content channel. This is why I am excited for blockchain technology (which provides an on-chain signature) to intersect with AI as a way to prove provenance.

What I should have added: The fact that my entire blog is on paragraph.xyz, an onchain publishing platform, means we are already taking steps in that direction.


Q2: "Aren't you worried that you're going to lose sight of the knowledge you already have by cutting corners in getting work done in quicker, streamlined ways where you aren't fully immersed as a subject matter expert?"

(A thoughtful take on the "Is AI cheating?" question.)

What I said: Yes. And also no. I have noticed a growing dependency on using AI to automate certain workflows for myself (like writing). But on the other hand, I've already put in the time to know what good looks like, and I hold myself accountable to the same standard of content output as if I had not used AI.

What I should have said: That I see my use of AI in two buckets:

  1. Learning: Using AI to acquire skills I don't yet possess.

  2. Streamlining: Leveraging AI to enhance tasks where I'm already an expert.

    Fifteen years into my career, I have a deep personal content library in areas where I'm a hyper-specific expert. If I were still in school, I'd likely use AI more as a learning tool, or as a way to build a foundation for my growing expertise down the road.


Q3: “Did you ask your neighbors’ permission to remix their text message conversations into a newsletter?”

(Excellent point. I had created a text-based micro-newsletter for my neighbors with AI, based solely on a group chat.)

What I said: I told my neighbors I was doing it, but I didn’t ask for explicit buy-in upfront. To me, the information shared felt innocuous enough (no personally identifiable information) to run a few context remixes. There’s also a future, beyond this prototype, where neighbors might “invite” a text message AI agent into their thread to do this work on their own.

What I should have said: You’re right; I didn’t follow traditional journalism protocols and perhaps should have. With new technologies, some of my work pushes boundaries, which might lead to overstepping. This underscores the need for more people to engage with this work and raise important questions like these.

Another conversation topic: how DALL-E-generated images are not always what they're chalked up to be (image source: DALL-E)

Q4: “Aren’t you worried about displacing your local newspaper the West Side Rag by creating these neighbor newsletters with AI? Isn’t that a bad thing?”

(Another excellent question. Clearly a fellow New Yorker is in the house...)

What I said: Not really? The details of my text message newsletter are so hyper-local (e.g., someone new moving into our building, or the result of the 311 call the day we lost water) that even a local neighborhood paper wouldn't see a benefit in covering them. If anything, I might be surfacing more opportunities for new types of journalism jobs (e.g., curators) in other places.

What I should have added: I've observed that people occasionally share articles from the West Side Rag, or from other publications, directly in our chat thread. A scaled-up version of this prototype might invite more active cross-pollination of curated content, which could actually increase visibility for the articles and publications that matter most to our community.


Q5: "Hi, so during class I've been opening up your blog and reading some of your posts, like the one where you talk about how everyone is a developer, or the one where you designed a comic book yourself. Aren't you worried that by proliferating this much content you are taking away from the true craft of the human touch of actual legitimate content creators and reducing human creativity?"

(OK first off, GOOD RESEARCH. Should have seen it coming that undergrads with laptops out would be Googling me in real time.)

My prompt for this was: Muckraking journalist Ida Tarbell meets a robot (image source: Gamma.app)

What I said: My use of AI depends on where I want to differentiate myself. I'm not a developer or a designer. I'm a writer. I'm long-winded, I overshare, and I publish a lot of content, but I'm not aiming to displace real artists. In the case of the comic book example, I just wanted to share the story of how I kickstarted a block association, but I knew that a traditional narrative structure wouldn't engage most readers. I decided to publish it as a comic book because I asked myself: What's the best way to tell this story?

What I could have added: I stand by my original answer, but it's a great question. One additional reframe to consider: in startups, we often debate the question of "buy vs. build" (i.e., do you buy existing software or build something custom in-house?). I've noticed many founders choose to keep core functions in-house (for the parts of their business they consider most defensible) while using off-the-shelf software for non-essential areas. Maybe this is true in AI, too.


Q6: "You have little kids, right? Do you let them use AI?"

(I see what you did there. And well played.)

What I said: I use AI a lot to design experiences around my kids, like when I'm at a museum and I want to reword the content on museum displays so a four-year-old can understand it. The weekend ChatGPT launched coincided with potty training my two-year-old. Despite numerous attempts and Sesame Street on repeat, she wouldn't transition from her plastic potty to the bathroom’s "big kid" potty. On a whim, I asked ChatGPT to draft a letter from Elmo with potty training tips. And it worked. So, do I use AI with my kids? Yes. Would I trust an AI-powered Elmo doll for my four-year-old? Probably not yet.

What I should have added: This is tricky because AI has a broad definition and there's a fine line between using AI to assist my child and letting them use an iPad alone. Whether I like it or not, my kids will grow up AI-native, and I'm striving to navigate this world with them by setting appropriate boundaries.


Here's the image that ChatGPT and I came up with together. It's not perfect, but it conveys the point I'm trying to make well enough for now. (image source: DALL-E)

Leaning into the new-age journalistic mindset

One of the reasons I was so excited to go back to campus and collaborate with students is because I believe that people with broad, generalist skill sets are uniquely positioned to tackle the challenges and opportunities of AI adoption. This new-age journalistic mindset—asking tough, nuanced questions without obvious answers—might be exactly what we need right now.

Yes, deep technologists will continue to develop the underlying frameworks for tools like large language models. But as the barrier to entry for coding and app development nears zero, it’s anyone’s game to invent new applications and solutions by crashing together diverse ideas. And to make sure we're stress-testing the rules, parameters, and boundaries in ethical ways.

Yesterday’s class wrapped up with live demos of several student-driven projects, all conceptualized and deployed within a single session. All of these apps pertained uniquely to the real-time student experience on campus today. They posed questions like: "What should I wear?" "Where should I eat?" and "How should I study?"

If these MVPs (and these important questions) can come out of just one two-hour session, imagine what’s possible when more of us are iterating, ideating, and tinkering like this every day.

#ai #journalism #technology #learning