
So, maybe 'prompt engineering' is for real?

Integrating Large Language Models (LLMs) into software infrastructure isn't just another step in technological evolution—it's a paradigm shift in how we build and interact with code. I've always relied on the predictable nature of traditional programming: input A always leads to output B. But now, as I experiment with LLM-powered applications, I'm grappling with a new reality where probabilistic outputs and natural language prompts are becoming integral parts of the development process. This shift from deterministic code to probabilistic AI is not just changing how we write software—it's reshaping the very way we conceptualize and construct software systems. Also, it's breaking my brain.

I have a computer science undergrad. I learned to write C in college. I worked as a software engineer for a few years after school, and I've worked in the software space my whole adult life. Over the years I've built a lot of things, personally and as part of a team. Most recently I've been experimenting a lot with LLMs, first to build software again and now to do some writing. Coding with the assistance of LLMs has been incredible. Writing with LLM assistance hasn't felt like the same order-of-magnitude improvement, but it has still been transformative. My next project involves building on top of LLMs, and while I expected the leap in capability around natural language interaction, I wasn't prepared for the implications for my software stack and how it's tuned.

The first project I'm undertaking uses LLMs as part of the infrastructure to synthesize natural language communications and perform transformations on them. I have built software before and I understand that process. One of the comforting things about code is that given the same inputs, it will always generate the same outputs. If there's a bug in the code, in an odd way, you can count on that bug. If you understand the circumstances that trigger it, you can reproduce it consistently.

Working with an LLM means putting a probabilistic black box into the mix, one that enables natural language processing that is otherwise unparalleled. Building that piece any other way would be, for a small team, essentially impossible. But now we have these APIs, and it's blowing my mind a little as I try to operationalize them for the first time. The prompt is both code and configuration. The benefit of prompting is that we can use natural language to interact with and direct the LLM. It's easier. The challenge is that natural language is so imprecise.
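
To make that concrete, here's roughly what "prompt as configuration" looks like in practice. This is a minimal sketch in Python against the OpenAI chat completions API; the model name, prompt text, and temperature are placeholder values for illustration, not my actual setup:

```python
from openai import OpenAI

# The prompt is configuration: a versioned template whose "settings" are
# sentences of natural language instead of flags or constants.
PROMPT_VERSION = "v3"
SUMMARIZE_PROMPT = (
    "You are an assistant that summarizes customer emails. "
    "Reply with exactly three bullet points, each under 20 words."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize(email_text: str) -> str:
    """Run one natural-language transformation on one input."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        temperature=0.2,      # lower temperature narrows, but does not
                              # eliminate, the spread of possible outputs
        messages=[
            {"role": "system", "content": SUMMARIZE_PROMPT},
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content
```

The uncomfortable part is that rewording one sentence of SUMMARIZE_PROMPT can change behavior as much as a code change would, except there's no compiler or type checker to tell me what I just broke.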

I'm early yet, so I don't have takeaways or recommendations, but it's hard not to wonder about how this progresses. In the early ChatGPT days I thought "prompt engineering" was too cute by half. Now, as I continue to fine-tune the proper incantation that produces my desired outputs, I have to think maybe there's something to it.

I'm just getting started but as I dive deeper into LLM-integrated development, I'm realizing this isn't just a new tool—it's a fundamentally different approach to building software. The shift from predictable code to probabilistic outputs is forcing me to rethink how I design, test, and debug systems. Honestly, I have far more empathy now for all of the widely publicized missteps of the big players in operationalizing these technologies. While the potential is exciting, the challenges are significant. We'll need to develop new methods for ensuring reliability and consistency in AI-augmented systems. It's still early days both for me and this tech. But one thing's clear: the skills that make a good builder are evolving, and we'll all need to adapt.
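
For what it's worth, one direction I'm poking at for that reliability problem is the boring one: treat the model like any other unreliable dependency and wrap it in validation and retries. Here's a rough sketch, continuing the hypothetical summarize() example from earlier; the three-bullet check is just a stand-in for whatever structure your prompt actually asks for:

```python
def summarize_with_checks(email_text: str, max_attempts: int = 3) -> str:
    """Retry until the output matches the shape the prompt asked for."""
    for _ in range(max_attempts):
        result = summarize(email_text)
        bullets = [
            line for line in result.splitlines()
            if line.strip().startswith(("-", "*", "•"))
        ]
        # The prompt asked for exactly three bullet points; treat anything
        # else as a failed call and try again.
        if len(bullets) == 3:
            return result
    raise ValueError(f"no valid summary after {max_attempts} attempts")
```

It's crude, but it's the first time I've written a retry loop around something whose "bug" is that it occasionally just answers differently.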

#ai #prompting