LLM Chat, the New Command Line?
A blinking cursor waiting for your input. You type, you hit enter, and magic happens. This simple chat window has taken the world by storm. We're all interacting with Large Language Models this way, and it's undeniably powerful. But lately, I've had this nagging feeling of déjà vu.
The way we interact with LLMs today feels like a modern version of the Command-Line Interface (CLI) era, before GUIs made computers easy for everyone.
As a kid, I barely scratched the surface of MS-DOS. No icons, no mouse clicks to guide you. Just you, a keyboard, and the prompt:
C:\>
Operating those systems meant mastering a specific vocabulary of commands. My monitor was covered in sticky notes for the essential ones. The slightest typo or an incorrect option, and you'd be greeted by an error at best.
Now, fast forward to today's LLM chat windows. On the surface, it's the complete opposite. We use natural language, typos are often gracefully handled, and we can be vague and conversational. The barrier to entry for a basic interaction is incredibly low. Anyone can ask, "What's the weather like?" or "Tell me a joke." This is a world away from the rigid syntax of the CLI, and that's a huge leap in accessibility.
But here’s where the analogy starts to click for me.
While getting something from an LLM is easy, getting exactly what you want, consistently, reliably, and with nuance, is a different game altogether. Suddenly, we're not just chatting; we're crafting. We're iterating. We're learning the "magic words." This is the rise of the new "super muscle": Prompt Engineering.
Just think about the difference between these two prompts.
Write a story about a cat.
and
Write a 500-word short story in the style of Neil Gaiman, focusing on a cynical, street-wise ginger tabby cat in a rain-soaked, neo-noir city. The cat, named 'Spike,' should be an unwilling detective in a case involving a missing mechanical songbird. Emphasize atmospheric descriptions and Spike's internal monologue. The tone should be melancholic with flashes of dark humor. Conclude with a bittersweet, unresolved ending.
The first prompt might give you a simple, perhaps charming, children's story. The second is an instruction set. It's guiding the LLM, constraining it, defining roles, and specifying format and tone. It's like learning command-line flags and piping to achieve a very specific outcome. The LLM power user knows how to layer context, provide few-shot examples, or demand JSON as output.
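To make that last point concrete, here's a minimal sketch of what "demanding JSON as output" can look like once you move from the chat window to an API call. It assumes the OpenAI Python SDK; the model name and the story fields are illustrative choices of mine, not anything the interface requires.

from openai import OpenAI

# Illustrative sketch: asking for structured JSON instead of free-form prose.
# Model name and JSON keys are assumptions made for this example.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a noir fiction assistant. Reply only with JSON."},
        {"role": "user", "content": (
            "Write a 100-word scene about Spike, a cynical ginger tabby detective. "
            "Return JSON with the keys 'title', 'scene', and 'tone'."
        )},
    ],
    response_format={"type": "json_object"},  # ask the API to enforce valid JSON
)

print(response.choices[0].message.content)

The prompt is still natural language, but the scaffolding around it (roles, constraints, output format) is doing the same job that flags and pipes once did.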
The Illusion of Simplicity
CLIs never pretended to be easy. LLM chat windows, with their friendly, conversational interface, can lull us into thinking it's all effortless. But the gap between casual use and expert manipulation is becoming significant. Are we seeing a new divide between those who can merely chat and those who can truly instruct these AIs?
The Discoverability Gap
GUIs allowed users to explore an application's functionality with relative ease. Menus, icons, toolbars, and contextual right-clicks invited you to browse features and understand the scope of what was possible.
With an LLM's chat window, how do you "browse" its full potential? Its vast capabilities aren't laid out visually. Uncovering what an LLM can do often requires prior knowledge, extensive experimentation, or stumbling upon functionalities, rather than intuitive exploration of the interface itself.
The "Black Box"
With old CLIs, commands were (mostly) deterministic: ls -l always did the same thing. With LLMs, even with precise prompting, there's an element of surprise. We're crafting commands for an entity that learns and evolves, and whose internal workings aren't fully transparent.
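A rough way to see the contrast: run the same prompt twice and compare. This is again a sketch assuming the OpenAI Python SDK and default sampling settings; the two completions will usually differ, which is something ls -l would never do.

from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    # Same prompt, same model, default sampling: the output can still vary between calls.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

first = ask("Describe a rain-soaked city street in one sentence.")
second = ask("Describe a rain-soaked city street in one sentence.")

print(first == second)  # very likely False, unlike a deterministic CLI command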
I can't help but wonder if our current chat windows are the "terminal" phase for interacting with LLMs. Are we all, to some extent, beta-testing the raw interface before more intuitive, perhaps even "GUI-like," layers emerge to harness this power for specific tasks without requiring everyone to become a prompt magician?
We see glimpses of this already: AI-powered features embedded in apps, specialized AI writing tools with structured inputs, code-generation plugins, and embedded widgets.
Conclusion
The beauty of today's LLM interface lies in its simplicity: anyone can start, yet the mastery required for truly skilled interaction is immense. It strongly echoes the days of the Command Line Interface, powerful but demanding. Just as the GUI revolutionized accessibility for computers then, we have to wonder what intuitive interfaces will emerge for AI. I'm betting the next AI interface evolution isn't far off at all.