Beyond Automation: 6 Do’s and Don’ts of LLMs

It can be difficult for me to predict what will be obvious to others and what needs stating. That holds true for the common sense of using various LLM (“AI”) programs effectively.

Assuming this is one of those cases, here are some “obvious” pointers for text-based applications like GPT:

1. You Have Agency. Use It.

It’s an oversimplification to say LLMs are the next generation of predictive text, but the comparison is apt and, I hope, instructive. Asking an advanced predictive-text system to write a novel one word at a time might be amusing, but it isn’t effective. Remember: a tool’s efficacy is determined by the context of its use and by the user’s skill and awareness.
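
To make the comparison concrete, here’s a toy sketch of my own (not how any real model works, which is vastly more sophisticated): a bigram predictive-text generator that always picks the most frequent next word. Run it and the output falls into a loop almost immediately, which is roughly why “a novel, one word at a time” fails.

```python
# Toy "predictive text": a bigram model that greedily picks the most
# frequent next word. Purely illustrative -- real LLMs use learned neural
# networks over subword tokens, but the word-at-a-time mechanic is similar.
from collections import Counter, defaultdict

def train_bigrams(text: str):
    """Count which words follow each word in the corpus."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start: str, length: int = 12) -> str:
    """Greedily extend `start` one word at a time, always taking the
    single most frequent continuation."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word never led anywhere in training
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog on the mat"
print(generate(train_bigrams(corpus), "the"))
# -> "the cat sat on the cat sat on ..." (loops almost immediately)
```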

2. Augment, Don’t Replace.

A corollary to the previous point: in most use cases, it’s best to view LLM automation as cybernetic enhancement, a force multiplier when used effectively, rather than a substitute.

Let’s be frank: some corporations will inevitably replace human roles with AI. To some extent this follows the historical pattern of technological “innovation” being valued over society itself. In the eagerness to show shareholders greater profits, such replacement may well outstrip even the basic capabilities of the technology being deployed. That is a misuse of AI rather than an inherent flaw, but it’s an aspect of our current society we must address if we are to avoid some of the greatest material harm this tool can do.

3. Check. Its. Work.

When you’ve pinpointed tasks suitable for automation, always compare the AI’s output against your objective and adjust your approach based on its accuracy.
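
What “checking the work” looks like varies by task, but here’s a minimal sketch in Python. The `call_llm` function is a hypothetical stand-in for whatever API or app you actually use (it returns a canned response here so the example runs as-is); the substance is the programmatic check against the stated objective.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM API or app you use.
    Returns a canned response here so the sketch runs as-is."""
    return '{"title": "Q3 Report", "summary": "Revenue grew 4%."}'

def check_output(raw: str, required_keys: set[str]) -> tuple[bool, str]:
    """Compare the model's output against the objective: valid JSON
    containing every field we asked for."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return False, f"not valid JSON: {e}"
    missing = required_keys - set(data)
    if missing:
        return False, f"missing fields: {sorted(missing)}"
    return True, "ok"

raw = call_llm("Summarize this report as JSON with keys: title, summary, risks.")
ok, reason = check_output(raw, {"title", "summary", "risks"})
print("accepted" if ok else f"rejected ({reason}) -- adjust and retry")
```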

4. Clarify Your Workflow.

The stage at which AI enters your workflow will shape the outcome as much as how you engage with it. For commissioned visual work, I might employ AI during the thumbnailing stage but not in the final execution. In writing a research article, it might help organize a basic outline, but it’s less suited to drafting the entire piece. Conversely, there are contexts where raw output serves well: streamlining some of your tasks when running a private roleplaying game, say, or creating the illustration for a quickly dashed-off blog post such as this one…

In organizational contexts, the need for forethought intensifies considerably.

5. Always Be Iterating.

Treat every query as a test and its outcome as an evolving project. Methods will differ across models, platforms, and applications, but constant iteration is essential in most any format.
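
Continuing the sketch from point 3 (and reusing the hypothetical `call_llm` and `check_output` defined there), iteration can be as simple as folding each failure back into the next prompt:

```python
def iterate(prompt: str, required_keys: set[str], max_tries: int = 3):
    """Treat each query as a test: check the result, fold the failure
    back into the next prompt, and stop once the output passes."""
    for attempt in range(1, max_tries + 1):
        raw = call_llm(prompt)
        ok, reason = check_output(raw, required_keys)
        print(f"attempt {attempt}: {reason}")
        if ok:
            return json.loads(raw)
        # Refine: tell the model exactly what failed last time.
        prompt += f"\nYour last answer was rejected ({reason}). Correct that and answer again."
    return None  # budget exhausted: rethink the approach, not just the wording
```

The important design choice is the exit condition: a capped number of attempts forces you to step back and change tactics rather than prompting forever.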

6. Don’t Trust. Verify.

LLMs have a curious propensity to confabulate, which can yield unexpected and sometimes novel results. However, blindly trusting these outputs can be risky, with consequences ranging from humorous to disastrous, depending on the application.
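
Verification has to happen outside the model. As one small, hypothetical example: if a model hands you quotes attributed to a document, you can at least confirm each quote literally appears in the source before using it.

```python
import re

def normalize(s: str) -> str:
    """Collapse whitespace and case so trivial formatting differences
    don't mask a genuine match."""
    return re.sub(r"\s+", " ", s).strip().lower()

def quote_in_source(quote: str, source_text: str) -> bool:
    """Guard against confabulated citations: accept a quote only if it
    literally appears in the source text."""
    return normalize(quote) in normalize(source_text)

# Hypothetical example: one real quote, one the model invented.
source = "The committee met twice in March. No budget changes were approved."
claims = [
    "No budget changes were approved.",
    "The budget was increased by 12%.",  # confabulated
]
for quote in claims:
    status = "verified" if quote_in_source(quote, source) else "UNVERIFIED -- check before use"
    print(f"{status}: {quote}")
```

String matching won’t catch paraphrased inaccuracies, of course; treat it as a floor for verification, not a ceiling.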

While we haven’t delved into AI’s ethical challenges here, following these simple guidelines can help mitigate high-risk outcomes for both individuals and organizations.
