The Gist
- Prompt significance. Prompt engineering is crucial in formulating questions for AI-based solutions, guiding users to leverage AI effectively in marketing.
- Enhancing experience. By addressing overlooked data details and employing transfer learning, prompt engineering improves user experiences and response accuracy.
- Marketer adoption. As AI becomes more prevalent, marketers must learn prompt engineering techniques to optimize results and make informed decisions based on AI-generated data.
With all the super-duper global excitement about AI, especially among content marketers, you will likely hear the word “prompt” repeatedly. But are you also hearing anything about prompt engineering?
Many suggestions for what to do with ChatGPT or Bard revolve around prompts, but deeper discussions about prompt engineering deserve attention. While the word "engineering" conjures up technical expertise, prompt engineering is really about the methods for formulating questions and instructions for AI-based solutions. As such, prompt engineering serves as the guiding principle for understanding how to best leverage AI.
In this post, I will examine the fundamentals of prompt engineering and explore how marketers can incorporate it while keeping customer experience as a priority.
What Are Prompts?
Prompts are essentially words interpreted as instructions for a language model. They can be conveyed in various forms, such as a brief question, a paragraph, a bulleted list or a description. Prompts are designed to resemble natural speech, making them more user-friendly than typing rigid commands or code into a text window.
The Types
When using Bard or ChatGPT, you provide instructions either as an imperative, like "Divide 1245 by 38," or as a question, such as "What is a conversion rate?" The model typically interprets the words in segments: instructions, context and input data. Context and input data (or a provided example) help refine the prompt, ensuring the model understands the specifics. Once the segments are identified, the model generates a response.
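To make those segments concrete, here is a minimal sketch in Python. The `ask_llm` helper is a hypothetical stand-in for whichever model API you actually call (ChatGPT, Bard or otherwise); the point is how instruction, context and input data combine into one prompt.

```python
# Minimal sketch: assembling a prompt from instruction, context and input data.
# ask_llm() is a hypothetical placeholder for a real model API call.

def ask_llm(prompt: str) -> str:
    """Placeholder for a real model call; would return the model's text response."""
    raise NotImplementedError("Connect this to your LLM provider of choice.")

instruction = "Write a two-sentence product summary."
context = "Audience: marketing managers comparing analytics tools."
input_data = "Product: a dashboard that tracks conversion rates across campaigns."

prompt = f"{instruction}\n\nContext: {context}\n\nInput: {input_data}"
# response = ask_llm(prompt)
```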
The Steps
Sometimes prompt refinement involves multiple steps, incorporating various prompt engineering formats along the way. One increasingly popular format is Chain of Thought (CoT) prompting. CoT prompts consist of a series of intermediate steps that guide the language model toward the final output. They are particularly effective for questions whose answers require multiple steps to work out. The thought process is akin to a decision tree, but the results are presented as brief text rather than a graphical representation.
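As a rough illustration of the format, assuming the same hypothetical `ask_llm` helper as above, a CoT-style prompt spells out the intermediate steps so the model works through them before giving the final answer.

```python
# Hypothetical chain-of-thought prompt: intermediate steps are written out
# so the model reasons through them before stating the final answer.
cot_prompt = (
    "A campaign had 1,245 visits and 38 conversions.\n"
    "Step 1: Identify the number of conversions.\n"
    "Step 2: Identify the number of visits.\n"
    "Step 3: Divide conversions by visits.\n"
    "Step 4: Express the result as a percentage, rounded to one decimal place.\n"
    "Work through each step, then state the final conversion rate."
)
# response = ask_llm(cot_prompt)
```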
The Subcategories
There are several key subcategories of CoT: zero-shot, one-shot and few-shot. A shot refers to an example provided to illustrate the desired output. This technique aims to guide the model in explaining its reasoning, thereby adding a minimal training step to the initial foundation provided by the large language model (LLM).
So, a zero-shot prompt would be the example I described earlier ("Divide 1245 by 38"), because there is no example to show the model. A one-shot prompt, in contrast, includes an example of the output needed. Here is what it looks like in Google Bard and ChatGPT:
Note that I gave an example with one place behind the decimal. Yet the answer kept several decimal places with both Bard and ChatGPT.
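The exact prompts in my screenshots differ, but the structure looks roughly like this hypothetical sketch: the zero-shot version is the bare instruction, while the one-shot version adds a single worked example (the "shot") showing the one-decimal format I wanted.

```python
# Zero-shot: the instruction alone, with no example of the desired output.
zero_shot = "Divide 1245 by 38."

# One-shot: the same instruction, preceded by one worked example (the "shot")
# that shows the desired format, an answer with one decimal place.
one_shot = (
    "Example: Divide 100 by 3. Answer: 33.3\n"
    "Now divide 1245 by 38 and answer in the same format."
)
```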
Inaccurate Results
It should also be noted that LLMs can return inaccurate math results; they were trained on text, not arithmetic. When I once tried to divide 367 by 15, even with an example showing that I wanted the answer to one digit after the decimal, ChatGPT chose 15 and 7/8, and the 7/8 fraction was not correct. As people are discovering with AI, multiple shots are often needed to get good answers.
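For reference, a quick check outside the model shows what the answer should have been: 367 divided by 15 is 24 with a remainder of 7, so 24 and 7/15, or about 24.5 to one decimal place.

```python
# Checking the division outside the model, for comparison with the model's answer.
quotient = 367 / 15
whole, remainder = divmod(367, 15)
print(round(quotient, 1))             # 24.5
print(f"{whole} and {remainder}/15")  # 24 and 7/15, not 7/8
```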
Self-Consistency & Other Techniques
Another prompt engineering technique is self-consistency, which checks that a set of generated responses agree with one another. The model is asked to generate multiple responses to the same prompt, and the response most consistent with the others is selected. For example, if I wanted a product description for the “Piero,” a new water-resistant smartphone my imaginary company is launching, I would write the following prompt:
The model would respond by generating the three descriptions, then creating a final description that consistently maintains the key phrases across them. This is what I got when I used Bard.
The ChatGPT version gave me a more basic response without explanation.
Self-consistency is meant to ensure that the generated response is accurate and complete, as judged by the key phrases.
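As a rough sketch of the underlying idea, and not how Bard or ChatGPT implement it, self-consistency can be approximated by sampling several responses to the same prompt and keeping the one that overlaps most with the others. The `ask_llm` stub from the earlier sketch is assumed.

```python
# Hypothetical self-consistency loop: sample several responses to one prompt,
# then keep the response that shares the most vocabulary with the others.
# Assumes the ask_llm() stub defined in the earlier sketch.

def most_consistent(prompt: str, samples: int = 3) -> str:
    responses = [ask_llm(prompt) for _ in range(samples)]

    def overlap_score(candidate: str) -> int:
        words = set(candidate.lower().split())
        return sum(len(words & set(other.lower().split()))
                   for other in responses if other is not candidate)

    return max(responses, key=overlap_score)
```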
CoT and self-consistency are not the only prompt engineering techniques. Another, Least-to-Most, breaks a problem-solving request into simpler subproblems. The result is a series of prompt-response pairs that let users solve a problem by identifying the hierarchy of steps the model should take.
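A minimal sketch of that flow, again assuming the hypothetical `ask_llm` helper: the problem is first decomposed into simpler subproblems, then each subproblem is answered in order, with earlier answers fed back as context for the next prompt.

```python
# Hypothetical least-to-most flow: decompose a problem into subproblems,
# then solve them in order, feeding each answer into the next prompt.
# Assumes the ask_llm() stub defined in the earlier sketch.

def least_to_most(problem: str) -> list[tuple[str, str]]:
    subproblems = ask_llm(
        f"Break this problem into an ordered list of simpler subproblems:\n{problem}"
    ).splitlines()

    pairs, context = [], ""
    for sub in subproblems:
        answer = ask_llm(f"{context}\nNow solve: {sub}")
        pairs.append((sub, answer))
        context += f"\nQ: {sub}\nA: {answer}"
    return pairs
```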
Maximum length is another keyword indicator, telling the model how long the combined prompt and response should be. All these options play into styling the prompt's context and input so the language model delivers the desired result more effectively.
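For example, a length constraint can be written directly into the prompt, or, when a model is called through an API, passed as a token limit on the response; the parameter name varies by provider, so the one mentioned below is illustrative.

```python
# A length constraint can be written into the prompt text itself...
prompt = (
    "Summarize the campaign results in no more than 50 words.\n"
    "Results: (paste your campaign data here)"
)

# ...or, when calling a model through an API, passed as a response-token limit.
# The parameter name differs by provider (OpenAI-style APIs call it max_tokens).
```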
Related Article: Insider's Look at Google Bard and How It Can Help Marketers
How Prompt Engineering Enhances the User Experience with AI
Address Overlooked Details
Prompt engineering is beneficial for addressing overlooked data details within an LLM. Large language models generally perform well with straightforward prompts, relying on training data to associate words with appropriate instructions. However, many models are trained only up to a specific cutoff date, so they must rely on patterns already present in the training data to comprehend a request that goes beyond it.
Transfer Learning
Prompt engineering employs variations of transfer learning, an effective machine learning technique that enables a model to learn from one task and apply that knowledge to a different, yet related, task. As a result, users can integrate new information about places and events beyond a specific date, or apply a heuristic to complex information, to generate accurate responses. Without this approach, the non-deterministic nature of certain models can produce responses that look plausible but are, in reality, poor answers to the prompt.
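One practical upshot, sketched below with the hypothetical `ask_llm` helper: pasting fresh information into the prompt as context lets the model respond to facts that postdate its training data, such as the imaginary Piero launch.

```python
# Hypothetical sketch: supplying fresh context in the prompt so the model can
# respond to information that postdates its training data.
# Assumes the ask_llm() stub defined in the earlier sketch.

fresh_context = (
    "Company update (not in the model's training data): "
    "the Piero, a water-resistant smartphone, launches next quarter."
)
question = "Draft one tagline for the upcoming launch."

# response = ask_llm(f"{fresh_context}\n\n{question}")
```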
Differing Ways of Accepting Instructions
Another factor to consider is that prompts for each user interface and supporting platform differ in how they accept instruction, context and input data. MidJourney prompts introduce modifications of these prompt engineering concepts. MidJourney users can customize content type, such as media rendering, by using definitive phrases like "high definition" or adjust composition by incorporating photography or video recording terminology as a prompt detail, such as "Ultra Wide Angle." These resemble maximum lengths, except the model adapts the output according to photographic specifications rather than length.
As users become familiar with ChatGPT and other AI platforms, they will learn to apply the heuristics developed from prompting to obtain valuable responses, rather than relying on a zero-shot, genie-in-a-bottle prompt. A positive indicator for marketers is when they and their peers effectively combine prompt responses where feasible.
What Should Marketers Gain from Prompt Engineering?
Acquiring Foundational Skills
As AI becomes increasingly prevalent, marketers must acquire foundational skills in managing prompts. To fully benefit from prompt engineering, they should view its iterative nature as analogous to the optimization mindset employed in analytics. In analytics, users optimize digital media, such as websites, to improve conversions from digital marketing campaigns. Prompt engineering follows a similar approach, but the optimization is applied to an algorithm instead of digital media. Utilizing AI effectively requires critical evaluation of input to obtain the best responses.
Understanding Risks
Marketers must also understand the potential risks associated with the information or actions derived from the data provided. For example, large language models generate content that seems plausible but may not be grounded in reality, which results in proposed outcomes that appear reasonable but prove impractical when implemented. By design, large language models do not realize that they don't know what they don't know, which leads to made-up details at times.
No Genie-in-a-Bottle
Marketers need to conduct thorough assessments of the results in relation to the specific situation at hand. The decimal math examples highlight the kinds of issues that can arise. Repeated, consequential content influences AI-generated information, so as prompts are fine-tuned, users should pay attention to recurring decisions; these patterns can reveal durable customer preferences. Regrettably, many users treat ChatGPT as a genie in a bottle, expecting it to cater to their every demand. Marketers must instead be prepared for the AI's verbose output and stay vigilant in parsing valuable details from irrelevant ones.
Related Article: Top 5 ChatGPT Prompts for Customer Experience Professionals
Prompt Resources
Keeping Track & Conducting Reviews
By conducting thorough reviews, users can identify best practices for crafting prompts. Marketers should keep track of useful resources to stay updated on the latest strategies. Discord, for example, lets users ask about prompt engineering, share suggestions and follow feature updates. ChatGPT has a dedicated Discord community focused on learning prompts, staying informed about feature updates and providing support, and MidJourney offers a similar community. There is also an overarching Discord group called Learn Prompting, where users can gain insights from other AI tool users.
Another general resource is the GitHub repository of Democratizing Artificial Intelligence Research, Education, and Technologies (DAIR.ai). The repository explains basic and advanced prompts, as well as examples from the current crop of AI resources.
Final Thoughts on Prompt Engineering
The application of prompt engineering is gaining prominence as AI solutions and features attract attention at breakneck speed. There is growing concern that AI capabilities may surpass human understanding of how to optimally utilize these tools, as evidenced by initiatives like the Future of Life Institute's letter requesting a pause in AI development, as Dom Nicastro reported.
In the meantime, marketers aiming to enhance their understanding of AI tools should adopt an experimental approach with specificity. As Shane O’Neill noted in his post, AI is not a broad technology that spans the entire martech stack. The best approach marketers can take to understand AI customer experience is to understand the prompts designed to operate AI platforms properly.