With the rise of Large Language Models (LLMs) such as GPT, prompt engineering has become increasingly important in the field of natural language processing. Prompt engineering involves crafting high-quality prompts that can guide the behaviour of LLMs, improving their accuracy and relevance. In this post, let’s take a look at how to design prompts to get the most out of one of the most famous LLMs, ChatGPT.
Intuition
Large Language Models (LLMs) predict the probability of a sequence of words in a given context. In other words, given a sequence of words W0…Wn, an LLM predicts the next word, Wn+1.
Hence the quality of the prompt and subsequently the response depends on the context created by the prompt.
Let's take a straightforward example to understand why context matters:
We want to complete the sentence, “I want to [LLM RESPONSE]”
Now with absolutely no context, there are practically unlimited options to choose from (basically, anything under the sun). However, adding a simple context, “I am hungry. I want to [LLM RESPONSE]”, narrows it down to just a few options.
Even though this example is trivial, the pattern holds for any prompt, no matter how complex the task. The better the context that can be inferred from your prompt, the better the response the model will generate.
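To make the intuition concrete, here is a toy sketch (not a real LLM; the lookup table and counts are made up for illustration) showing how adding context shrinks the set of plausible next words:

```javascript
// Toy illustration: a hypothetical table of plausible continuations,
// keyed by context. A real LLM assigns probabilities over its whole
// vocabulary; here, more context simply means fewer candidates.
const nextWords = {
  "I want to": ["eat", "sleep", "travel", "code", "sing" /* ...and many more */],
  "I am hungry. I want to": ["eat", "order food", "cook"],
};

// Count how many continuations are plausible for a given context.
function candidateCount(context) {
  return (nextWords[context] || []).length;
}

console.log(candidateCount("I want to"));              // 5 (stand-in for "millions")
console.log(candidateCount("I am hungry. I want to")); // 3
```

The richer second context prunes the space of likely continuations, which is exactly why a well-contextualized prompt produces a better response.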
Exploring different types of prompts
Summarization
Text summarization is probably one of the most valuable applications of LLMs. You can quickly generate a short explanation for various concepts.
To get a good summary, give prompts that have quantifiable parameters. For example, instead of the input “summarize this text”, prompts like “Explain to me like I’m 5” or “give me an N-sentence summary” work much better.
I tried this with ChatGPT for one of my blogs, and here are the outputs:
You can experiment further with other prompts or other parameter values. For example, the author of this post claims that 11 is a good age to get a good result from the ‘Explain to me like I’m _____’ prompt!
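The two patterns above can be turned into reusable prompt strings. This is a small sketch; the function name and option shape are my own invention, not an official API:

```javascript
// Hypothetical helper that builds the summarization prompts discussed
// above. Pass either an age ("Explain like I'm N") or a sentence count
// ("N-sentence summary"); with no options it falls back to a plain ask.
function summaryPrompt(text, { age = null, sentences = null } = {}) {
  if (age !== null) {
    return `Explain this to me like I'm ${age}:\n\n${text}`;
  }
  if (sentences !== null) {
    return `Give me a ${sentences}-sentence summary of this text:\n\n${text}`;
  }
  return `Summarize this text:\n\n${text}`;
}

console.log(summaryPrompt("Quantum computing uses qubits...", { age: 5 }));
console.log(summaryPrompt("Quantum computing uses qubits...", { sentences: 3 }));
```

The resulting string is what you would paste into ChatGPT (or send as the user message via an API); the quantifiable parameter is what anchors the model to a concrete output size.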
Text generation in a particular writing style
After summarization, writing quick drafts is another useful application for LLMs.
In addition to the topic of the draft, you can ask it to write a blog in a particular author’s style.
For example: write an essay in the style of X. Here’s a short note on the meaning of life in the style of Elon Musk, APJ Abdul Kalam, and me!
I think it does a pretty good job of highlighting the aspects it considers important for each individual. Also, notice how the prompt for me is slightly different from the one for the famous people. I cannot give a concrete reason why a different prompt is required for me; experimenting more with this can help us catch such behaviours.
Testing your ideas
In addition to using the styles for drafts, you can use them to test your ideas. For example, based on a position you take, you can see if you get a useful rebuttal by using the style of an expert.
Although the response should not be taken as absolute truth, it does provide a useful way to surface opposing ideas that we might find difficult to come up with ourselves because of the position we have taken.
Coding
This is the most talked-about and “scary” use case of ChatGPT. Even though a lot of doomsday scenarios have already been painted vividly, I feel that, at least in the short term, ChatGPT is the best tool yet created for programmers.
My main use case is to quickly generate templates for whatever code I want to write and then add customizations on top. A word of caution here: check the versions of the libraries used in the generated code. I found that it regularly uses old versions, even for libraries that were updated before the training cut-off.
Here’s an example of generating a simple CRUD API in NodeJS:
/*
QUERY:
Write an API in NodeJS that has separate end points for each of the CRUD operations
The APIs take a dictionary as input and perform CRUD operations on MongoDB
*/

const express = require('express');
const bodyParser = require('body-parser');
const { MongoClient, ObjectID } = require('mongodb');

const app = express();
const port = process.env.PORT || 3000;

const url = 'mongodb://localhost:27017';
const dbName = 'mydatabase';
const collectionName = 'mycollection';

app.use(bodyParser.json());

// Create
app.post('/api', async (req, res) => {
  try {
    const client = await MongoClient.connect(url);
    const db = client.db(dbName);
    const result = await db.collection(collectionName).insertOne(req.body);
    res.send(result);
    client.close();
  } catch (err) {
    res.status(500).send(err);
  }
});

// Read
app.get('/api/:id', async (req, res) => {
  try {
    const client = await MongoClient.connect(url);
    const db = client.db(dbName);
    const result = await db.collection(collectionName).findOne({ _id: new ObjectID(req.params.id) });
    res.send(result);
    client.close();
  } catch (err) {
    res.status(500).send(err);
  }
});

// Update
app.put('/api/:id', async (req, res) => {
  try {
    const client = await MongoClient.connect(url);
    const db = client.db(dbName);
    const result = await db.collection(collectionName).updateOne({ _id: new ObjectID(req.params.id) }, { $set: req.body });
    res.send(result);
    client.close();
  } catch (err) {
    res.status(500).send(err);
  }
});

// Delete
app.delete('/api/:id', async (req, res) => {
  try {
    const client = await MongoClient.connect(url);
    const db = client.db(dbName);
    const result = await db.collection(collectionName).deleteOne({ _id: new ObjectID(req.params.id) });
    res.send(result);
    client.close();
  } catch (err) {
    res.status(500).send(err);
  }
});

app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
Another great use case is to quickly write and test regex.
/*
QUERY:
Write a regex to match all the pages on the website "https://leetcode.com"
*/

^https:\/\/leetcode\.com\/.*$

/*
Explanation:
^ matches the beginning of the string
https:\/\/leetcode\.com\/ matches the protocol and domain of the website
.* matches any characters that appear after the domain (i.e. any path or query parameters)
$ matches the end of the string
*/
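A quick way to sanity-check a generated regex is to run it against a few URLs yourself. Here is a minimal check in NodeJS (the sample URLs are my own):

```javascript
// The generated regex as a JS regex literal.
const leetcodePage = /^https:\/\/leetcode\.com\/.*$/;

// A page under the domain matches.
console.log(leetcodePage.test("https://leetcode.com/problems/two-sum/")); // true

// A different domain does not.
console.log(leetcodePage.test("https://example.com/leetcode")); // false

// Note: the bare domain with no trailing slash does NOT match,
// because the regex requires a "/" after ".com".
console.log(leetcodePage.test("https://leetcode.com")); // false
```

The last case is exactly the kind of edge case worth catching before trusting a generated pattern, so always test against both matching and non-matching inputs.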
There are many more ways to prompt ChatGPT for coding. Here’s a useful link with many more resources.
🧾TLDR Review
Here is a compiled list of just the prompts that are explored in the article:
🔑 For Summarization use:
Explain to me like I’m 5/11 (Try other values)
Give me an N-point summary
🔑 For Text generation in a particular style use:
Write [TYPE_OF_TEXT] on [TOPIC] as if written by [NAME_OF_PERSON]
[Links to your own blogs]. Write in a similar style to me
🔑 For testing your ideas use:
[YOUR_VIEW] Respond as if you are [PERSON_NAME] responding to the position.
🔑 For coding use:
For Regex: Write a regex for [REQUIREMENT]
For API templates: Write a [API_SPEC] for [API_FUNCTIONS] in [TECH_STACK_CHOICE]
Conclusion
LLMs are here to stay. They will be integrated into a myriad of different applications that we use every day. Instead of complaining and trying to put them down, it’s much more productive to try and get better at using them to improve our output as well as save time. In addition to this, the more you play with it, the more you’ll realize that we still have a long way to go to get something that will replace humans in every task!
Until then, keep experimenting with different types of prompts and exploring the various ways to make ChatGPT and other LLMs your friend!
📢Announcement
I have enabled the chat feature for Decoding Coding. Click here for the details and join the chat to let me know what you would like me to cover next!
That’s it for this issue. I hope you found this article interesting. Until next time!
Let’s connect :)