About the author:
Originally from the San Francisco Bay Area, Alexander Dubovoy is a Berlin-based coder and musician. He graduated from Yale University in May 2016, where he wrote an award-winning thesis on the history of jazz in the Soviet Union. Since graduating, he has worked as a freelance web developer. He teaches at Le Wagon, whose web development bootcamp he did in 2018, and loves helping students with tricky technical problems. He also manages an active performance schedule as an improvising musician. He loves to combine his passions for technology and music, particularly through his work at Groupmuse, a cooperative concert-presenting organization.
It happens at some point for every developer: you get stuck! Either a bug comes up that you don’t know how to solve, or you don’t know how to get started building a particular feature. In the past, your first instinct would probably have been to search the web (via Google, DuckDuckGo, Ecosia, etc.) to see if the right information was floating around somewhere out there. At some point, you would probably have ended up scrolling through answers on StackOverflow. Of course, that’s still a perfectly acceptable workflow and sometimes the most efficient one. But the landscape has changed radically with the emergence of LLMs (“Large Language Models”), including ChatGPT, Claude, and DeepSeek. Now, your first instinct might actually be to ask an AI to help you solve the problem. But how can you get the AI to give you a helpful answer?
Error Message Copypasta
The first way that we’ve all probably tried using an LLM to solve a bug is this:
Me: NameError (undefined local variable or method `age' for main:Object)
In this case, I just pasted an error message directly in. The response I get is:
AI: The error message NameError (undefined local variable or method `age' for main:Object) means that in your Ruby code, you are trying to use a variable or method named age that hasn’t been defined or is not accessible in the current scope.
That’s not a bad answer, if we consider how little the AI knows about my code, but it’s probably not going to help me solve my problem.
So, how can I get the AI to give me a better answer? By applying some of the techniques of prompt engineering, I can improve the results significantly.
Be Specific
Probably the most important rule when working with AIs is that they don’t always make correct assumptions. In the case above, when I just gave the AI the error message, it had to make several assumptions, among them:
- My code is in Ruby
- I want to fix this error message (rather than have it explained to me, or some other request)
- The code is missing a variable definition
In order to write a more successful prompt, however, I want to limit the number of assumptions the AI has to make by providing it more context and more explicit instructions. Indeed, specificity is probably the most important basic principle of prompt engineering.
Include Relevant Context
The easiest way to reduce the number of assumptions the AI needs to make is to provide it with relevant context. In the case above, I can significantly improve the AI’s response by giving it my code (in addition to the error message):
Me: if age > 18
  puts "You are an adult"
else
  puts "You are a child"
end
NameError (undefined local variable or method `age' for main:Object)
By seeing the code and the error message, the AI will already do a much better job and should be able to give me a solution to my problem right there.
But the code itself isn’t the only form of context I can give. In the case above, my code is simple enough that the AI can correctly infer what I want to do, but that might not hold if my code were more complex. So, I generally want to give the AI a better idea of the purpose of my code. Let’s assume I’m trying to build a checkbox feature. I could say: “I want my checkboxes to work,” but that doesn’t give it much information. I’m better off saying “I want my checkboxes to check when you click on them,” but even that leaves ambiguity about what a second click should do. So, I’m probably best off explaining that “I want my checkboxes to check when you click on them and uncheck when you click on them again.”
I also will generally want to give the AI some idea of what tools/frameworks I’m using. For example, I might explain “I’m using Ruby version 3.3.5.”
Avoid Irrelevant Context
All this being said, it’s important to note that we want to avoid giving the AI irrelevant context. An example of this would be pasting in a large file of hundreds of lines of code. Not only will processing all this information cause the AI to take longer to respond, but it may also fixate on entirely the wrong area of the code. A simple way to ensure I provide the right context when bug fixing is to include only the line of code that’s referenced in the error message (and maybe a few surrounding lines). If the AI needs more information or needs access to more files, it will ask you, or you can provide more information in a follow-up message.
As part of avoiding giving irrelevant context, you also want to avoid bringing up things you don’t want the AI to do. AIs respond much better to positive instructions than they do negative ones. At some point, I was working with a prompt like:
Me: I’m working in Rails 7 with Importmaps. Don’t give me answers that relate to Webpack.
No matter what I did, it would keep giving me Webpack-related advice. When I simply removed the last sentence, it actually behaved far better.
Keep the Conversation Flowing
The context can also grow over time. One of the most effective ways to continuously provide the AI with context is to keep your conversations with it on a particular topic open for a longer period of time. When you keep one conversation going, the AI learns how your codebase operates and can answer with greater accuracy.
I usually try to start with a message explaining what my app is and what stack it’s using. For example:
Me: The Spoon is a restaurant review application, similar to Yelp. Our goal is to connect users to the best restaurants out there through a rating and comment system. Our app is built in Ruby on Rails 7.1.3.4 with importmaps for our JavaScript and a Postgres database.
Once the initial message has been sent, I won’t need to re-explain my app each time or deal with answers that might only be relevant for a different framework/stack. And, as I keep asking it about specific questions and problems, it will get further insight into how my codebase works.
So, taking into account the techniques we’ve learned so far, let’s improve my prompt from before:
Me: I’m building an app in Ruby 3.3.5 relating to voter registration.
AI: Great! How can I assist you with your Ruby 3.3.5 app for voter registration? Are you looking for help with design, coding, specific features, or something else?
Me: I’m trying to write some code that should print “You are an adult” if your age is greater than 18 and “You are a child” if your age is under 18, but I’m getting an error message.
if age > 18
  puts "You are an adult"
else
  puts "You are a child"
end
NameError (undefined local variable or method `age' for main:Object)
AI: The error NameError (undefined local variable or method `age' for main:Object) means that the variable age hasn’t been defined before you use it in your if statement.
You need to assign a value to age first. For example, you can get the age from user input or set it manually:
# Example with manual assignment
age = 20
if age > 18
  puts "You are an adult"
else
  puts "You are a child"
end
Or, if you want to get the age from user input:
print "Enter your age: "
age = gets.chomp.to_i
if age > 18
  puts "You are an adult"
else
  puts "You are a child"
end
Make sure age is defined and has a value before your if condition. Would you like me to help you with input handling or something else?
It did much better and gave me two potential solutions to my problem!
Give Instructions
Even if you give the AI exactly the right context, though, it might still not respond the way you want. That’s because the AI is still making one very important assumption: how we want it to respond. If you ask an AI a question, there are many different ways for it to respond, since questions are inherently somewhat open-ended. That’s why, if you need a certain type of response, you are better off giving the AI a command (a “prompt”) phrased as a statement rather than asking it a question. And, no, you probably don’t need to say “please” or “thank you”. For example:
Me: Assess the following statement by saying “true” if it is correct or “false” if it is not: “Whales are mammals”
AI: true
With these kinds of clear instructions, LLMs behave far more consistently. Let’s take a look at some prompts that are particularly effective and you may want to try:
- “Explain in simple terms”
- “Explain this error message”
- “You’re a pro debugger. Spot potential errors in this code that should do X”
- “Explain this concept like I’m a beginner”
- “Refactor this method to make it 10 lines or less” (helpful for dealing with linters like rubocop)
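As a toy illustration of that last refactoring prompt, here is the kind of before/after an AI might produce (the method and its names are invented for this example, not taken from a real codebase):

```ruby
# Before: a repetitive conditional that a linter like rubocop might flag
def age_label(age)
  if age > 18
    "You are an adult"
  else
    "You are a child"
  end
end

# After: the same behavior condensed into a single expression
def age_label_short(age)
  age > 18 ? "You are an adult" : "You are a child"
end

puts age_label_short(20) # => You are an adult
puts age_label_short(10) # => You are a child
```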
You can also check out a longer list of AI prompts here. But there’s no need to rely exclusively on prompts that other people have tried already; you can make your own up and experiment with ones that you find helpful.
Zero-Shot Prompting
So far, these prompts have all been what are called “zero-shot prompts”. This term means that they tell the AI what it is supposed to do but don’t give examples of successful output.
Few-Shot Prompting
A prompting method that I’ve personally found extremely effective is few-shot prompting, which is where we give the AI some examples to rely on. For example, here’s a prompt that I recently used for which Claude was able to give me exactly the output I wanted:
I’m using a flashcard app to learn Brazilian Portuguese. Can you help me make a CSV to import to Anki? It should have the format “Front,Back” with two columns. I want to practice Brazilian verb conjugations. I want it to cover all standard forms (-ar, -er, -ir) and all common irregular verbs (ter, pôr, etc.) in all tenses. A good sample would be
"Conjugate the verb comer in the present indicative","eu como, ele/ela/você come, nós comemos, eles/elas/vocês comem"
"Conjugate the verb trabalhar in the present indicative","eu trabalho, ele/ela/você trabalha, nós trabalhamos, eles/elas/vocês trabalham"
It probably would have been difficult to describe the format of the response that I wanted fully in the abstract. But by giving the AI a sample of two lines of the exact file I wanted it to generate, it was able to stick to the format I wanted.
Chain-of-Thought Prompting
Another useful prompting technique is chain-of-thought, in which you tell an AI to break down its process step-by-step instead of simply giving you an answer. A prompt that fairly consistently gets LLMs to take this approach is “Explain your reasoning step-by-step for this problem.”
Me: Explain your reasoning step-by-step for this problem. Write a Ruby script to open a CSV file and print out each line
AI: Certainly! Here’s the step-by-step reasoning to write a Ruby script that opens a CSV file and prints out each line:
1. Require the CSV library: Ruby has a built-in CSV library that helps in reading and writing CSV files. You need to require it first.
2. Open the CSV file: Use the CSV.foreach method to open and iterate through each row of the CSV file. This method reads the file line by line, which is memory efficient.
3. Print each line: For each row, you can print it out. Each row is usually an array of fields.
4. Handle the file path: Specify the path to your CSV file.
And then it provided me with a sample script. Now that I have the set of logical steps the AI took to reach its conclusion, I might ask questions such as “How do you know which file path to use in step 4?”, and I can get it to adjust the final output based on feedback on individual steps.
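For reference, a script following those four steps might look like the sketch below. The file name and sample data are invented here so the example can run on its own:

```ruby
require "csv" # Step 1: require the built-in CSV library

# Create a small sample file so this sketch is self-contained
# (in practice you would already have a CSV file on disk)
File.write("sample.csv", "name,rating\nThe Spoon,5\nChez Ruby,4\n")

# Steps 2 and 4: CSV.foreach opens the file at the given path and
# reads it line by line, which is memory efficient
CSV.foreach("sample.csv") do |row|
  # Step 3: each row arrives as an array of fields, e.g. ["name", "rating"]
  puts row.inspect
end
```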
Have AI Help You Learn
AI isn’t just for problem solving! Once you’ve gotten more comfortable with prompt engineering techniques, you might start finding great opportunities for AI to help you learn. Every developer has probably had this experience: “Well, I fixed the bug, but I don’t know why, and all I know is I should never touch this line of code ever again.” That’s exactly the kind of situation where you might turn to an AI and ask “Explain this code to me line-by-line in simple terms.” You can also use a similar technique for broader concepts, like “Explain the difference between a class and an instance in Ruby in simple terms.” And, you can try one of my favorite techniques, which is to see if an explanation I wrote myself is actually correct and understandable: “Verify that my understanding is correct: In Ruby, a class is a definition of some properties and methods that various instances might share, and an instance is when a single object that follows that definition is instantiated.”
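To make the class/instance distinction from that last prompt concrete, here is a minimal Ruby sketch (the Restaurant class is invented purely for illustration):

```ruby
# A class defines the properties and methods its instances share
class Restaurant
  def initialize(name)
    @name = name
  end

  def greeting
    "Welcome to #{@name}!"
  end
end

# An instance is a single object built from that definition
spoon = Restaurant.new("The Spoon")
puts spoon.greeting # => Welcome to The Spoon!
puts spoon.class    # => Restaurant
```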
Of course, we all know that AIs sometimes give wrong answers or “hallucinate”. You shouldn’t trust an AI response without verifying it yourself, nor should you use AI-generated code that you don’t understand. Instead, you can “push” an AI and ask it to explain its answer or explain how its code should work. And, there’s no harm in also Googling the issue and seeing if the answers you find online match the AI’s approach.
As developers, we’re witnessing a transformative moment in how we problem-solve and learn. LLMs can be powerful companions in our coding journey. By using prompt engineering techniques, we can get the most effective responses out of these tools and have them:
- Help us debug complex issues
- Explain difficult concepts
- Provide alternative approaches to solving problems
- Accelerate our learning process and understanding
The key is to approach AI with a critical mind, clear communication, and a willingness to learn. Experiment with different prompting techniques, be specific about your context, and always verify the solutions provided. In doing so, you might discover some amazing uses for AI that no one else has tried before!