Breaking Down Problems for LLMs

We all have some problem we are trying to solve. If you can express that problem as text (or, increasingly, as PDFs, images, video, and so on), then you might be able to use AI for some or all of the solution.

Let’s focus on text for now. In this case, we have Large Language Models (LLMs), like ChatGPT, at our disposal.

Now, simply feeding your entire problem into the LLM may not work well, or may not work at all. I talk about this at length in a recent YouTube video, focusing on a feature Amazon recently released.

Simply put, LLMs are fantastic at these sorts of tasks:

  • Summarize this
  • Answer this question based on this context
  • Sentiment analysis
  • Translate this
  • Generate 10 ideas for this
  • Write code to complete this
  • See if there are any errors in this text
  • Etc.

Thus, if your original problem can be made easier by first solving one of these sub-problems, then an LLM might be a good tool for the job.
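
For example, here is a minimal sketch of wrapping one of those sub-problems, summarization, as a single call to an LLM API. It uses the OpenAI Python client and assumes an OPENAI_API_KEY environment variable is set; the model name and prompt wording are only illustrative choices of mine, not a prescription.

```python
# Minimal sketch: framing one sub-problem (summarization) as a single LLM call.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name is only an example.
from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    """Ask the model to summarize `text` in a few sentences."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; swap in whatever you have access to
        messages=[
            {"role": "system", "content": "You are a concise technical summarizer."},
            {"role": "user", "content": f"Summarize the following text in 3 sentences:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize("Large Language Models map a text prompt to a text completion..."))
```

The same pattern covers the other tasks in the list: swap in a different instruction and, where needed, paste the relevant context into the prompt.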

How you prompt the LLM makes a huge difference in the results. See another recent video of mine where I discuss some common tricks.
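
As a rough illustration, the sketch below assembles a prompt using a few of those common tricks: give the model a role, delimit the input clearly, show a couple of worked examples (few-shot), and pin down the output format. The sentiment-analysis task and the exact prompt wording are just assumptions for the example.

```python
# Sketch of a few common prompting tricks: role, clear delimiters,
# few-shot examples, and a constrained output format.

FEW_SHOT_EXAMPLES = [
    ("The checkout flow was painless and fast.", "positive"),
    ("Support never answered my ticket.", "negative"),
]

def build_sentiment_prompt(review: str) -> str:
    """Assemble a few-shot sentiment-analysis prompt as a single string."""
    lines = [
        "You are a sentiment classifier.",
        "Label each review with exactly one word: positive, negative, or neutral.",
        "",
    ]
    for example_text, label in FEW_SHOT_EXAMPLES:
        lines.append(f'Review: """{example_text}"""')
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f'Review: """{review}"""')
    lines.append("Label:")
    return "\n".join(lines)

print(build_sentiment_prompt("The battery died after two days."))
```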

Once we have a sub-problem in mind, it is sometimes the case that a vanilla LLM is not enough for the job. We can soup it up with additions like retrieval-augmented generation (RAG), fine-tuning, or tool use.
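
As one example of souping an LLM up, here is a bare-bones retrieval-augmented generation sketch: score the available document chunks against the question, keep the best matches, and stuff them into the prompt as context. Real systems usually do the retrieval with embeddings and a vector store; the keyword-overlap scoring below is only to keep the example self-contained, and the documents are made up.

```python
# Minimal RAG sketch: retrieve the most relevant chunks, then build a prompt
# that answers the question using only that retrieved context.

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(chunks, key=lambda c: len(q_words & set(c.lower().split())), reverse=True)
    return scored[:k]

def build_rag_prompt(question: str, chunks: list[str]) -> str:
    """Combine the retrieved context and the question into one prompt."""
    context = "\n\n".join(retrieve(question, chunks))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

docs = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Shipping is free on orders over $50.",
    "Gift cards cannot be refunded or exchanged.",
]
print(build_rag_prompt("Can I get a refund on a gift card?", docs))
```

The resulting prompt can then be sent to the LLM exactly like the earlier summarization call.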

Thus we have a lot of tricks up our sleeves to make use of LLMs in practice.
