LLMs | Prompt Engineering
  1. Prompt Engineering
  2. Types of prompts
  3. Prompt chaining

  1. Prompt Engineering
    Prompt engineering is the process of writing and iteratively refining a prompt to guide the model toward generating the preferred output.

    A prompt can consist of multiple parts: instructions, indicators, ..., and data.

    The instructions can be questions, requests, or statements.

    The instructions need to be specific about the task to be executed and should be placed either at the beginning or at the end of the prompt.

    In cases where accuracy is important, we should request that the model answer only when it is sure about the answer.

    The prompt can also state the role of the model, for example by clearly saying that it is an expert in the targeted domain.

    Additional information can be added to the prompt to describe the context of the instruction, the tone of the output, the format of the output, and the targeted audience.
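    As an illustration, such a prompt can be assembled from these parts. The wording below is hypothetical, not taken from the original examples:

    ```python
    # A prompt assembled from the parts described above: role, context,
    # instruction, output format, and target audience (wording is illustrative).
    role = "You are an expert in cloud computing."
    context = "Context: our team is migrating a monolithic application to Kubernetes."
    instruction = "Instruction: list the three main risks of this migration."
    output_format = "Format: a numbered list, one sentence per item."
    audience = "Audience: software engineers new to Kubernetes."

    prompt = "\n".join([role, context, instruction, output_format, audience])
    print(prompt)
    ```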

    We can also provide the model with sample prompts along with the expected output for each prompt:
    • Zero-shot prompting: no example is provided.
    • One-shot prompting: a single example is provided.
    • Few-shot prompting: more than one example is provided.
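    As a sketch, a few-shot prompt can be built by prepending example input/output pairs to the actual query (the reviews and labels below are hypothetical):

    ```python
    # Few-shot prompting: prepend example input/output pairs to the actual query
    # so the model can infer the expected task and output format.
    examples = [
        ("I love this product!", "positive"),
        ("The delivery was late and the box was damaged.", "negative"),
    ]
    query = "The manual is clear and well organized."

    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for review, label in examples:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes this line
    prompt = "\n".join(lines)
    print(prompt)
    ```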

    We can also split a complex prompt into smaller prompts and use them to make multiple calls to the model (prompt chaining), possibly calling different models. The output from one call is added to the next prompt.
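    The idea can be sketched with a stand-in for the model call (fake_llm below is a placeholder, not a real model or API):

    ```python
    # Prompt chaining: the output of one call becomes part of the next prompt.
    # fake_llm is a stand-in for a real model call (e.g., an API or a pipeline).
    def fake_llm(prompt: str) -> str:
        # Placeholder: a real implementation would send the prompt to a model.
        return f"<answer to: {prompt!r}>"

    # Step 1: extract the key facts from a document.
    facts = fake_llm("Extract the key facts from this report: ...")

    # Step 2: feed the first output into the next prompt.
    summary = fake_llm(f"Write a one-paragraph summary of these facts: {facts}")
    print(summary)
    ```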

    If we set the "return_full_text" parameter to "True", the result includes the full chat text: the prompt template as created by the pipeline from the prompt, followed by the generated output.
    Note the special tokens: <|system|>, <|user|>, <|assistant|>

    • System: guidelines for the model

    • User: user input

    • Assistant: generated output

    • An end-of-text token marks the end of the generated text.
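    How these tokens are assembled can be illustrated in plain Python. The layout below follows the Zephyr/TinyLlama-style chat format; the exact template depends on the model, and this function is a simplified stand-in for the pipeline's own chat-template handling:

    ```python
    # Build a chat prompt using Zephyr/TinyLlama-style special tokens
    # (<|system|>, <|user|>, <|assistant|>); the exact layout is model-specific.
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is prompt engineering?"},
    ]

    def apply_chat_template(messages):
        parts = [f"<|{m['role']}|>\n{m['content']}</s>" for m in messages]
        # The trailing <|assistant|> tag asks the model to generate its answer.
        return "\n".join(parts) + "\n<|assistant|>\n"

    print(apply_chat_template(messages))
    ```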
  2. Types of prompts
    • Basic prompt:
      The prompt can be a simple question or sentence with no guidelines given to the model.
      The model will try to answer the question or complete the sentence.


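      For illustration, a basic prompt is just a bare question or an unfinished sentence (both examples below are hypothetical):

      ```python
      # A basic prompt: a plain question or an unfinished sentence, with no
      # instructions, role, or examples; the model answers or completes it.
      question_prompt = "What is the capital of Canada?"
      completion_prompt = "The three primary colors are"

      for p in (question_prompt, completion_prompt):
          print(p)
      ```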

    • Instruction prompt:
      An instruction prompt has two parts: the instruction and the data.


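      A sketch of such a prompt, with hypothetical instruction and data:

      ```python
      # An instruction prompt has two parts: the instruction and the data
      # it applies to.
      instruction = "Translate the following sentence to French."
      data = "The weather is nice today."

      prompt = f"{instruction}\n\n{data}"
      print(prompt)
      ```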

    • Instruction prompt with indicators:
      The indicators guide the LLM toward the desired output.


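      For illustration, indicators can constrain the tone and length of the answer and mark where it should start (all wording below is hypothetical):

      ```python
      # Indicators guide the model toward the desired output: here, the tone,
      # the length, and a label marking where the answer should begin.
      instruction = "Summarize the following text."
      data = "Prompt engineering is the practice of crafting inputs for LLMs..."
      indicators = "Tone: neutral.\nLength: one sentence.\nSummary:"

      prompt = f"{instruction}\n\nText: {data}\n\n{indicators}"
      print(prompt)
      ```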

    • One-shot prompting:
      An example is provided to the model to show the expected output.


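      A sketch of a one-shot prompt, using a hypothetical sentiment-classification example:

      ```python
      # One-shot prompting: a single worked example shows the model the
      # expected output format before the actual query.
      example = "Text: The service was excellent.\nSentiment: positive"
      query = "Text: I waited two hours and nobody answered.\nSentiment:"

      prompt = f"Classify the sentiment of the text.\n\n{example}\n\n{query}"
      print(prompt)
      ```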
  3. Prompt chaining
    First we need to download the model:

    • Let's start with a simple prompt:


    • Now let's create a simple chain between the prompt and the template:
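      The "prompt | model" style of chaining (as found in libraries such as LangChain) can be sketched with a minimal pipeable class. Everything below is a simplified stand-in, not the library's actual API:

      ```python
      # A minimal sketch of pipe-style chaining: each step is a callable,
      # and `a | b` builds a new step that feeds a's output into b.
      class Runnable:
          def __init__(self, fn):
              self.fn = fn

          def __call__(self, value):
              return self.fn(value)

          def __or__(self, other):
              return Runnable(lambda value: other(self(value)))

      # A prompt-template step and a fake model step (placeholder for a real LLM).
      template = Runnable(lambda topic: f"Explain {topic} in one sentence.")
      fake_model = Runnable(lambda prompt: f"[model answer to: {prompt}]")

      chain = template | fake_model
      print(chain("prompt chaining"))
      ```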


    • Let's chain some prompts:

© 2025  mtitek