LLMs | Prompt Engineering
  1. Introduction to Prompt Engineering
  2. Prompt Types and Techniques
  3. Example: Basic Prompts
  4. Example: Instruction Prompts
  5. Example: Prompt Chaining
  6. Example: Prompt Chaining - Multiple Chains

  1. Introduction to Prompt Engineering
    Prompt engineering is the systematic process of designing, refining, and optimizing input prompts to guide large language models (LLMs) toward generating desired outputs. It requires an understanding of both the model's capabilities and the requirements of the specific task.

    Good prompts significantly improve output quality and relevance, ensure consistent results across similar tasks, and provide better control over model behavior and output format.

    Effective prompts consist of several key components that work together to guide the model:

    • Role Definition: Establish the model's role or persona.

    • Task Instructions: Provide specific, unambiguous instructions.

    • Context and Background: Supply relevant background information.

    • Input Data: Present the data that needs to be processed.

    • Output Specifications: Define the desired format, length, tone, and structure of the response.

    • Constraints and Guidelines: Set boundaries and specify any limitations or requirements.

    • Examples (Optional): Provide sample inputs and outputs to demonstrate expected behavior.
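
    Putting these components together, a full prompt might look like the following sketch (the review text and the requirements are invented for illustration; the labels on the right mark which component each line plays):

```
You are an experienced customer-support analyst.                  (role)
Summarize the customer review below in one sentence,
then classify its sentiment.                                      (task)
The review was left on our online store's feedback page.          (context)
Review: "The delivery was fast, but the box arrived damaged."     (input data)
Respond in JSON with the keys "summary" and "sentiment".          (output format)
Use only "positive", "negative", or "mixed" for the sentiment.    (constraints)
```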

    Instructions can take the form of questions, requests, or statements. They should be clearly defined and specific about the task to be performed. Instructions can be placed at either the beginning or end of the prompt.

    When accuracy is critical, it's best to instruct the model to respond only if it is confident in its answer.

    For complex tasks, consider breaking the prompt into smaller, simpler prompts and using them across multiple model calls—a technique known as chain prompting. The output from one model can be used as input for the next, potentially involving different models at each step.
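
    A minimal sketch of the idea in plain Python, with placeholder functions standing in for real model calls:

```python
# Sketch of chain prompting: each step's output becomes the next step's
# input. The two functions below are placeholders standing in for real
# model calls (possibly to different models).

def summarize(text: str) -> str:
    # Placeholder "model": pretend this calls an LLM with a summarize prompt.
    return text.split(".")[0] + "."

def translate(text: str) -> str:
    # Placeholder "model": pretend this calls a (possibly different) LLM.
    return f"[FR] {text}"

article = "Prompt chaining splits a task into steps. Each step is simpler."
summary = summarize(article)   # first model call
result = translate(summary)    # second call, fed the first call's output
print(result)                  # [FR] Prompt chaining splits a task into steps.
```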
  2. Prompt Types and Techniques
    • Basic prompts:
      The prompt can be a simple question or sentence with no guidelines given to the model. The model will try to answer the question or complete the sentence.


      Input/Output:

    • Instruction prompts:
      Structured prompts with clear role definition and specific instructions. An instruction prompt has two parts: the instruction and the data.

      Template:
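
      The original template is not reproduced here; a generic sketch of the two-part structure might look like:

```
Summarize the text below in one sentence.   <- instruction

Text: {text to process}                     <- data
```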


      Input/Output:

    • Instruction prompts with indicators:
      Indicators are cues added to the prompt, such as an output label, that guide the LLM toward the desired output format.


      Output:
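
      The original example is not preserved here; a typical prompt ending with an output indicator might look like this sketch (the review is invented), where the trailing "Sentiment:" label nudges the model to answer with just the label's value:

```
Classify the sentiment of the following review as positive or negative.

Review: The battery died after two days.
Sentiment:
```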

    • Few-Shot Prompting:
      Providing examples to demonstrate the desired input-output pattern.

      Structure:
      • Zero-shot prompting: no examples provided.
      • One-shot prompting: single example provided.
      • Few-shot prompting: multiple examples provided.


      Input/Output:
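
      A sketch of a few-shot prompt (the sentences are invented): two worked examples establish the input-output pattern, and the model is expected to complete the final "City:" line.

```
Extract the city from each sentence.

Sentence: She moved to Paris last spring.
City: Paris

Sentence: The conference was held in Tokyo.
City: Tokyo

Sentence: He grew up in a small town near Lisbon.
City:
```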

    • Chain-of-Thought Prompting:
      Encouraging the model to show its reasoning process step-by-step.

      Example:
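
      A sketch of a chain-of-thought prompt (the word problems are invented): a worked example demonstrates step-by-step reasoning, and the trailing "Let's think step by step." invites the model to reason the same way before answering.

```
Q: A bakery made 24 muffins. It sold 9 in the morning and 7 in the
afternoon. How many muffins are left?
A: Let's think step by step. The bakery sold 9 + 7 = 16 muffins in
total. It started with 24, so 24 - 16 = 8 muffins are left. The
answer is 8.

Q: A library has 50 books. It lends out 12 and gets 5 back.
How many books are on the shelves now?
A: Let's think step by step.
```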

    • Prompt Chaining:
      Breaking complex tasks into smaller, sequential prompts where each output feeds into the next prompt.
  3. Example: Basic Prompts
    Let's work through a simple basic prompt.

    Let's download the model:

    Python code:
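
    The original script is not reproduced here; a minimal sketch using the Hugging Face transformers pipeline might look like the following. The model name TinyLlama/TinyLlama-1.1B-Chat-v1.0 is an assumed stand-in, not necessarily the model the page used; running the script downloads it.

```python
# Sketch of a basic prompt: a bare question, with no role, no
# instructions, and no output format specified.

prompt = "What is the capital of France?"

def main():
    # Imported here so the prompt above can be inspected without the
    # heavy dependency installed.
    from transformers import pipeline

    # The model name is an assumption; any small chat model works.
    generator = pipeline(
        "text-generation",
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    )
    result = generator(prompt, max_new_tokens=50)
    print(result[0]["generated_text"])

if __name__ == "__main__":
    main()
```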

    Run the Python script:

    Output:
  4. Example: Instruction Prompts
    Let's work through a simple instruction prompt.

    Python code:
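
    A sketch of an instruction prompt with the two parts (instruction and data) made explicit; the model name is again an assumption:

```python
# Sketch of an instruction prompt: the instruction and the data are
# kept separate, then combined into a chat message.

instruction = "Summarize the following text in one sentence."
data = (
    "Prompt engineering is the practice of designing inputs that "
    "guide a language model toward the desired output."
)

messages = [
    {"role": "user", "content": f"{instruction}\n\nText: {data}"},
]

def main():
    # Imported here so the messages above can be inspected without the
    # heavy dependency installed.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    )
    # return_full_text=False returns only the generated answer;
    # set it to True to also see the prompt in the output.
    result = generator(messages, max_new_tokens=60, return_full_text=False)
    print(result[0]["generated_text"])

if __name__ == "__main__":
    main()
```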

    Run the Python script:

    Output:

    If we set the return_full_text parameter to True, the output includes the full chat text:

    If you uncomment these two lines in the code above, you will see the prompt template as created by the pipeline from the prompt:

    Output:

    Note the special tokens: <|system|>, <|user|>, <|assistant|>

    • System: provides guidelines for the model

    • User: provides the user input

    • Assistant: gives the generated output

    • End of text: a special token that marks the end of the generated response.
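
    The exact tokens depend on the model's chat template; for a Zephyr-style model the rendered prompt looks roughly like this sketch (the system and user text are invented, and </s> is the end-of-text token):

```
<|system|>
You are a helpful assistant.</s>
<|user|>
Summarize the following text in one sentence. ...</s>
<|assistant|>
```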
  5. Example: Prompt Chaining
    Let's use LangChain to create a simple chain between a prompt and a model.

    Python code:

    Run the Python script:

    Output:

  6. Example: Prompt Chaining - Multiple Chains
    Let's use LangChain to chain the execution of two prompts.

    Python code:

    Run the Python script:

    Output:
© 2025  mtitek