LLMs | Installation
  1. Install Hugging Face Transformers
  2. Install Hugging Face Command Line Interface (CLI)
  3. Test Hugging Face Transformers
  4. Install LangChain
  5. Install llama-cpp-python
  6. Test LangChain/llama-cpp-python

  1. Install Hugging Face Transformers
    See this page for more details: https://huggingface.co/docs/transformers/en/installation

    Make sure that both Python and pip are installed.

    Create a directory to use for your tests: /home/mtitek/dev/llm
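    The page does not show the commands for this step; on Linux they would look like the following (using the path given above):

    ```shell
    # create the test directory and move into it
    mkdir -p /home/mtitek/dev/llm
    cd /home/mtitek/dev/llm
    ```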

    Create and activate a virtual environment in your project directory with venv:
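    The commands themselves are not shown on the page; a typical venv setup (assuming Python 3 and a bash/zsh shell, with `.venv` as the environment name) is:

    ```shell
    # create a virtual environment named .venv in the project directory
    python3 -m venv .venv
    # activate it (bash/zsh syntax)
    source .venv/bin/activate
    ```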

    Install Transformers with pip in your newly created virtual environment:
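    The install command is not shown here; the standard pip invocation is:

    ```shell
    # install the latest released version of Transformers
    pip install -U transformers
    ```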

    To install a CPU-only version of Transformers together with the PyTorch machine learning framework, run the following command:
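    The exact command is missing from the page; one common way to do this (assuming the official PyTorch CPU wheel index) is to install the CPU-only PyTorch wheels first, then Transformers:

    ```shell
    # install CPU-only PyTorch wheels from the dedicated index
    pip install torch --index-url https://download.pytorch.org/whl/cpu
    # then install Transformers from PyPI
    pip install -U transformers
    ```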
    To test if the installation was successful, run the following command. It should return a label and a score for the provided text:
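    The test command is not shown; a likely one-liner (note that the default sentiment-analysis model is downloaded on first use) is:

    ```shell
    # runs a sentiment-analysis pipeline on a sample sentence and prints a label and a score
    python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love using Transformers!'))"
    ```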
    To deactivate the virtual environment in your project directory and clean it up:
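    The cleanup commands are not shown; assuming the environment lives in `.venv` as above, they would be:

    ```shell
    # leave the virtual environment
    deactivate
    # remove the environment directory to clean up the project
    rm -rf .venv
    ```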
  2. Install Hugging Face Command Line Interface (CLI)
    See this page for more details: https://huggingface.co/docs/huggingface_hub/main/en/guides/cli

    To install the Hugging Face CLI package:
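    The command is not shown on the page; the CLI ships with the `huggingface_hub` package and is installed with the `cli` extra:

    ```shell
    # install the Hugging Face Hub client library together with its command-line interface
    pip install -U "huggingface_hub[cli]"
    ```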
    You can use the CLI to download models:
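    The page does not show which model is downloaded; as an illustration (the model id here is an example, not necessarily the one the author used):

    ```shell
    # download a model repository into the local Hugging Face cache
    huggingface-cli download TinyLlama/TinyLlama-1.1B-Chat-v1.0
    ```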
  3. Test Hugging Face Transformers
    Example:
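    The example code is missing from the page; a minimal sketch of a Transformers test, assuming the sentiment-analysis pipeline installed above (the input sentence is illustrative):

    ```python
    from transformers import pipeline

    # build a sentiment-analysis pipeline; the default model is downloaded on first use
    classifier = pipeline("sentiment-analysis")

    # classify a sample sentence; the result is a list with one dict holding a label and a score
    result = classifier("I love using Hugging Face Transformers!")
    print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
    ```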

    Output:
  4. Install LangChain
    See this page for more details: https://python.langchain.com/docs/how_to/installation/

    To install the main langchain package:
    To install the langchain core package:
    To install the langchain community package:
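    The three commands above are not shown on the page; the corresponding pip invocations are:

    ```shell
    # main langchain package
    pip install -U langchain
    # core abstractions used by the rest of the ecosystem
    pip install -U langchain-core
    # community-maintained integrations (includes the LlamaCpp wrapper used below)
    pip install -U langchain-community
    ```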
  5. Install llama-cpp-python
    See these pages for more details:
    https://pypi.org/project/llama-cpp-python/
    https://python.langchain.com/docs/integrations/llms/llamacpp/

    To install the llama-cpp-python package:
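    The command is not shown here; the package builds llama.cpp from source on install, so a C/C++ compiler must be available:

    ```shell
    # install the Python bindings for llama.cpp (compiles native code during install)
    pip install llama-cpp-python
    ```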
  6. Test LangChain/llama-cpp-python
    Download this model:
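    The model name is not shown on the page; as an illustration only (this GGUF model and file name are examples, not necessarily what the author used), a download with the Hugging Face CLI looks like:

    ```shell
    # download a single quantized GGUF file from a model repository into ./models
    huggingface-cli download TheBloke/Llama-2-7B-Chat-GGUF llama-2-7b-chat.Q4_K_M.gguf --local-dir ./models
    ```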

    Python code:
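    The code itself is missing from the page; a minimal sketch using LangChain's community `LlamaCpp` wrapper, assuming a GGUF model file downloaded in the previous step (the model path and prompt are illustrative):

    ```python
    from langchain_community.llms import LlamaCpp

    # load a local GGUF model; adjust model_path to the file you downloaded
    llm = LlamaCpp(
        model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",
        n_ctx=2048,       # context window size in tokens
        temperature=0.7,  # sampling temperature
        verbose=False,
    )

    # run a single completion and print the generated text
    print(llm.invoke("Name the capital of France."))
    ```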

    Output:
© 2025  mtitek