🤗 Models on Hugging Face  | Blog  | Website  | Get Started 


Llama Models

Llama is an accessible, open large language model (LLM) designed for developers, researchers, and businesses to build, experiment, and responsibly scale their generative AI ideas. Part of a foundational system, it serves as a bedrock for innovation in the global community. A few key aspects:

  1. Open access: Easy access to cutting-edge large language models, fostering collaboration and advancement among developers, researchers, and organizations.
  2. Broad ecosystem: Llama models have been downloaded hundreds of millions of times, thousands of community projects are built on Llama, and platform support is broad, from cloud providers to startups. The world is building with Llama!
  3. Trust & safety: Llama models are part of a comprehensive approach to trust and safety. We release models and tools designed to enable community collaboration and to encourage standardization in the development and use of trust and safety tools for generative AI.

Our mission is to empower individuals and industry through this opportunity while fostering an environment of discovery and ethical AI advancements. The model weights are licensed for researchers and commercial entities, upholding the principles of openness.

Llama Models

| Model     | Launch date | Model sizes    | Context Length | Tokenizer       | Acceptable use policy | License | Model Card |
|-----------|-------------|----------------|----------------|-----------------|-----------------------|---------|------------|
| Llama 2   | 7/18/2023   | 7B, 13B, 70B   | 4K             | Sentencepiece   | Use Policy            | License | Model Card |
| Llama 3   | 4/18/2024   | 8B, 70B        | 8K             | TikToken-based  | Use Policy            | License | Model Card |
| Llama 3.1 | 7/23/2024   | 8B, 70B, 405B  | 128K           | TikToken-based  | Use Policy            | License | Model Card |

Download

To download the model weights and tokenizer, please visit the Meta Llama website and accept our License.

Once your request is approved, you will receive a signed URL over email. Then run the download.sh script and pass the provided URL when prompted to start the download.

Prerequisites: Ensure you have wget and md5sum installed. Then run the script: ./download.sh.

Remember that the links expire after 24 hours and a certain number of downloads. You can always re-request a link if you start seeing errors such as 403: Forbidden.

Access to Hugging Face

We also provide downloads on Hugging Face, in both transformers and native llama3 formats. To download the weights from Hugging Face, please follow these steps:

  • Visit one of the repos, for example meta-llama/Meta-Llama-3.1-8B-Instruct.
  • Read and accept the license. Once your request is approved, you'll be granted access to all Llama 3.1 models as well as previous versions. Note that requests used to take up to one hour to get processed.
  • To download the original native weights to use with this repo, click on the "Files and versions" tab and download the contents of the original folder. You can also download them from the command line if you pip install huggingface-hub (a Python alternative is sketched after the note below):
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir meta-llama/Meta-Llama-3.1-8B-Instruct

NOTE: The original native weights of meta-llama/Meta-Llama-3.1-405B are not available through this Hugging Face repo.
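
If you prefer to stay in Python, a roughly equivalent sketch uses the huggingface_hub library (the local directory path below is just an example):

    from huggingface_hub import snapshot_download

    # Download only the contents of the "original/" folder (native weights)
    # into a local directory; the directory name here is illustrative.
    snapshot_download(
        repo_id="meta-llama/Meta-Llama-3.1-8B-Instruct",
        allow_patterns="original/*",
        local_dir="meta-llama/Meta-Llama-3.1-8B-Instruct",
    )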

  • To use with transformers, the following pipeline snippet will download and cache the weights:

    import transformers
    import torch

    model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

    # Instantiating the pipeline downloads and caches the weights on first use.
    pipeline = transformers.pipeline(
        "text-generation",
        model=model_id,
        model_kwargs={"torch_dtype": torch.bfloat16},
        device="cuda",
    )
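
As a quick usage sketch (the prompt and generation settings below are illustrative, not prescribed by this repo; chat-style input requires a reasonably recent transformers release), the pipeline can then be called with a list of messages:

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who are you?"},
    ]
    # max_new_tokens is only an example value.
    outputs = pipeline(messages, max_new_tokens=128)
    # The last entry of generated_text is the assistant's reply.
    print(outputs[0]["generated_text"][-1])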

Installations

You can install this repository as a package by running pip install llama-models.
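
A minimal post-install sanity check, assuming the installed distribution exposes the import name llama_models (check the package metadata if it differs):

    # Confirm the package is importable and locate the installed files.
    # The import name "llama_models" is an assumption based on the pip package name.
    import llama_models
    print(llama_models.__file__)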

Responsible Use

Llama models are a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. To help developers address these risks, we have created the Responsible Use Guide.

Issues

Please report any software “bug” or other problems with the models through one of the following means:

Questions

For common questions, see the FAQ, which will be updated over time as new questions arise.
