I solved it with a CPU-only installation, by installing https://github.com/krychu/llama instead of https://github.com/facebookresearch/llama
Complete installation process:
- download the original version of Llama from https://github.com/facebookresearch/llama and extract it to a `llama-main` folder
- download the CPU version from https://github.com/krychu/llama, extract it, and replace the files in the `llama-main` folder with it
- run the `download.sh` script in a terminal, passing the URL provided when prompted, to start the download
- go to the `llama-main` folder
- create a Python 3 env with `python3 -m venv env` and activate it with `source env/bin/activate`
- install the CPU version of PyTorch: `python3 -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu` # for the CPU version
- install the llama dependencies: `python3 -m pip install -e .`
- if you have downloaded llama-2-7b, run:
torchrun --nproc_per_node 1 example_text_completion.py \
--ckpt_dir llama-2-7b/ \
--tokenizer_path tokenizer.model \
--max_seq_len 128 --max_batch_size 1 #(instead of 4)
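The list above can be condensed into one copy-pasteable sequence. This is only a sketch: the script below just *prints* the commands rather than executing them (the PyTorch CPU wheels are a large download), so review them and run them by hand from inside the `llama-main` folder. The checkpoint folder name `llama-2-7b/` matches the example above; adjust it if you downloaded a different model.

```shell
# Prints the consolidated install-and-run steps from the list above.
# Assumes you are already inside the merged llama-main folder and have
# run download.sh to fetch the weights.
print_llama_cpu_steps() {
    cat <<'EOF'
python3 -m venv env
. env/bin/activate
python3 -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
python3 -m pip install -e .
torchrun --nproc_per_node 1 example_text_completion.py \
    --ckpt_dir llama-2-7b/ \
    --tokenizer_path tokenizer.model \
    --max_seq_len 128 --max_batch_size 1
EOF
}

print_llama_cpu_steps
```

Note that `--max_batch_size 1` (instead of the default 4 in the example script) keeps memory use down, which matters on CPU.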