June 6, 2024
Llama 2 via Ollama
Install Ollama, then pull and run the model:

curl -fsSL https://ollama.com/install.sh | sh
ollama run llama2-uncensored:70b
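`ollama run` drops you into an interactive REPL; you can also pass the prompt as an argument for one-shot use. A minimal sketch, assuming the model pulled above (the prompt text is just an example):

```shell
# Non-interactive usage: pass the prompt as a CLI argument instead of
# entering the REPL. Model name matches the pull above; prompt is an example.
MODEL="llama2-uncensored:70b"
PROMPT="Why is the sky blue?"

if command -v ollama >/dev/null 2>&1; then
  ollama run "$MODEL" "$PROMPT"
else
  echo "ollama CLI not found; install it first"
fi
```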
To run with GPU support inside Docker instead:

1. Install the NVIDIA Container Toolkit.
2. Run Ollama inside a Docker container:
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run llama2-uncensored:70b
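Both the native install and the container above serve an HTTP API on port 11434 (which is why the `docker run` publishes it). A hedged sketch of querying the generate endpoint, assuming the model has already been pulled; the prompt text is just an example:

```shell
# Query the Ollama HTTP API on port 11434 (exposed by -p 11434:11434 above).
# "stream": false returns one JSON object instead of a token stream.
PAYLOAD='{"model": "llama2-uncensored:70b", "prompt": "Why is the sky blue?", "stream": false}'

# Only send the request if an Ollama server is actually listening.
if curl -s -o /dev/null --max-time 2 http://localhost:11434; then
  curl -s http://localhost:11434/api/generate -d "$PAYLOAD"
else
  echo "Ollama server not reachable on localhost:11434"
fi
```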