---
license: llama2
metrics:
- code_eval
library_name: transformers
tags:
- code
- llama-cpp
- gguf-my-repo
base_model: vanillaOVO/WizardCoder-Python-7B-V1.0
model-index:
- name: WizardCoder-Python-7B-V1.0
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: openai_humaneval
    metrics:
    - type: pass@1
      value: 0.555
      name: pass@1
      verified: false
---

# lainlives/WizardCoder-Python-7B

This repo contains GGUF format model files for [`vanillaOVO/WizardCoder-Python-7B-V1.0`](https://huggingface.co/vanillaOVO/WizardCoder-Python-7B-V1.0).

### Available Quants

The following files were generated and uploaded to this repo:

`Q4_0`, `Q4_K_S`, `Q4_K_M`, `Q5_0`, `Q5_K_S`, `Q5_K_M`, `Q6_K`, `Q8_0`, `f16`, `bf16`

### Use with llama.cpp

CLI:

```bash
llama-cli --hf-repo lainlives/WizardCoder-Python-7B --hf-file WizardCoder-Python-7B-V1.0-Q4_K_M.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo lainlives/WizardCoder-Python-7B --hf-file WizardCoder-Python-7B-V1.0-Q4_K_M.gguf -c 2048
```

### Use with Ollama

```bash
ollama run https://hf.co/lainlives/WizardCoder-Python-7B:Q4_K_M
```
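### Use with llama-cpp-python

The GGUF files can also be loaded from Python. WizardCoder models are instruction-tuned on an Alpaca-style prompt template, so wrapping your request in that template generally works better than a bare completion prompt. A minimal sketch, assuming the `llama-cpp-python` bindings are installed; the helper name `build_prompt` is our own:

```python
def build_prompt(instruction: str) -> str:
    # Alpaca-style instruction template used by WizardCoder models.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:"
    )

if __name__ == "__main__":
    # Downloads the Q4_K_M quant from this repo on first run.
    from llama_cpp import Llama

    llm = Llama.from_pretrained(
        repo_id="lainlives/WizardCoder-Python-7B",
        filename="WizardCoder-Python-7B-V1.0-Q4_K_M.gguf",
        n_ctx=2048,
    )
    out = llm(
        build_prompt("Write a Python function that reverses a string."),
        max_tokens=256,
        stop=["### Instruction:"],
    )
    print(out["choices"][0]["text"])
```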