Instructions for running saucam/Phind-Codefuse-34B-gguf with supported libraries and local apps.
- Libraries
- llama-cpp-python
How to use saucam/Phind-Codefuse-34B-gguf with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="saucam/Phind-Codefuse-34B-gguf",
    filename="Phind-Codefuse-34B.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
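If the GGUF ships a chat template, llama-cpp-python's chat API can also be used; a minimal sketch (the prompt text is illustrative, and chat behaviour depends on the template embedded in the file):

```python
# Assumes `llm` was created with Llama.from_pretrained as shown above.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a function to print the first n Fibonacci numbers in Python."}
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```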
- Local Apps
- llama.cpp
How to use saucam/Phind-Codefuse-34B-gguf with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf saucam/Phind-Codefuse-34B-gguf

# Run inference directly in the terminal:
llama-cli -hf saucam/Phind-Codefuse-34B-gguf
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf saucam/Phind-Codefuse-34B-gguf

# Run inference directly in the terminal:
llama-cli -hf saucam/Phind-Codefuse-34B-gguf
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf saucam/Phind-Codefuse-34B-gguf

# Run inference directly in the terminal:
./llama-cli -hf saucam/Phind-Codefuse-34B-gguf
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf saucam/Phind-Codefuse-34B-gguf

# Run inference directly in the terminal:
./build/bin/llama-cli -hf saucam/Phind-Codefuse-34B-gguf
```
Use Docker
```sh
docker model run hf.co/saucam/Phind-Codefuse-34B-gguf
```
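Once llama-server is running via any of the options above, it exposes an OpenAI-compatible HTTP API (port 8080 by default). A minimal Python sketch using only the standard library; the prompt text is illustrative:

```python
import json
import urllib.request

# Assumes llama-server is running locally on its default port (8080).
payload = {
    "messages": [
        {"role": "user", "content": "Write a function to print the first n Fibonacci numbers in Python."}
    ],
    "max_tokens": 512,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```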
- vLLM
How to use saucam/Phind-Codefuse-34B-gguf with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "saucam/Phind-Codefuse-34B-gguf"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "saucam/Phind-Codefuse-34B-gguf",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```sh
docker model run hf.co/saucam/Phind-Codefuse-34B-gguf
```
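The same vLLM endpoint can be called from Python with the openai client instead of curl; a minimal sketch, assuming the server started above is listening on localhost:8000:

```python
# pip install openai
from openai import OpenAI

# Assumes the vLLM server from the snippet above is running locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="saucam/Phind-Codefuse-34B-gguf",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```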
- Ollama
How to use saucam/Phind-Codefuse-34B-gguf with Ollama:
```sh
ollama run hf.co/saucam/Phind-Codefuse-34B-gguf
```
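Ollama also exposes a local REST API (port 11434 by default), so the model can be queried programmatically; a minimal sketch, assuming the model has already been pulled with the command above:

```python
import json
import urllib.request

# Assumes the Ollama daemon is running on its default port (11434)
# and the model was pulled with `ollama run` as shown above.
payload = {
    "model": "hf.co/saucam/Phind-Codefuse-34B-gguf",
    "prompt": "Write a function to print the first n Fibonacci numbers in Python.",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```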
- Unsloth Studio
How to use saucam/Phind-Codefuse-34B-gguf with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for saucam/Phind-Codefuse-34B-gguf to start chatting.
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for saucam/Phind-Codefuse-34B-gguf to start chatting.
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required.
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# and search for saucam/Phind-Codefuse-34B-gguf to start chatting.
```
- Docker Model Runner
How to use saucam/Phind-Codefuse-34B-gguf with Docker Model Runner:
```sh
docker model run hf.co/saucam/Phind-Codefuse-34B-gguf
```
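Docker Model Runner can also serve the model over an OpenAI-compatible endpoint for programmatic use. The sketch below assumes host TCP access is enabled on port 12434 and that the API lives under /engines/v1; both are assumptions about your setup, so adjust as needed:

```python
# pip install openai
from openai import OpenAI

# Port 12434 and the /engines/v1 prefix are assumptions; check your
# Docker Model Runner configuration for the actual host endpoint.
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="docker")

resp = client.chat.completions.create(
    model="hf.co/saucam/Phind-Codefuse-34B-gguf",
    messages=[
        {"role": "user", "content": "Write a function to print the first n Fibonacci numbers in Python."}
    ],
)
print(resp.choices[0].message.content)
```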
- Lemonade
How to use saucam/Phind-Codefuse-34B-gguf with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull saucam/Phind-Codefuse-34B-gguf
```
Run and chat with the model
```sh
lemonade run user.Phind-Codefuse-34B-gguf-{{QUANT_TAG}}
```

List all available models

```sh
lemonade list
```
Phind-Codefuse-34B-gguf
Phind-Codefuse-34B-gguf is an 8-bit quantized version of Phind-Codefuse-34B, which is a merge of the following models using LazyMergekit:
Usage
Use llama.cpp directly, or any of the UIs built on top of it.

```sh
./main -m /<path to model>/Phind-Codefuse-34B.gguf -p "Write a function to print first n fibonacci numbers in python\n" -n 400 -e
```

Sample run log:

```
Log start
main: build = 2382 (621e86b3)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed = 1710249100
llama_model_loader: loaded meta data with 22 key-value pairs and 435 tensors from /home/ydatta/Downloads/Phind-Codefuse-34B.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mergekit
llama_model_loader: - kv 2: llama.context_length u32 = 16384
llama_model_loader: - kv 3: llama.embedding_length u32 = 8192
llama_model_loader: - kv 4: llama.block_count u32 = 48
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 22016
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 64
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 7
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - type f32: 97 tensors
llama_model_loader: - type q8_0: 338 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 16384
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 48
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 22016
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 16384
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 34B
llm_load_print_meta: model ftype = Q8_0
llm_load_print_meta: model params = 33.74 B
llm_load_print_meta: model size = 33.39 GiB (8.50 BPW)
llm_load_print_meta: general.name = mergekit
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 2 '</s>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.17 MiB
llm_load_tensors: CPU buffer size = 34194.28 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 96.00 MiB
llama_new_context_with_model: KV self size = 96.00 MiB, K (f16): 48.00 MiB, V (f16): 48.00 MiB
llama_new_context_with_model: CPU input buffer size = 18.01 MiB
llama_new_context_with_model: CPU compute buffer size = 128.00 MiB
llama_new_context_with_model: graph splits (measure): 1
system_info: n_threads = 16 / 32 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
sampling:
repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 512, n_batch = 512, n_predict = 400, n_keep = 1
Write a function to print first n fibonacci numbers in python
```
Here is a simple Python function that prints the first `n` Fibonacci numbers:
```python
def print_fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        print(a)
        a, b = b, a + b

print_fibonacci(10)  # prints first 10 Fibonacci numbers
```
This function starts with a and b as the first two Fibonacci numbers (0 and 1), then it enters a loop that runs n times. In each iteration, it prints the current value of a, then updates a and b to be the next two Fibonacci numbers (b and the sum of a and b). [end of text]
```
llama_print_timings:        load time =    1427.82 ms
llama_print_timings:      sample time =      29.32 ms /   186 runs   (    0.16 ms per token,  6342.71 tokens per second)
llama_print_timings: prompt eval time =    2306.73 ms /    15 tokens (  153.78 ms per token,     6.50 tokens per second)
llama_print_timings:        eval time =  134618.75 ms /   185 runs   (  727.67 ms per token,     1.37 tokens per second)
llama_print_timings:       total time =  137001.23 ms /   200 tokens
Log end
```