Instructions for using saucam/Nereus-7B with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use saucam/Nereus-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="saucam/Nereus-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("saucam/Nereus-7B")
model = AutoModelForCausalLM.from_pretrained("saucam/Nereus-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use saucam/Nereus-7B with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "saucam/Nereus-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "saucam/Nereus-7B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
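Because both vLLM and SGLang expose an OpenAI-compatible API, the official `openai` Python client can replace the curl call above. A minimal sketch, assuming the vLLM server from the previous step is running on localhost:8000 (for the SGLang server below, point `base_url` at port 30000 instead):

```python
# Minimal sketch: query the local OpenAI-compatible server.
# Assumes `pip install openai` and a vLLM server on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="saucam/Nereus-7B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```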
- SGLang
How to use saucam/Nereus-7B with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "saucam/Nereus-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "saucam/Nereus-7B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images
```bash
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "saucam/Nereus-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "saucam/Nereus-7B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use saucam/Nereus-7B with Docker Model Runner:
```bash
docker model run hf.co/saucam/Nereus-7B
```
🌊 Nereus-7B
Nereus-7B excels at conversation, coding, and tasks that require structured JSON output. It is a merge of the following models using mergekit:

- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- NousResearch/Hermes-2-Pro-Mistral-7B
🧩 Configuration
```yaml
models:
  - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
    parameters:
      density: 0.5
      weight: 0.4
  # No parameters necessary for base model
  - model: NousResearch/Hermes-2-Pro-Mistral-7B
    parameters:
      density: 0.5
      weight: 0.6
merge_method: dare_ties
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
parameters:
  int8_mask: true
dtype: bfloat16
```
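In this dare_ties recipe, `density` is the fraction of each model's delta weights retained after sparsification, and `weight` scales its contribution, so Hermes-2-Pro contributes slightly more (0.6) than Dolphin (0.4). To reproduce the merge, a minimal sketch assuming mergekit is installed from pip and the configuration above is saved as `config.yaml` (the output directory name is illustrative):

```bash
# Assumption: mergekit installed via `pip install mergekit`,
# with the YAML configuration above saved as config.yaml.
mergekit-yaml config.yaml ./Nereus-7B --copy-tokenizer
```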
Eval Results
| Benchmark suite | Model | AGIEval | GPT4All | BigBench | TruthfulQA | Average |
|---|---|---|---|---|---|---|
| Nous | Nereus-7B | 42.8 | 72.21 | 39.17 | 54.32 | 52.12 |

| Benchmark suite | Model | Winogrande | ARC | GSM8K | MMLU | TruthfulQA | HellaSwag | Average |
|---|---|---|---|---|---|---|---|---|
| Open LLM | Nereus-7B | 76.95 | 62.54 | 46.25 | 59.6 | 54.32 | 83.23 | 63.82 |
For detailed results, see here.
💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "saucam/Nereus-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Sample responses
```
What is a large language model?<|im_end|>
<|im_start|>assistant
A large language model is a type of artificial intelligence (AI) system that has been trained on vast amounts of text data to understand, generate, and predict patterns in human language. It is designed to process and analyze natural language input, making it capable of tasks such as text generation, translation, language translation, and text classification. These models are typically based on deep learning techniques, particularly neural networks, and are trained on large datasets, often consisting of billions of words. Some well-known large language models include GPT-3 by OpenAI, BERT by Google, and T5 by Google. These models can be fine-tuned for specific tasks or domains to improve their performance. They have revolutionized the field of natural language processing and have numerous applications in areas such as chatbots, search engines, and automated writing assistance.
```
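Since the card highlights structured JSON output, here is a hedged illustration that reuses `tokenizer` and `pipeline` from the usage snippet above; the prompt and JSON keys are made up for the example:

```python
# Illustrative only: prompt Nereus-7B for structured JSON output,
# reusing `tokenizer` and `pipeline` from the usage example above.
messages = [
    {
        "role": "user",
        "content": "List three oceans as a JSON array of objects with "
                   "'name' and 'area_km2' keys. Respond with JSON only.",
    }
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=128, do_sample=False)
print(outputs[0]["generated_text"])
```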