Instructions for using CognitiveScience/CogMod3 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use CognitiveScience/CogMod3 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="CognitiveScience/CogMod3")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("CognitiveScience/CogMod3")
model = AutoModelForCausalLM.from_pretrained("CognitiveScience/CogMod3")
```
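Once the pipeline is built, generation is a single call. A minimal usage sketch; the prompt and sampling settings here are illustrative choices, not defaults documented for this model:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="CognitiveScience/CogMod3")

# Illustrative prompt and generation settings (assumptions, not defaults).
result = pipe("Once upon a time,", max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```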
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use CognitiveScience/CogMod3 with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "CognitiveScience/CogMod3"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CognitiveScience/CogMod3",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
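Because the vLLM server exposes an OpenAI-compatible API, the same request can be made from Python with the `openai` client. A sketch assuming the server above is running on localhost:8000; the API key is a placeholder, since vLLM does not require one by default:

```python
from openai import OpenAI

# Point the client at the local vLLM server; "EMPTY" is a placeholder key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="CognitiveScience/CogMod3",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```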
Use Docker

```shell
docker model run hf.co/CognitiveScience/CogMod3
```
- SGLang
How to use CognitiveScience/CogMod3 with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "CognitiveScience/CogMod3" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CognitiveScience/CogMod3",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
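SGLang serves the same OpenAI-compatible endpoint, so the curl call above translates directly to Python. A sketch using `requests` against the server started above on port 30000:

```python
import requests

# Same request as the curl example, sent to the local SGLang server.
resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "CognitiveScience/CogMod3",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```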
Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "CognitiveScience/CogMod3" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CognitiveScience/CogMod3",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use CognitiveScience/CogMod3 with Docker Model Runner:
```shell
docker model run hf.co/CognitiveScience/CogMod3
```
`config.json` (1,806 bytes):

```json
{
  "_name_or_path": "gpt2",
  "activation_function": "leaky_relu",
  "architectures": [
    "GPT2LMHeadModel"
  ],
  "attn_block_resid_gain": 1,
  "attn_block_skip_gain": 1,
  "attn_mat_resid_gain": 1,
  "attn_mat_skip_gain": 0,
  "attn_pdrop": 0,
  "bos_token_id": 0,
  "centre_attn": false,
  "centre_attn_gain": 1.0,
  "embd_pdrop": 0,
  "eos_token_id": 0,
  "first_layer_value_resid_gain": null,
  "initializer_range": 0.02,
  "key_init_std": null,
  "last_layer_proj_resid_gain": null,
  "layer_norm_epsilon": 1e-05,
  "lrelu_neg_slope": 0,
  "mlp_block_resid_gain": 1,
  "mlp_block_skip_gain": 1,
  "mlp_proj_init_std": false,
  "model_type": "gpt2",
  "n_ctx": 128,
  "n_embd": 117,
  "n_head": 9,
  "n_inner": 468,
  "n_layer": 12,
  "n_positions": 1024,
  "norm_position": "pre",
  "norm_type": "rmsnorm",
  "output_attentions": "false",
  "parallel_layers": false,
  "proj_init_type": "normal",
  "proj_resid_gain": 1.0,
  "proj_skip_gain": null,
  "query_init_std": null,
  "reorder_and_upcast_attn": false,
  "resid_pdrop": 0,
  "scale_attn_by_inverse_layer_idx": false,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 50
    }
  },
  "tie_valproj_init": null,
  "torch_dtype": "float32",
  "trainable_attn_block_gains": false,
  "trainable_attn_mat_gains": false,
  "trainable_mlp_block_gains": false,
  "trainable_proj_gains": false,
  "trainable_value_gains": false,
  "transformers_version": "4.38.1",
  "use_cache": true,
  "val_init_type": "normal",
  "val_proj_init_std": null,
  "value_resid_gain": 1,
  "value_skip_gain": 0,
  "vocab_size": 50000
}
```
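The config describes a GPT-2-style model with several non-standard settings: leaky-ReLU activation, RMSNorm, explicit residual/skip gains, and a 117-dimensional embedding split across 9 heads (13 dims per head). To inspect these values programmatically, a minimal sketch; note that `transformers` keeps keys it does not recognize (such as `attn_mat_skip_gain`) as plain attributes on the loaded config object:

```python
from transformers import AutoConfig

# Load the published config; model_type "gpt2" maps it to GPT2Config,
# and unrecognized keys from config.json become plain attributes.
config = AutoConfig.from_pretrained("CognitiveScience/CogMod3")

print(config.n_embd, config.n_head, config.n_layer)  # 117 9 12
print(config.activation_function)                    # leaky_relu
print(config.norm_type, config.norm_position)        # rmsnorm pre
```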