Instructions for using SenseLLM/ReflectionCoder-CL-7B with libraries and local apps. Follow the sections below to get started.
### Transformers

How to use SenseLLM/ReflectionCoder-CL-7B with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="SenseLLM/ReflectionCoder-CL-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SenseLLM/ReflectionCoder-CL-7B")
model = AutoModelForCausalLM.from_pretrained("SenseLLM/ReflectionCoder-CL-7B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
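For code generation you will usually want more than the 40 tokens shown above, and possibly sampling. Continuing from the direct-load example, here is a minimal sketch of common generation settings; the specific values are illustrative assumptions, not tuned recommendations:

```python
# Illustrative generation settings for code tasks; the values are assumptions.
outputs = model.generate(
    **inputs,
    max_new_tokens=512,  # leave room for a complete function
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.2,     # low temperature keeps code output focused
    top_p=0.95,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```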
### vLLM

How to use SenseLLM/ReflectionCoder-CL-7B with vLLM:

Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "SenseLLM/ReflectionCoder-CL-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SenseLLM/ReflectionCoder-CL-7B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
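Because the server exposes an OpenAI-compatible API, any OpenAI client can call it. A minimal sketch using the official `openai` Python package; the base URL assumes the default local server started above, and the API key is a placeholder since local vLLM does not check it by default:

```python
# Minimal sketch: call the local vLLM server via its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="SenseLLM/ReflectionCoder-CL-7B",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(response.choices[0].message.content)
```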
### SGLang

How to use SenseLLM/ReflectionCoder-CL-7B with SGLang:

Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "SenseLLM/ReflectionCoder-CL-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SenseLLM/ReflectionCoder-CL-7B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Or use the Docker image:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "SenseLLM/ReflectionCoder-CL-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SenseLLM/ReflectionCoder-CL-7B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
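SGLang serves the same OpenAI-compatible API, so the Python client sketch from the vLLM section works unchanged apart from the port (assuming the server above on port 30000):

```python
# Same OpenAI-compatible client as in the vLLM sketch, pointed at SGLang's port.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
reply = client.chat.completions.create(
    model="SenseLLM/ReflectionCoder-CL-7B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(reply.choices[0].message.content)
```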
### Docker Model Runner

How to use SenseLLM/ReflectionCoder-CL-7B with Docker Model Runner:

```shell
docker model run hf.co/SenseLLM/ReflectionCoder-CL-7B
```
---
license: apache-2.0
datasets:
- SenseLLM/ReflectionSeq-GPT
- SenseLLM/ReflectionSeq-DS
language:
- en
---
## ReflectionCoder: Learning from Reflection Sequence for Enhanced One-off Code Generation

<p align="center">
<a href="https://arxiv.org/abs/2405.17057">📄 Paper</a> •
<a href="https://github.com/SenseLLM/ReflectionCoder">🏠 Repo</a> •
<a href="https://huggingface.co/SenseLLM/ReflectionCoder-DS-33B">🤗 Models</a> •
<a href="https://huggingface.co/datasets/SenseLLM/ReflectionSeq-GPT">📚 Datasets</a>
</p>
## Introduction

ReflectionCoder is a novel approach that effectively leverages reflection sequences, constructed by integrating compiler feedback, to improve one-off code generation performance. Please refer to our paper and repo for more details!

![method](method.png)
<hr>
## Models

| Model | Checkpoint | Size | HumanEval (+) | MBPP (+) | License |
|:------|:-----------|:-----|:--------------|:---------|:--------|
| ReflectionCoder-CL-7B | 🤗 [HF Link](https://huggingface.co/SenseLLM/ReflectionCoder-CL-7B) | 7B | 75.0 (68.9) | 72.2 (61.4) | [Llama2](https://ai.meta.com/llama/license/) |
| ReflectionCoder-CL-34B | 🤗 [HF Link](https://huggingface.co/SenseLLM/ReflectionCoder-CL-34B) | 34B | 70.7 (66.5) | 68.4 (56.6) | [Llama2](https://ai.meta.com/llama/license/) |
| ReflectionCoder-DS-6.7B | 🤗 [HF Link](https://huggingface.co/SenseLLM/ReflectionCoder-DS-6.7B) | 6.7B | 80.5 (74.4) | 81.5 (69.6) | [DeepSeek](https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL) |
| ReflectionCoder-DS-33B | 🤗 [HF Link](https://huggingface.co/SenseLLM/ReflectionCoder-DS-33B) | 33B | 82.9 (76.8) | 84.1 (72.0) | [DeepSeek](https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL) |
## Datasets

| Dataset | Link | License |
|:--------|:-----|:--------|
| ReflectionSeq-GPT | 🤗 [HF Link](https://huggingface.co/datasets/SenseLLM/ReflectionSeq-GPT) | [License](LICENSE) |
| ReflectionSeq-DS | 🤗 [HF Link](https://huggingface.co/datasets/SenseLLM/ReflectionSeq-DS) | [License](LICENSE) |
## How to Use

#### Chat Format

Following the chat templates of most models, we use three special tokens to wrap user and assistant messages, *i.e.*, ``<|user|>``, ``<|assistant|>``, and ``<|endofmessage|>``. Furthermore, we use two special tokens to wrap the content of different blocks, *i.e.*, ``<|text|>`` and ``<|endofblock|>``. You can use the following code to prompt ReflectionCoder.
```python
import torch
from transformers import pipeline

chat = [
    {"role": "user", "content": "<Your code instruction here>"}
]

# bfloat16 halves memory use; device_map="auto" places the model on available GPUs
generator = pipeline(
    model="SenseLLM/ReflectionCoder-CL-7B",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# max_new_tokens bounds the generated continuation (max_length would also count the prompt)
result = generator(chat, max_new_tokens=128, num_return_sequences=1)
print(result)
```
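For environments where the built-in chat template is unavailable, a prompt can also be assembled by hand from the special tokens listed above. The exact block layout below is an assumption inferred from the token descriptions, so prefer the pipeline or `tokenizer.apply_chat_template` when possible:

```python
# Illustrative sketch only: hand-building a prompt from the special tokens
# described above. The precise token ordering is an assumption; the built-in
# chat template is the authoritative format.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SenseLLM/ReflectionCoder-CL-7B")
model = AutoModelForCausalLM.from_pretrained("SenseLLM/ReflectionCoder-CL-7B", device_map="auto")

prompt = (
    "<|user|>"
    "<|text|>Write a Python function that checks whether a number is prime.<|endofblock|>"
    "<|endofmessage|>"
    "<|assistant|>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```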
Please refer to our [GitHub Repo](https://github.com/SenseLLM/ReflectionCoder) for more technical details.
## Citation

If you find this repo useful for your research, please kindly cite our paper:

```
@misc{ren2024reflectioncoder,
  title={ReflectionCoder: Learning from Reflection Sequence for Enhanced One-off Code Generation},
  author={Houxing Ren and Mingjie Zhan and Zhongyuan Wu and Aojun Zhou and Junting Pan and Hongsheng Li},
  year={2024},
  eprint={2405.17057},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
## Acknowledgments

We thank the following amazing projects that truly inspired us:

- [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)
- [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder)
- [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder)
- [Evol-CodeAlpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1)
- [Magicoder](https://github.com/ise-uiuc/magicoder/tree/main)
- [EvalPlus](https://github.com/evalplus/evalplus)
- [OpenCodeInterpreter](https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/tree/main)