CodeFuse AI

Organization Card

Hello World! This is CodeFuse!

CodeFuse aims to develop Code Large Language Models (Code LLMs) to support and enhance full-lifecycle, AI-native software development, covering crucial stages such as design requirements, coding, testing, building, deployment, operations, and insight analysis.

Papers

C2LLM Technical Report: A New Frontier in Code Retrieval via Adaptive Cross-Attention Pooling
Paper • 2512.21332

F2LLM Technical Report: Matching SOTA Embedding Performance with 6 Million Open-Source Data

MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning
Paper • 2311.02303

CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model
Paper • 2310.06266

CoBa: Convergence Balancer for Multitask Finetuning of Large Language Models
Paper • 2410.06741

Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM
Paper • 2503.17793

Models (26)
codefuse-ai/C2LLM-7B
Feature Extraction • 8B params

codefuse-ai/C2LLM-0.5B
Feature Extraction • 0.5B params

codefuse-ai/F2LLM-4B
Feature Extraction • 4B params

codefuse-ai/F2LLM-1.7B
Feature Extraction • 2B params

codefuse-ai/F2LLM-0.6B
Feature Extraction • 0.6B params

codefuse-ai/CodeFuse-CGM-72B
73B params

codefuse-ai/Rodimus-Plus-Coder-4B-Chat
5B params

codefuse-ai/Rodimus-Plus-Coder-4B-Base

codefuse-ai/Rodimus-Plus-Coder-1.6B-Chat
2B params

codefuse-ai/Rodimus-Plus-Coder-1.6B-Base
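
The C2LLM and F2LLM checkpoints above are tagged for feature extraction, i.e. they map code or text to dense embeddings for retrieval. The sketch below is a generic template rather than the official recipe: it assumes the checkpoints load through the standard transformers AutoModel/AutoTokenizer interface and uses masked mean pooling over the last hidden state, whereas the intended pooling (for example, C2LLM's adaptive cross-attention pooling) and any query/passage prompt format are specified on each model card.

```python
# Minimal embedding sketch (assumptions: the checkpoint loads via AutoModel/
# AutoTokenizer, and masked mean pooling over the last hidden state is an
# acceptable stand-in for the pooling described on the model card).
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "codefuse-ai/F2LLM-0.6B"  # any Feature Extraction checkpoint listed above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

texts = [
    "def binary_search(arr, target): ...",
    "Return the index of a target value in a sorted list.",
]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state           # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)         # (batch, seq_len, 1)
    embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # masked mean pooling
    embeddings = torch.nn.functional.normalize(embeddings, dim=-1)

# Cosine similarity between the code snippet and the natural-language query.
print(torch.dot(embeddings[0], embeddings[1]).item())
```

Normalizing the embeddings makes the dot product a cosine similarity, which is the usual scoring function for code retrieval.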

Datasets (8)
codefuse-ai/F2LLM
codefuse-ai/CodeFuse_codeedit
codefuse-ai/CodeGraph
codefuse-ai/Evol-instruction-66k
codefuse-ai/CodeExercise-Python-27k
codefuse-ai/GALLa
codefuse-ai/CodeFuse-DevOps-Eval
codefuse-ai/CodeFuseEval
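
All of the datasets above are hosted on the Hugging Face Hub and can be pulled with the standard datasets library. A minimal sketch follows; the default configuration, the train split, and the printed fields are assumptions, so check each dataset card for the actual splits and schema.

```python
# Minimal sketch for loading one of the Hub datasets above (assumption: the
# default configuration exposes a "train" split; see the dataset card for the
# real splits and column names).
from datasets import load_dataset

ds = load_dataset("codefuse-ai/Evol-instruction-66k", split="train")

print(ds)           # number of rows and column names
print(ds.features)  # schema as declared on the Hub
print(ds[0])        # first record
```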