
Neotoi Coder v1

A Rust/Dioxus 0.7 specialist fine-tuned from Qwen3-Coder-14B using RAFT (Retrieval-Augmented Fine-Tuning). Optimized for production-quality Dioxus 0.7 components with Tailwind v4 and WCAG 2.2 AAA accessibility.

Exam Results

Tier                Score   Required   Status
T1 Fundamentals     9/10    9/10       ✅
T2 RSX Syntax       9/10    8/10       ✅
T3 Signal Hygiene   10/10   8/10       ✅
T4 WCAG/ARIA        9/10    7/10       ✅
T5 use_resource     4/5     4/5        ✅
T6 Hard Reasoning   2/5     2/5        ✅
T7 Primitives+CSS   8/10    6/10       ✅
Overall             51/60   50/60      ✅ PASS

Model Details

  • Base model: Qwen3-Coder-14B
  • Method: RAFT (Retrieval-Augmented Fine-Tuning)
  • Dataset: 3,156 curated Dioxus 0.7 examples
  • Scope: Rust + Dioxus 0.7 + Tailwind v4 + WCAG 2.2 AAA
  • Quantization: Q4_K_M (8.38 GB)
  • Author: Kevin Miller, Jr.

Enabling Thinking Mode

This model supports Qwen3's native thinking tokens. Thinking must be enabled manually; the exact steps depend on your inference backend.

LM Studio

In the chat interface, open the prompt template settings and set the following fields:

Field              Value
Before System      <|im_start|>system
After System       <|im_end|>
Before User        <|im_start|>user
After User         <|im_end|>
Before Assistant   <|im_start|>assistant\n<think>
After Assistant    <|im_end|>

Ollama

Create a Modelfile:

```
FROM neotoi-coder-v1-q4_k_m_final.gguf
PARAMETER temperature 0.2
PARAMETER num_predict 4096
PARAMETER repeat_penalty 1.15
PARAMETER stop "<|im_end|>"
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
<think>
"""
SYSTEM You are Neotoi, an expert Rust and Dioxus 0.7 developer. Always think step-by-step before answering.
```
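
Assuming the Modelfile above is saved as `Modelfile` next to the GGUF file (the tag `neotoi-coder` below is an arbitrary local name, not part of the release), the model can then be registered and run with:

```
# Register the model with Ollama from the Modelfile in the current directory
ollama create neotoi-coder -f Modelfile

# Start a chat session (thinking opens automatically via the template's <think>)
ollama run neotoi-coder "Write a Dioxus 0.7 counter component"
```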

llama.cpp / llama-cli

```
# -e tells llama-cli to process the \n escape sequences in the prompt string
./llama-cli \
  -m neotoi-coder-v1-q4_k_m_final.gguf \
  -ngl 99 \
  --temp 0.2 \
  -e \
  -p "<|im_start|>user\nYour question here<|im_end|>\n<|im_start|>assistant\n<think>"
```

What It Knows

  • Dioxus 0.7 RSX brace syntax — never function-call style
  • use_signal, use_resource with correct three-arm match
  • r#for on label elements only, never inputs
  • WCAG 2.2 AAA: aria_labelledby, aria_describedby, role="alert", role="dialog", live regions
  • dioxus-primitives — no manual ARIA on managed components
  • styles!() macro for CSS modules
  • Tailwind v4 utility classes
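
A minimal sketch of how some of these patterns fit together (assuming Dioxus 0.7; `fetch_user` is a hypothetical async helper, and the element ids are invented for illustration):

```
use dioxus::prelude::*;

#[component]
fn UserName(user_id: i64) -> Element {
    // use_resource drives the async fetch; reading it yields the
    // three-arm match: pending, success, or error.
    let user = use_resource(move || async move { fetch_user(user_id).await });

    rsx! {
        match &*user.read() {
            // Still loading: announce politely via a status role
            None => rsx! { p { role: "status", "Loading…" } },
            // Loaded: r#for goes on the label, never the input
            Some(Ok(name)) => rsx! {
                label { r#for: "username", "Name" }
                input { id: "username", value: "{name}", readonly: true }
            },
            // Failed: role="alert" makes the error a live region
            Some(Err(err)) => rsx! { p { role: "alert", "Failed to load: {err}" } },
        }
    }
}
```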

What It Does Not Know

  • Tier 6 hard reasoning edge cases (use_context panic behavior, optimistic UI race conditions) — known weak spots
  • Playwright/E2E testing (out of scope)
  • Non-Dioxus web frameworks

License

Neotoi Coder Community License v1.0 — see the LICENSE file. Commercial use of model outputs is permitted. Weight redistribution is prohibited. Mental health deployment requires written permission.

Credits

Built with:

  • Unsloth — 2x faster fine-tuning
  • TRL — SFTTrainer
  • Qwen3-Coder-14B — base model
  • MLX — dataset generation on Apple Silicon
  • Claude Code — dataset pipeline and training infrastructure
  • Ansible — server automation and RAFT workflow orchestration
  • repomix — bundling framework source into LLM context
  • Forgejo — self-hosted git, source stored locally
  • Zed — editor used throughout development
  • Dioxus — the framework this model specializes in

Developed on:

  • Apple M3 MacBook Pro — dataset generation, MLX inference, LM Studio
  • Rocky Linux 10.1 — dataset generation, Unsloth fine-tuning, PyTorch, GGUF export
  • CachyOS — additional RAFT pipeline work