Zixi "Oz" Li
OzTianlu
AI & ML interests
My research focuses on deep reasoning with small language models, Transformer architecture innovation, and knowledge distillation for efficient alignment and transfer.
Recent Activity
liked a Space about 13 hours ago: ggml-org/gguf-my-repo
updated a collection 4 days ago: Geilim Large Language Models
reacted to their post with 🔥 4 days ago
Geilim-1B-SR-Instruct: Serbian Intelligence for Deep Reasoning 🧠🇷🇸
https://huggingface.co/NoesisLab/Geilim-1B-SR-Instruct
Geilim-1B-SR-Instruct is a lightweight Large Language Model (LLM) designed to bring advanced reasoning capabilities to low-resource languages. It focuses on Serbian understanding and generation while maintaining robust English reasoning. Built on the LLaMA-3 architecture with a proprietary hybrid reasoning mechanism, it delivers deep logic while keeping outputs concise and natural. 🚀
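For a quick start, here is a minimal sketch of loading the model with the Transformers library. It assumes the repo exposes a standard Hugging Face causal-LM interface; the prompt and generation settings are illustrative, not taken from the model card.

```python
# Minimal sketch: load Geilim-1B-SR-Instruct as a standard causal LM.
# Assumes a stock Hugging Face interface; settings here are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NoesisLab/Geilim-1B-SR-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain in two sentences why distillation helps small models reason."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```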
Core Innovations π‘
Implicit Deep Reasoning: Combines standard attention mechanisms with graph-structured reasoning components for rigorous logic and causal inference. 🕸️
ASPP & -flow Hybrid Design: High-efficiency structured propagation + internal probability-space optimization for high-quality reasoning without long-winded intermediate steps. ⚡
Bilingual Adaptation: Primarily focused on Serbian while preserving English logic, making it well suited to multilingual chats and cross-lingual tasks. 🌍
Lightweight & Efficient: At ~1.3B parameters, it runs smoothly on consumer-grade GPUs, ideal for edge devices and research; see the loading sketch below. 💻
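As a rough sanity check on the consumer-GPU claim: ~1.3B parameters at 2 bytes each is about 2.6 GB of weights in bfloat16, and roughly 0.7 GB in 4-bit. Below is a hedged sketch of a 4-bit load via bitsandbytes; it assumes the package is installed and is not taken from the model card.

```python
# Hypothetical low-memory load; assumes the `bitsandbytes` package is installed.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # ~0.5 byte/param for the weights
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for quality
)
model = AutoModelForCausalLM.from_pretrained(
    "NoesisLab/Geilim-1B-SR-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
```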
Use Cases π οΈ
Serbian Chatbots: Intelligent assistants with local linguistic nuance; see the chat sketch after this list. 🗣️
Educational Tools: Multi-turn interactive tasks and learning support. 📚
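Here is a hedged sketch of a single Serbian chat turn using the Transformers text-generation pipeline. It assumes the tokenizer ships a chat template (typical for LLaMA-3-based instruct models, but unverified here), and the Serbian prompt is an illustrative example.

```python
# Hypothetical Serbian chat turn via the Transformers pipeline API.
# Assumes the tokenizer ships a chat template (typical for instruct models).
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="NoesisLab/Geilim-1B-SR-Instruct",
    device_map="auto",
)
messages = [
    # "Explain photosynthesis in three sentences."
    {"role": "user", "content": "Objasni fotosintezu u tri rečenice."},
]
result = chat(messages, max_new_tokens=200)
# The pipeline returns the whole conversation; the last message is the reply.
print(result[0]["generated_text"][-1]["content"])
```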
Key Advantages ✨
Clean Output: Avoids messy "thinking" tags; reasoning happens internally, delivering clear and direct results. ✅
Open Access: Licensed under Apache-2.0, making it easy to integrate into research and engineering work. 📖
AI Democratization: Empowering low-resource language ecosystems with cutting-edge intelligence. 🤖