🛡️ EvoNet-8B-Reasoning
EvoNet-8B-Reasoning is a specialized Large Language Model designed for the EvoNet Security Audit System. Built on top of the powerful Llama-3.1-8B-Instruct architecture, this model has been significantly enhanced with step-by-step logical reasoning capabilities via the LogicReward adapter.
This model acts as an elite Cybersecurity Pentester & System Architect, capable of analyzing complex server logs, identifying vulnerabilities (like SQLi, XSS, RCE), and providing detailed, thought-out mitigation strategies.
🚀 Key Features
- Step-by-Step Reasoning: The model analyzes problems methodically before outputting a final answer, helping reduce hallucinations in technical analysis.
- Cybersecurity Focus: Optimized for log analysis, vulnerability scanning, and secure architecture design.
- Bilingual Support: Understands and generates responses in both English and Vietnamese natively.
- Zero-cost Inference Ready: Lightweight enough (8B parameters) to be deployed on affordable hardware (like 16GB VRAM GPUs with 4-bit quantization).
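The "16GB VRAM with 4-bit quantization" point can be checked with a back-of-envelope estimate. The sketch below is illustrative only (real usage adds KV cache, activations, and quantization overhead; the helper name is ours, not part of any library):

```python
# Rough VRAM needed just to hold the weights of a model, in decimal GB.
# Illustrative arithmetic only -- KV cache and activations add several GB on top.
def weight_vram_gb(params_billion: float, bits_per_param: float) -> float:
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

print(weight_vram_gb(8, 16))  # FP16: 16.0 GB of weights -- tight even before overhead
print(weight_vram_gb(8, 4))   # NF4:   4.0 GB of weights -- comfortable on a 16GB GPU
```

This is why the FP16 checkpoint is impractical on a single 16GB card, while the NF4 (4-bit) path leaves headroom for the KV cache and activations.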
🛠️ Model Details
- Architecture: Llama 3.1
- Parameters: 8 Billion
- Base Model: NousResearch/Meta-Llama-3.1-8B-Instruct
- Quantization Support: NF4 (4-bit) / FP16
- Developed by: Phong Huỳnh (EvoNet)
💻 How to Use (Python / HuggingFace Transformers)
You can run this model on a free T4 GPU (e.g., on Kaggle or Google Colab) using 4-bit quantization.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "EvoNet/EvoNet-8b-Reasoning"  # Update with your exact repo name

# 4-bit quantization config to save VRAM
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map={"": 0},
)

messages = [
    # System prompt (Vietnamese): "You are EvoNet Pentester. Analyze the problem
    # clearly, step by step, before giving a conclusion."
    {"role": "system", "content": "Bạn là EvoNet Pentester. Hãy phân tích vấn đề từng bước rõ ràng trước khi đưa ra kết luận."},
    # User prompt (Vietnamese): "Analyze the following payload: `...`.
    # What vulnerability is this and how do we fix it?"
    {"role": "user", "content": "Phân tích payload sau: `admin' AND (SELECT 1 FROM (SELECT SLEEP(5))A) AND '1'='1`. Đây là lỗi gì và cách khắc phục?"},
]

prompt_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
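The payload in the example above is a classic time-based blind SQL injection probe (the injected `SLEEP(5)` delays the response when the input reaches the database unescaped). As a complementary illustration of the log-analysis use case, a minimal pattern-based pre-filter might look like the sketch below. This is our own toy heuristic, not part of the model or the EvoNet system, and a real scanner would also normalize URL encoding first:

```python
import re

# Toy heuristic: flag log lines containing common time-based blind SQLi
# markers such as SLEEP(), BENCHMARK(), or WAITFOR DELAY.
SQLI_TIME_PATTERN = re.compile(
    r"(?i)(sleep\s*\(\s*\d+\s*\)|benchmark\s*\(|waitfor\s+delay)"
)

def flag_suspicious(log_lines):
    """Return the subset of log lines matching a time-based SQLi marker."""
    return [line for line in log_lines if SQLI_TIME_PATTERN.search(line)]

logs = [
    "GET /login?user=admin' AND (SELECT 1 FROM (SELECT SLEEP(5))A) AND '1'='1",
    "GET /home?user=alice",
]
print(flag_suspicious(logs))  # only the first line is flagged
```

A filter like this can cheaply shortlist suspicious lines before handing them to the model for step-by-step analysis and mitigation advice.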
⚠️ Disclaimer
This model is developed for educational and defensive purposes only as part of the EvoNet SaaS Audit System. Do not use this model to conduct unauthorized attacks on systems you do not own.