Distractor Injection Attacks on Large Reasoning Models: Characterization and Defense • Paper 2510.16259 • Published Oct 17, 2025
The Personalization Trap: How User Memory Alters Emotional Reasoning in LLMs • Paper 2510.09905 • Published Oct 10, 2025
Quantifying Fairness in LLMs Beyond Tokens: A Semantic and Statistical Perspective • Paper 2506.19028 • Published Jun 23, 2025
SATA-BENCH: Select All That Apply Benchmark for Multiple Choice Questions • Paper 2506.00643 • Published May 31, 2025