Paper Circle: An Open-source Multi-agent Research Discovery and Analysis Framework
Komal Kumar¹, Aman Chadha², Salman Khan¹, Fahad Shahbaz Khan¹, Hisham Cholakkal¹
¹ Mohamed bin Zayed University of Artificial Intelligence  ² AWS Generative AI Innovation Center, Amazon Web Services
Features
- Paper Discovery – Multi-agent AI search across arXiv, Scopus, and IEEE with hybrid BM25 + TF-IDF ranking and three discovery modes (Stable, Discovery, Balanced)
- Paper Mind Graph – LLM-powered extraction of concepts, methods, and experiments into structured knowledge graphs with interactive Q&A
- Paper Review Generation – Conference-format reviews (ICLR/NeurIPS/ICML style) via multi-agent analysis with lineage extraction
- Paper Lineage – Relationship mapping (extends/applies/evaluates/contradicts/survey/prerequisite) with interactive graph visualization
- Reading Circles – Community-based reading groups with role-based access, session scheduling, RSVP, and discussion threads
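The hybrid BM25 + TF-IDF ranking used in Paper Discovery can be sketched roughly as below. This is a minimal illustration, not the repository's actual implementation (which lives under `backend/agents/discovery/`); the tokenization, min-max normalization, and the 50/50 blend weight `alpha` are all assumptions.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 scores for a tokenized `query` against tokenized `docs`."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()
    for d in docs:
        df.update(set(d))  # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def tfidf_cosine(query, docs):
    """TF-IDF cosine similarity of `query` against each doc."""
    N = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))
    idf = {t: math.log(N / df[t]) + 1 for t in df}
    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}
    qv = vec(query)
    out = []
    for d in docs:
        dv = vec(d)
        dot = sum(qv[t] * dv.get(t, 0.0) for t in qv)
        qn = math.sqrt(sum(v * v for v in qv.values()))
        dn = math.sqrt(sum(v * v for v in dv.values()))
        out.append(dot / (qn * dn) if qn and dn else 0.0)
    return out

def hybrid_rank(query, docs, alpha=0.5):
    """Blend min-max-normalized BM25 and TF-IDF scores, best first."""
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    bm = norm(bm25_scores(query, docs))
    tf = norm(tfidf_cosine(query, docs))
    combined = [alpha * b + (1 - alpha) * t for b, t in zip(bm, tf)]
    return sorted(range(len(docs)), key=lambda i: combined[i], reverse=True)
```

Blending a probabilistic ranker (BM25) with a vector-space one (TF-IDF cosine) tends to be more robust than either alone, since they reward term frequency and document length differently.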
Hugging Face Resources
| Resource | Type | Link |
|---|---|---|
| Papers Database | Dataset | ItsMaxNorm/pc-database |
| Papers API | Space | ItsMaxNorm/papercircle-papers-api |
| Benchmark Leaderboard | Space | ItsMaxNorm/pc-bench |
| Benchmark Results | Dataset | ItsMaxNorm/pc-benchmark |
| Research Sessions | Dataset | ItsMaxNorm/pc-research |
Getting Started
Prerequisites
- Node.js >= 18 and Python >= 3.10
- A Supabase project
- An LLM provider: Ollama (local), OpenAI, or Anthropic
Install and Run
```bash
git clone https://github.com/MAXNORM8650/papercircle.git
cd papercircle

# Install
npm install
pip install -r backend/requirements-prod.txt

# Configure
cp .env.example .env  # Edit with your Supabase & LLM credentials

# Run
npm run dev                                   # Frontend (localhost:5173)
python backend/apis/fast_discovery_api.py     # Discovery API (localhost:8000)
python backend/apis/paper_review_server.py    # Review API (localhost:8005)
python backend/apis/paper_analysis_api.py     # Analysis API (localhost:8006)
```
See docs/QUICK_START.md for detailed setup and docs/DEPLOYMENT_GUIDE.md for production deployment.
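Once the services are up, the Discovery API can be queried over HTTP. The `/search` path and the `query`/`mode`/`limit` parameters below are hypothetical, shown only to illustrate the shape of a request; consult the interactive FastAPI docs of the running service (e.g. `http://localhost:8000/docs`) for the routes it actually exposes.

```python
from urllib.parse import urlencode

def build_search_request(query: str, mode: str = "balanced", limit: int = 10) -> str:
    """Build a discovery-search URL.

    NOTE: the /search path and parameter names here are assumptions for
    illustration -- check http://localhost:8000/docs for the real routes.
    """
    params = urlencode({"query": query, "mode": mode, "limit": limit})
    return f"http://localhost:8000/search?{params}"

url = build_search_request("multi-agent paper discovery", mode="stable")
# With the API running, fetch and decode the JSON response:
#   import json, urllib.request
#   with urllib.request.urlopen(url) as resp:
#       papers = json.load(resp)
print(url)
```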
Project Structure
```text
papercircle/
├── src/                              # Frontend (React 18 + TypeScript)
│   ├── components/
│   │   ├── Papers/                   # Paper discovery, detail, analysis views
│   │   ├── Lineage/                  # Paper relationship graph & analysis hub
│   │   ├── Sessions/                 # Session scheduling, RSVP, attendance
│   │   ├── Communities/              # Reading circle management
│   │   ├── Dashboard/                # User dashboard
│   │   ├── Auth/                     # Authentication modals
│   │   ├── Layout/                   # Header, navigation
│   │   ├── Admin/                    # Admin panel
│   │   └── Settings/                 # LLM & user settings
│   ├── contexts/                     # AuthContext, CommunityContext, LineageAnalysisContext
│   ├── lib/                          # Supabase client, API helpers, arXiv client
│   └── hooks/                        # Custom React hooks
│
├── backend/
│   ├── agents/
│   │   ├── paper_review_agents/      # Multi-agent review generation & benchmarking
│   │   │   ├── orchestrator.py       # Agent orchestration pipeline
│   │   │   ├── specialized_agents.py # Critic, Literature, Reproducibility agents
│   │   │   ├── benchmark_framework.py    # Review benchmark framework
│   │   │   ├── benchmark_paper_review.py # Benchmark CLI
│   │   │   ├── evaluation_metrics.py # MSE, MAE, correlation, accuracy metrics
│   │   │   └── benchmark_results/    # Cached benchmark outputs
│   │   ├── paper_mind_graph/         # Knowledge graph extraction from PDFs
│   │   │   ├── graph_builder.py      # LLM-based concept/method extraction
│   │   │   ├── qa_system.py          # Interactive Q&A over papers
│   │   │   ├── ingestion.py          # PDF parsing & chunking
│   │   │   └── export.py             # JSON/Markdown/Mermaid/HTML export
│   │   ├── discovery/                # Paper discovery agents & ranking
│   │   └── agents/                   # Core query & research agents
│   ├── apis/
│   │   ├── fast_discovery_api.py     # Discovery API (port 8000)
│   │   ├── paper_review_server.py    # Review API (port 8005)
│   │   ├── paper_analysis_api.py     # Analysis API (port 8006)
│   │   ├── community_papers_api.py   # Community papers API
│   │   ├── research_pipeline_api.py  # Research pipeline API
│   │   └── unified/                  # Unified Docker API (app.py + routers/)
│   ├── core/                         # paperfinder.py, discovery_papers.py
│   ├── services/                     # HuggingFace papers client
│   └── utils/                        # Storage utilities
│
├── supabase/
│   ├── migrations/                   # 55 SQL migrations (schema, RLS, seeds)
│   └── functions/                    # Edge functions (arxiv-search)
│
├── api/                              # Vercel serverless functions
│   ├── arxiv.js                      # arXiv CORS proxy
│   ├── community-papers.js           # Community papers endpoint
│   └── sync-status.js                # Sync status endpoint
│
├── scripts/                          # Utility scripts
│   ├── javascript/                   # arxiv-proxy, search engine, test scripts
│   ├── shell/                        # Start scripts for each API service
│   └── *.py                          # Dataset builder, sync, DB fixes
│
├── docs/                             # Documentation
│   ├── BENCHMARKS.md                 # Benchmark guide (review + retrieval)
│   ├── QUICK_START.md                # Quick start guide
│   ├── DEPLOYMENT_GUIDE.md           # Production deployment
│   ├── SECURITY.md                   # Security guidelines
│   ├── MIGRATION_COMPLETE.md         # Serverless migration summary
│   └── PAPER_REVIEW_AGENTS_IMPLEMENTATION.md # Review system implementation
│
├── examples/
│   ├── pc-data/                      # Benchmark datasets
│   ├── docs/                         # Architecture & integration guides
│   │   ├── ARCHITECTURE_DIAGRAMS.md  # System diagrams
│   │   ├── MULTI_AGENT_PIPELINE_ARCHITECTURE.md
│   │   ├── ORCHESTRATOR_ARCHITECTURE.md
│   │   ├── PAPER_MIND_GRAPH_ARCHITECTURE.md
│   │   ├── AGENT_OPTIMIZATION_GUIDE.md
│   │   └── RERANKER_INTEGRATION_SUMMARY.md
│   └── setup/                        # Module setup & integration guides
│
├── hf_spaces/                        # HuggingFace Spaces (Papers API app)
├── assets/                           # Architecture & results figures
└── public/                           # Logo and static assets
```
Benchmarks
There are two evaluation suites: Review Quality (AI-generated reviews compared against human reviewers) and Retrieval Quality (paper search accuracy).
| Benchmark | Metrics | Conferences | Details |
|---|---|---|---|
| Paper Review | MSE, MAE, Pearson r, Spearman ρ, Accuracy ±0.5/1.0/1.5 | ICLR, NeurIPS, ICML | docs/BENCHMARKS.md |
| Retrieval | Recall@k, MRR, Success Rate | 30+ conferences | docs/BENCHMARKS.md |
```bash
# Review benchmark
python backend/agents/paper_review_agents/benchmark_paper_review.py \
  --data iclr2024.json --conference iclr --limit 100

# Retrieval benchmark
python benchmark_multiagent.py --queries queries.json --baseline bm25+reranker
```
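The metrics in the table above are standard and can be written as small reference implementations. The sketch below is illustrative only, not the code in `evaluation_metrics.py`; in particular, the Spearman helper uses naive ranks and does not handle ties the way a library implementation (e.g. `scipy.stats.spearmanr`) would.

```python
import math

def mse(pred, gold):
    """Mean squared error between predicted and human review scores."""
    return sum((p - g) ** 2 for p, g in zip(pred, gold)) / len(gold)

def mae(pred, gold):
    """Mean absolute error."""
    return sum(abs(p - g) for p, g in zip(pred, gold)) / len(gold)

def pearson(pred, gold):
    """Pearson correlation coefficient r."""
    n = len(gold)
    mp, mg = sum(pred) / n, sum(gold) / n
    cov = sum((p - mp) * (g - mg) for p, g in zip(pred, gold))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    sg = math.sqrt(sum((g - mg) ** 2 for g in gold))
    return cov / (sp * sg)

def spearman(pred, gold):
    """Spearman rho: Pearson r over ranks (ties not handled here)."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(pred), ranks(gold))

def accuracy_within(pred, gold, tol):
    """Fraction of predictions within +/-tol of the human score."""
    return sum(abs(p - g) <= tol for p, g in zip(pred, gold)) / len(gold)

def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of relevant papers found in the top-k results."""
    return len(set(ranked_ids[:k]) & set(relevant_ids)) / len(relevant_ids)

def mrr(ranked_lists, relevant_sets):
    """Mean reciprocal rank of the first relevant hit per query."""
    total = 0.0
    for ranked, relevant in zip(ranked_lists, relevant_sets):
        for i, doc in enumerate(ranked, start=1):
            if doc in relevant:
                total += 1.0 / i
                break
    return total / len(ranked_lists)
```

For example, `accuracy_within(pred, gold, 0.5)` corresponds to the "Accuracy ±0.5" column: the share of AI review scores landing within half a point of the human score.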
Model results: ItsMaxNorm/pc-benchmark • Interactive leaderboard: ItsMaxNorm/pc-bench
Citation
If you find PaperCircle useful in your research, please cite our paper:
```bibtex
@misc{kumar2026papercircleopensourcemultiagent,
  title={Paper Circle: An Open-source Multi-agent Research Discovery and Analysis Framework},
  author={Komal Kumar and Aman Chadha and Salman Khan and Fahad Shahbaz Khan and Hisham Cholakkal},
  year={2026},
  eprint={2604.06170},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2604.06170},
}
```
License
MIT License β see LICENSE
Acknowledgments
arXiv • Supabase • smolagents • LiteLLM • Ollama • Hugging Face