
# Generate README Eval

The generate-readme-eval is a dataset (train split) and benchmark (test split) for evaluating how well LLMs can summarize an entire GitHub repo in the form of a README.md file. The dataset is curated from the top 400 real Python repositories on GitHub with at least 1000 stars and 100 forks. The script used to generate the dataset can be found here. We restrict ourselves to GH repositories that are less than 100k tokens in size, so that the entire repo fits in the context window of an LLM in a single call. The train split of the dataset can be used to fine-tune your own model; the results reported here are for the test split.
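As a minimal sketch, the splits can be loaded with the `datasets` library; the Hugging Face dataset id below is an assumption, so substitute the actual id of this dataset:

```python
# Minimal sketch, assuming a hypothetical dataset id; replace with the actual repo id on the Hub.
from datasets import load_dataset

ds = load_dataset("your-org/generate-readme-eval")  # hypothetical id

train = ds["train"]  # use for fine-tuning your own model
test = ds["test"]    # the benchmark split used for the results below

# Each row has repo_name, repo_commit, repo_content and repo_readme.
example = test[0]
print(example["repo_name"], example["repo_commit"])
print(len(example["repo_content"]), "characters of repository content")
```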

To evaluate an LLM on the benchmark we can use the evaluation script given here. During evaluation we prompt the LLM to generate a structured README.md file from the entire contents of the repository (`repo_content`). We then score the generated README by comparing it with the repository's actual README file across several different metrics.
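A minimal sketch of this generation step, assuming an OpenAI-compatible chat client; the exact prompt wording and model settings used by the evaluation script may differ:

```python
# Sketch only: how a README could be generated from repo_content.
# Assumes the `openai` Python client; the benchmark's actual prompt may differ.
from openai import OpenAI

client = OpenAI()

def generate_readme(repo_content: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to write a structured README.md for the given repository."""
    messages = [
        {"role": "system",
         "content": ("You are given the full contents of a GitHub repository. "
                     "Write a well-structured README.md file for it.")},
        {"role": "user", "content": repo_content},
    ]
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content
```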

In addition to traditional NLP metrics like BLEU, ROUGE scores, and cosine similarity, we also compute custom metrics that capture structural similarity, code consistency (from code to README), readability (FRES), and information retrieval. The final score is computed by taking a weighted average of the metrics. The weights used for the final score are shown below.

```python
weights = {
    'bleu': 0.1,
    'rouge-1': 0.033,
    'rouge-2': 0.033,
    'rouge-l': 0.034,
    'cosine_similarity': 0.1,
    'structural_similarity': 0.1,
    'information_retrieval': 0.2,
    'code_consistency': 0.2,
    'readability': 0.2
}
```
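For illustration, the final score is a weighted sum of the per-metric scores (each on a 0-100 scale); plugging in the gemini-1.5-flash-exp-0827 row from the leaderboard below reproduces its reported score of 33.43. The snippet assumes the `weights` dict defined above is in scope, and the actual script may scale or normalize differently.

```python
# Illustrative only: combine per-metric scores (0-100) into the final score
# using the `weights` dict defined above.
def final_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    return sum(weights[name] * metrics[name] for name in weights)

# Metric values taken from the gemini-1.5-flash-exp-0827 leaderboard row:
metrics = {
    'bleu': 1.66, 'rouge-1': 16.00, 'rouge-2': 3.88, 'rouge-l': 15.33,
    'cosine_similarity': 41.87, 'structural_similarity': 23.59,
    'information_retrieval': 76.50, 'code_consistency': 7.86, 'readability': 43.34,
}
print(round(final_score(metrics, weights), 2))  # -> 33.43
```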

At the end of evaluation the script prints the metrics and stores the entire run in a log file. If you want to add your model to the leaderboard, please create a PR with the log file of the run and details about the model.

If we use the existing README.md files in the repositories as the golden output, we get a score of 56.79 on this benchmark. You can validate this by running the evaluation script with the `--oracle` flag. The oracle run log is available here.

## Leaderboard

The current SOTA model on this benchmark in the zero-shot setting is Gemini-1.5-Flash-Exp-0827. It scores the highest across a number of different metrics.

| Model | Score | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-L | Cosine-Sim | Structural-Sim | Info-Ret | Code-Consistency | Readability | Logs |
|---|---|---|---|---|---|---|---|---|---|---|---|
| llama3.1-8b-instruct | 24.43 | 0.72 | 11.96 | 1.69 | 11.51 | 30.29 | 24.16 | 44.50 | 7.96 | 37.90 | link |
| mistral-nemo-instruct-2407 | 25.62 | 1.09 | 11.24 | 1.70 | 10.94 | 26.62 | 24.26 | 52.00 | 8.80 | 37.30 | link |
| gpt-4o-mini-2024-07-18 | 32.16 | 1.64 | 15.46 | 3.85 | 14.84 | 40.57 | 23.81 | 72.50 | 4.77 | 44.81 | link |
| gpt-4o-2024-08-06 | 33.13 | 1.68 | 15.36 | 3.59 | 14.81 | 40.00 | 23.91 | 74.50 | 8.36 | 44.33 | link |
| o1-mini-2024-09-12 | 33.05 | 3.13 | 15.39 | 3.51 | 14.81 | 42.49 | 27.55 | 80.00 | 7.78 | 35.27 | link |
| gemini-1.5-flash-8b-exp-0827 | 32.12 | 1.36 | 14.66 | 3.31 | 14.14 | 38.31 | 23.00 | 70.00 | 7.43 | 46.47 | link |
| gemini-1.5-flash-exp-0827 | 33.43 | 1.66 | 16.00 | 3.88 | 15.33 | 41.87 | 23.59 | 76.50 | 7.86 | 43.34 | link |
| gemini-1.5-pro-exp-0827 | 32.51 | 2.55 | 15.27 | 4.97 | 14.86 | 41.09 | 23.94 | 72.82 | 6.73 | 43.34 | link |
| oracle-score | 56.79 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 98.24 | 59.00 | 11.01 | 14.84 | link |

## Few-Shot

This benchmark is interesting because it is not easy to few-shot your way to better performance. There are a couple of reasons for this:

  1. The average context length required for each item can be up to 100k tokens, which puts it out of reach of most models except Google Gemini, which supports a context length of up to 2 million tokens.

  2. There is an accuracy trade-off inherent in the benchmark: adding more examples makes some of the metrics, like information_retrieval and readability, worse, because at larger contexts models do not have perfect recall and may miss important information.

Our experiments with few-shot prompts confirm this: the best overall score is achieved at 1-shot, and adding more examples does not help beyond that.

| Model | Score | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-L | Cosine-Sim | Structural-Sim | Info-Ret | Code-Consistency | Readability | Logs |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0-shot-gemini-1.5-flash-exp-0827 | 33.43 | 1.66 | 16.00 | 3.88 | 15.33 | 41.87 | 23.59 | 76.50 | 7.86 | 43.34 | link |
| 1-shot-gemini-1.5-flash-exp-0827 | 35.40 | 21.81 | 34.00 | 24.97 | 33.61 | 61.53 | 37.60 | 61.00 | 12.89 | 27.22 | link |
| 3-shot-gemini-1.5-flash-exp-0827 | 33.10 | 20.02 | 32.70 | 22.66 | 32.21 | 58.98 | 34.54 | 60.50 | 13.09 | 20.52 | link |
| 5-shot-gemini-1.5-flash-exp-0827 | 33.97 | 19.24 | 32.31 | 21.48 | 31.74 | 61.49 | 33.17 | 59.50 | 11.48 | 27.65 | link |
| 7-shot-gemini-1.5-flash-exp-0827 | 33.00 | 15.43 | 28.52 | 17.18 | 28.07 | 56.25 | 33.55 | 63.50 | 12.40 | 24.15 | link |