| bibtex_url | bibtext | abstract | authors | title | id | type | arxiv_id |
|---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.acl-long.1.bib | @inproceedings{zhang-etal-2024-quantized,
title = "Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models",
author = "Zhang, Zhengxin and
Zhao, Dan and
Miao, Xupeng and
Oliaro, Gabriele and
Zhang, Zhihao and
Li, Qing and
Jiang, Yong ... | Finetuning large language models (LLMs) has been empirically effective on a variety of downstream tasks. Existing approaches to finetuning an LLM either focus on parameter-efficient finetuning, which only updates a small number of trainable parameters, or attempt to reduce the memory footprint during the training phase... | [
"Zhang, Zhengxin",
"Zhao, Dan",
"Miao, Xupeng",
"Oliaro, Gabriele",
"Zhang, Zhihao",
"Li, Qing",
"Jiang, Yong",
"Jia, Zhihao"
] | Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models | acl-long.1 | Oral | 2402.04902v3 |
https://aclanthology.org/2024.acl-long.2.bib | @inproceedings{zhang-etal-2024-unsupervised,
title = "Unsupervised Multimodal Clustering for Semantics Discovery in Multimodal Utterances",
author = "Zhang, Hanlei and
Xu, Hua and
Long, Fei and
Wang, Xin and
Gao, Kai",
editor = "Ku, Lun-Wei and
Martins, Andre and
Sr... | Discovering the semantics of multimodal utterances is essential for understanding human language and enhancing human-machine interactions. Existing methods manifest limitations in leveraging nonverbal information for discerning complex semantics in unsupervised scenarios. This paper introduces a novel unsupervised mult... | [
"Zhang, Hanlei",
"Xu, Hua",
"Long, Fei",
"Wang, Xin",
"Gao, Kai"
] | Unsupervised Multimodal Clustering for Semantics Discovery in Multimodal Utterances | acl-long.2 | Poster | 2405.12775v1 |
https://aclanthology.org/2024.acl-long.3.bib | @inproceedings{li-etal-2024-mage,
title = "{MAGE}: Machine-generated Text Detection in the Wild",
author = "Li, Yafu and
Li, Qintong and
Cui, Leyang and
Bi, Wei and
Wang, Zhilin and
Wang, Longyue and
Yang, Linyi and
Shi, Shuming and
Zhang, Yue",
editor... | Large language models (LLMs) have achieved human-level text generation, emphasizing the need for effective deepfake text detection to mitigate risks like the spread of fake news and plagiarism. Existing research has been constrained by evaluating detection methods o specific domains or particular language models. In pr... | [
"Li, Yafu",
"Li, Qintong",
"Cui, Leyang",
"Bi, Wei",
"Wang, Zhilin",
"Wang, Longyue",
"Yang, Linyi",
"Shi, Shuming",
"Zhang, Yue"
] | {MAGE}: Machine-generated Text Detection in the Wild | acl-long.3 | Poster | 2210.07903v2 |
https://aclanthology.org/2024.acl-long.4.bib | @inproceedings{li-etal-2024-privlm,
title = "{P}riv{LM}-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models",
author = "Li, Haoran and
Guo, Dadi and
Li, Donghao and
Fan, Wei and
Hu, Qi and
Liu, Xin and
Chan, Chunkit and
Yao, Duanyi and
Ya... | The rapid development of language models (LMs) brings unprecedented accessibility and usage for both models and users. On the one hand, powerful LMs achieve state-of-the-art performance over numerous downstream NLP tasks. On the other hand, more and more attention is paid to unrestricted model accesses that may bring m... | [
"Li, Haoran",
"Guo, Dadi",
"Li, Donghao",
"Fan, Wei",
"Hu, Qi",
"Liu, Xin",
"Chan, Chunkit",
"Yao, Duanyi",
"Yao, Yuan",
"Song, Yangqiu"
] | {P}riv{LM}-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models | acl-long.4 | Oral | 2212.10011v2 |
https://aclanthology.org/2024.acl-long.5.bib | @inproceedings{hu-etal-2024-gentranslate,
title = "{G}en{T}ranslate: Large Language Models are Generative Multilingual Speech and Machine Translators",
author = "Hu, Yuchen and
Chen, Chen and
Yang, Chao-Han and
Li, Ruizhe and
Zhang, Dong and
Chen, Zhehuai and
Chng, EngS... | Recent advances in large language models (LLMs) have propelled the development of multilingual speech and machine translation through reduced representation errors and incorporated external knowledge. However, both translation tasks typically utilize beam search decoding and top-1 hypothesis selection for inferenc... | [
"Hu, Yuchen",
"Chen, Chen",
"Yang, Chao-Han",
"Li, Ruizhe",
"Zhang, Dong",
"Chen, Zhehuai",
"Chng, EngSiong"
] | {G}en{T}ranslate: Large Language Models are Generative Multilingual Speech and Machine Translators | acl-long.5 | Oral | 1910.00254v2 |
https://aclanthology.org/2024.acl-long.6.bib | @inproceedings{xu-etal-2024-exploring,
title = "Exploring Chain-of-Thought for Multi-modal Metaphor Detection",
author = "Xu, Yanzhi and
Hua, Yueying and
Li, Shichen and
Wang, Zhongqing",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proce... | Metaphors are commonly found in advertising and internet memes. However, the free form of internet memes often leads to a lack of high-quality textual data. Metaphor detection demands a deep interpretation of both textual and visual elements, requiring extensive common-sense knowledge, which poses a challenge to langua... | [
"Xu, Yanzhi",
"Hua, Yueying",
"Li, Shichen",
"Wang, Zhongqing"
] | Exploring Chain-of-Thought for Multi-modal Metaphor Detection | acl-long.6 | Poster | 1508.04515v1 |
https://aclanthology.org/2024.acl-long.7.bib | @inproceedings{du-etal-2024-bitdistiller,
title = "{B}it{D}istiller: Unleashing the Potential of Sub-4-Bit {LLM}s via Self-Distillation",
author = "Du, DaYou and
Zhang, Yijia and
Cao, Shijie and
Guo, Jiaqi and
Cao, Ting and
Chu, Xiaowen and
Xu, Ningyi",
editor = "Ku... | The upscaling of Large Language Models (LLMs) has yielded impressive advances in natural language processing, yet it also poses significant deployment challenges. Weight quantization has emerged as a widely embraced solution to reduce memory and computational demands. This paper introduces BitDistiller, a framework tha... | [
"Du, DaYou",
"Zhang, Yijia",
"Cao, Shijie",
"Guo, Jiaqi",
"Cao, Ting",
"Chu, Xiaowen",
"Xu, Ningyi"
] | {B}it{D}istiller: Unleashing the Potential of Sub-4-Bit {LLM}s via Self-Distillation | acl-long.7 | Poster | 2402.10631v1 |
https://aclanthology.org/2024.acl-long.8.bib | @inproceedings{chen-etal-2024-unified,
title = "A Unified Temporal Knowledge Graph Reasoning Model Towards Interpolation and Extrapolation",
author = "Chen, Kai and
Wang, Ye and
Li, Yitong and
Li, Aiping and
Yu, Han and
Song, Xin",
editor = "Ku, Lun-Wei and
Martins,... | Temporal knowledge graph (TKG) reasoning has two settings: interpolation reasoning and extrapolation reasoning. Both of them draw plenty of research interest and have great significance. Methods of the former de-emphasize the temporal correlations among facts sequences, while methods of the latter require strict chrono... | [
"Chen, Kai",
"Wang, Ye",
"Li, Yitong",
"Li, Aiping",
"Yu, Han",
"Song, Xin"
] | A Unified Temporal Knowledge Graph Reasoning Model Towards Interpolation and Extrapolation | acl-long.8 | Poster | 2405.18106v1 |
https://aclanthology.org/2024.acl-long.9.bib | @inproceedings{xu-etal-2024-unsupervised,
title = "Unsupervised Information Refinement Training of Large Language Models for Retrieval-Augmented Generation",
author = "Xu, Shicheng and
Pang, Liang and
Yu, Mo and
Meng, Fandong and
Shen, Huawei and
Cheng, Xueqi and
Zhou, ... | Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating additional information from retrieval. However, studies have shown that LLMs still face challenges in effectively using the retrieved information, and may even ignore it or be misled by it. The key reason is that the training of LLMs do... | [
"Xu, Shicheng",
"Pang, Liang",
"Yu, Mo",
"Meng, Fandong",
"Shen, Huawei",
"Cheng, Xueqi",
"Zhou, Jie"
] | Unsupervised Information Refinement Training of Large Language Models for Retrieval-Augmented Generation | acl-long.9 | Poster | 2402.18150v2 |
https://aclanthology.org/2024.acl-long.10.bib | @inproceedings{hu-etal-2024-cscd,
title = "{CSCD}-{NS}: a {C}hinese Spelling Check Dataset for Native Speakers",
author = "Hu, Yong and
Meng, Fandong and
Zhou, Jie",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Mee... | In this paper, we present CSCD-NS, the first Chinese spelling check (CSC) dataset designed for native speakers, containing 40,000 samples from a Chinese social platform. Compared with existing CSC datasets aimed at Chinese learners, CSCD-NS is ten times larger in scale and exhibits a distinct error distribution, with a... | [
"Hu, Yong",
"Meng, Fandong",
"Zhou, Jie"
] | {CSCD}-{NS}: a {C}hinese Spelling Check Dataset for Native Speakers | acl-long.10 | Poster | 2211.08788v3 |
https://aclanthology.org/2024.acl-long.11.bib | @inproceedings{karakkaparambil-james-etal-2024-evaluating,
title = "Evaluating Dynamic Topic Models",
author = "Karakkaparambil James, Charu and
Nagda, Mayank and
Haji Ghassemi, Nooshin and
Kloft, Marius and
Fellenz, Sophie",
editor = "Ku, Lun-Wei and
Martins, Andre and
... | There is a lack of quantitative measures to evaluate the progression of topics through time in dynamic topic models (DTMs). Filling this gap, we propose a novel evaluation measure for DTMs that analyzes the changes in the quality of each topic over time. Additionally, we propose an extension combining topic quality wit... | [
"Karakkaparambil James, Charu",
"Nagda, Mayank",
"Haji Ghassemi, Nooshin",
"Kloft, Marius",
"Fellenz, Sophie"
] | Evaluating Dynamic Topic Models | acl-long.11 | Poster | 2406.18907v1 |
https://aclanthology.org/2024.acl-long.12.bib | @inproceedings{dong-etal-2024-abilities,
title = "How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition",
author = "Dong, Guanting and
Yuan, Hongyi and
Lu, Keming and
Li, Chengpeng and
Xue, Mingfeng and
Liu, Dayiheng and
Wang, We... | Large language models (LLMs) with enormous pre-training tokens and parameters exhibit diverse abilities, including math reasoning, code generation, and instruction following. These abilities are further enhanced by supervised fine-tuning (SFT). While the open-source community has explored ad-hoc SFT for enhancing individ... | [
"Dong, Guanting",
"Yuan, Hongyi",
"Lu, Keming",
"Li, Chengpeng",
"Xue, Mingfeng",
"Liu, Dayiheng",
"Wang, Wei",
"Yuan, Zheng",
"Zhou, Chang",
"Zhou, Jingren"
] | How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition | acl-long.12 | Poster | 2310.05492v4 |
https://aclanthology.org/2024.acl-long.13.bib | @inproceedings{xu-etal-2024-lens,
title = "Through the Lens of Split Vote: Exploring Disagreement, Difficulty and Calibration in Legal Case Outcome Classification",
author = "Xu, Shanshan and
T.y.s.s, Santosh and
Ichim, Oana and
Plank, Barbara and
Grabmair, Matthias",
editor = "K... | In legal decisions, split votes (SV) occur when judges cannot reach a unanimous decision, posing a difficulty for lawyers who must navigate diverse legal arguments and opinions. In high-stakes domains, as human-AI interaction systems become increasingly important, understanding the alignment of perceived difficulty... | [
"Xu, Shanshan",
"T.y.s.s, Santosh",
"Ichim, Oana",
"Plank, Barbara",
"Grabmair, Matthias"
] | Through the Lens of Split Vote: Exploring Disagreement, Difficulty and Calibration in Legal Case Outcome Classification | acl-long.13 | Oral | 2402.07214v3 |
https://aclanthology.org/2024.acl-long.14.bib | @inproceedings{dalal-etal-2024-inference,
title = "Inference to the Best Explanation in Large Language Models",
author = "Dalal, Dhairya and
Valentino, Marco and
Freitas, Andre and
Buitelaar, Paul",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktit... | While Large Language Models (LLMs) have found success in real-world applications, their underlying explanatory process is still poorly understood. This paper proposes \textit{IBE-Eval}, a framework inspired by philosophical accounts on \textit{Inference to the Best Explanation (IBE)} to advance the interpretation and e... | [
"Dalal, Dhairya",
"Valentino, Marco",
"Freitas, Andre",
"Buitelaar, Paul"
] | Inference to the Best Explanation in Large Language Models | acl-long.14 | Poster | 2402.10767v1 |
https://aclanthology.org/2024.acl-long.15.bib | @inproceedings{poesina-etal-2024-novel,
title = "A Novel Cartography-Based Curriculum Learning Method Applied on {R}o{NLI}: The First {R}omanian Natural Language Inference Corpus",
author = "Poesina, Eduard and
Caragea, Cornelia and
Ionescu, Radu",
editor = "Ku, Lun-Wei and
Martins, And... | Natural language inference (NLI), the task of recognizing the entailment relationship in sentence pairs, is an actively studied topic serving as a proxy for natural language understanding. Despite the relevance of the task in building conversational agents and improving text classification, machine translation and othe... | [
"Poesina, Eduard",
"Caragea, Cornelia",
"Ionescu, Radu"
] | A Novel Cartography-Based Curriculum Learning Method Applied on {R}o{NLI}: The First {R}omanian Natural Language Inference Corpus | acl-long.15 | Poster | 2405.11877v4 |
https://aclanthology.org/2024.acl-long.16.bib | @inproceedings{chen-etal-2024-minprompt,
title = "{M}in{P}rompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering",
author = "Chen, Xiusi and
Jiang, Jyun-Yu and
Chang, Wei-Cheng and
Hsieh, Cho-Jui and
Yu, Hsiang-Fu and
Wang, Wei",
editor = "Ku, ... | Recent advances in few-shot question answering (QA) mostly rely on the power of pre-trained large language models (LLMs) and fine-tuning in specific settings. Although the pre-training stage has already equipped LLMs with powerful reasoning capabilities, LLMs still need to be fine-tuned to adapt to specific domains to ... | [
"Chen, Xiusi",
"Jiang, Jyun-Yu",
"Chang, Wei-Cheng",
"Hsieh, Cho-Jui",
"Yu, Hsiang-Fu",
"Wang, Wei"
] | {M}in{P}rompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering | acl-long.16 | Poster | 2306.04101v1 |
https://aclanthology.org/2024.acl-long.17.bib | @inproceedings{hu-etal-2024-sportsmetrics,
title = "{S}ports{M}etrics: Blending Text and Numerical Data to Understand Information Fusion in {LLM}s",
author = "Hu, Yebowen and
Song, Kaiqiang and
Cho, Sangwoo and
Wang, Xiaoyang and
Foroosh, Hassan and
Yu, Dong and
Liu, Fe... | Large language models hold significant potential for integrating various data types, such as text documents and database records, for advanced analytics. However, blending text and numerical data presents substantial challenges. LLMs need to process and cross-reference entities and numbers, handle data inconsistencies ... | [
"Hu, Yebowen",
"Song, Kaiqiang",
"Cho, Sangwoo",
"Wang, Xiaoyang",
"Foroosh, Hassan",
"Yu, Dong",
"Liu, Fei"
] | {S}ports{M}etrics: Blending Text and Numerical Data to Understand Information Fusion in {LLM}s | acl-long.17 | Poster | 2402.10979v2 |
https://aclanthology.org/2024.acl-long.18.bib | @inproceedings{wang-etal-2024-scimon,
title = "{S}ci{MON}: Scientific Inspiration Machines Optimized for Novelty",
author = "Wang, Qingyun and
Downey, Doug and
Ji, Heng and
Hope, Tom",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedi... | We explore and enhance the ability of neural language models to generate novel scientific directions grounded in literature. Work on literature-based hypothesis generation has traditionally focused on binary link prediction{---}severely limiting the expressivity of hypotheses. This line of work also does not focus on o... | [
"Wang, Qingyun",
"Downey, Doug",
"Ji, Heng",
"Hope, Tom"
] | {S}ci{MON}: Scientific Inspiration Machines Optimized for Novelty | acl-long.18 | Poster | 2305.14259v7 |
https://aclanthology.org/2024.acl-long.19.bib | @inproceedings{jian-etal-2024-expedited,
title = "Expedited Training of Visual Conditioned Language Generation via Redundancy Reduction",
author = "Jian, Yiren and
Liu, Tingkai and
Tao, Yunzhe and
Zhang, Chunhui and
Vosoughi, Soroush and
Yang, Hongxia",
editor = "Ku, Lun-W... | We introduce $\text{EVL}_{\text{Gen}}$, a streamlined framework designed for the pre-training of visually conditioned language generation models with high computational demands, utilizing frozen pre-trained large language models (LLMs). The conventional approach in vision-language pre-training (VLP) typically involves ... | [
"Jian, Yiren",
"Liu, Tingkai",
"Tao, Yunzhe",
"Zhang, Chunhui",
"Vosoughi, Soroush",
"Yang, Hongxia"
] | Expedited Training of Visual Conditioned Language Generation via Redundancy Reduction | acl-long.19 | Oral | 2310.03291v3 |
https://aclanthology.org/2024.acl-long.20.bib | @inproceedings{kumar-etal-2024-confidence,
title = "Confidence Under the Hood: An Investigation into the Confidence-Probability Alignment in Large Language Models",
author = "Kumar, Abhishek and
Morabito, Robert and
Umbet, Sanzhar and
Kabbara, Jad and
Emami, Ali",
editor = "Ku, L... | As the use of Large Language Models (LLMs) becomes more widespread, understanding their self-evaluation of confidence in generated responses becomes increasingly important as it is integral to the reliability of the output of these models. We introduce the concept of Confidence-Probability Alignment, that connects an L... | [
"Kumar, Abhishek",
"Morabito, Robert",
"Umbet, Sanzhar",
"Kabbara, Jad",
"Emami, Ali"
] | Confidence Under the Hood: An Investigation into the Confidence-Probability Alignment in Large Language Models | acl-long.20 | Poster | 2405.16282v5 |
https://aclanthology.org/2024.acl-long.21.bib | @inproceedings{wang-etal-2024-retrieval,
title = "Retrieval-Augmented Multilingual Knowledge Editing",
author = "Wang, Weixuan and
Haddow, Barry and
Birch, Alexandra",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual M... | Knowledge represented in Large Language Models (LLMs) is quite often incorrect and can also become obsolete over time. Updating knowledge via fine-tuning is computationally resource-hungry and not reliable, and so knowledge editing (KE) has developed as an effective and economical alternative to inject new knowledge or... | [
"Wang, Weixuan",
"Haddow, Barry",
"Birch, Alexandra",
] | Retrieval-Augmented Multilingual Knowledge Editing | acl-long.21 | Poster | 2312.13040v1 |
https://aclanthology.org/2024.acl-long.22.bib | @inproceedings{park-etal-2024-picturing,
title = "Picturing Ambiguity: A Visual Twist on the {W}inograd Schema Challenge",
author = "Park, Brendan and
Janecek, Madeline and
Ezzati-Jivan, Naser and
Li, Yifeng and
Emami, Ali",
editor = "Ku, Lun-Wei and
Martins, Andre and
... | Large Language Models (LLMs) have demonstrated remarkable success in tasks like the Winograd Schema Challenge (WSC), showcasing advanced textual common-sense reasoning. However, applying this reasoning to multimodal domains, where understanding text and images together is essential, remains a substantial challenge. To ... | [
"Park, Brendan",
"Janecek, Madeline",
"Ezzati-Jivan, Naser",
"Li, Yifeng",
"Emami, Ali"
] | Picturing Ambiguity: A Visual Twist on the {W}inograd Schema Challenge | acl-long.22 | Oral | 2405.16277v3 |
https://aclanthology.org/2024.acl-long.23.bib | @inproceedings{kumar-etal-2024-subtle,
title = "Subtle Biases Need Subtler Measures: Dual Metrics for Evaluating Representative and Affinity Bias in Large Language Models",
author = "Kumar, Abhishek and
Yunusov, Sarfaroz and
Emami, Ali",
editor = "Ku, Lun-Wei and
Martins, Andre and
... | Research on Large Language Models (LLMs) has often neglected subtle biases that, although less apparent, can significantly influence the models{'} outputs toward particular social narratives. This study addresses two such biases within LLMs: representative bias, which denotes a tendency of LLMs to generate outputs that... | [
"Kumar, Abhishek",
"Yunusov, Sarfaroz",
"Emami, Ali"
] | Subtle Biases Need Subtler Measures: Dual Metrics for Evaluating Representative and Affinity Bias in Large Language Models | acl-long.23 | Poster | 2405.14555v4 |
https://aclanthology.org/2024.acl-long.24.bib | @inproceedings{leto-etal-2024-framing,
title = "Framing in the Presence of Supporting Data: A Case Study in {U}.{S}. Economic News",
author = "Leto, Alexandria and
Pickens, Elliot and
Needell, Coen and
Rothschild, David and
Pacheco, Maria",
editor = "Ku, Lun-Wei and
Martin... | The mainstream media has much leeway in what it chooses to cover and how it covers it. These choices have real-world consequences on what people know and their subsequent behaviors. However, the lack of objective measures to evaluate editorial choices makes research in this area particularly difficult. In this paper, w... | [
"Leto, Alexandria",
"Pickens, Elliot",
"Needell, Coen",
"Rothschild, David",
"Pacheco, Maria"
] | Framing in the Presence of Supporting Data: A Case Study in {U}.{S}. Economic News | acl-long.24 | Poster | 2402.14224v2 |
https://aclanthology.org/2024.acl-long.25.bib | @inproceedings{wang-etal-2024-mementos,
title = "Mementos: A Comprehensive Benchmark for Multimodal Large Language Model Reasoning over Image Sequences",
author = "Wang, Xiyao and
Zhou, Yuhang and
Liu, Xiaoyu and
Lu, Hongjin and
Xu, Yuancheng and
He, Feihong and
Yoon, J... | Multimodal Large Language Models (MLLMs) have demonstrated proficiency in handling a variety of visual-language tasks. However, current MLLM benchmarks are predominantly designed to evaluate reasoning based on static information about a single image, and the ability of modern MLLMs to extrapolate from image sequences, ... | [
"Wang, Xiyao",
"Zhou, Yuhang",
"Liu, Xiaoyu",
"Lu, Hongjin",
"Xu, Yuancheng",
"He, Feihong",
"Yoon, Jaehong",
"Lu, Taixi",
"Liu, Fuxiao",
"Bertasius, Gedas",
"Bansal, Mohit",
"Yao, Huaxiu",
"Huang, Furong"
] | Mementos: A Comprehensive Benchmark for Multimodal Large Language Model Reasoning over Image Sequences | acl-long.25 | Poster | 2401.10529v2 |
https://aclanthology.org/2024.acl-long.26.bib | @inproceedings{gao-etal-2024-ttm,
title = "{TTM}-{RE}: Memory-Augmented Document-Level Relation Extraction",
author = "Gao, Chufan and
Wang, Xuan and
Sun, Jimeng",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeti... | Document-level relation extraction aims to categorize the association between any two entities within a document. We find that previous methods for document-level relation extraction are ineffective in exploiting the full potential of large amounts of training data with varied noise levels. For example, in the ReDocRED ... | [
"Gao, Chufan",
"Wang, Xuan",
"Sun, Jimeng"
] | {TTM}-{RE}: Memory-Augmented Document-Level Relation Extraction | acl-long.26 | Poster | 2310.09265v1 |
https://aclanthology.org/2024.acl-long.27.bib | @inproceedings{peng-etal-2024-answer,
title = "Answer is All You Need: Instruction-following Text Embedding via Answering the Question",
author = "Peng, Letian and
Zhang, Yuwei and
Wang, Zilong and
Srinivasa, Jayanth and
Liu, Gaowen and
Wang, Zihan and
Shang, Jingbo",
... | This work aims to build a text embedder that can capture characteristics of texts specified by user instructions clarifying the similarity criterion. While previous methods improve general task awareness by injecting the instruction information into encoding, they fail to be sensitive to clearer criteria like {``}evalu... | [
"Peng, Letian",
"Zhang, Yuwei",
"Wang, Zilong",
"Srinivasa, Jayanth",
"Liu, Gaowen",
"Wang, Zihan",
"Shang, Jingbo"
] | Answer is All You Need: Instruction-following Text Embedding via Answering the Question | acl-long.27 | Poster | 2402.09642v1 |
https://aclanthology.org/2024.acl-long.28.bib | @inproceedings{zhou-etal-2024-explore,
title = "Explore Spurious Correlations at the Concept Level in Language Models for Text Classification",
author = "Zhou, Yuhang and
Xu, Paiheng and
Liu, Xiaoyu and
An, Bang and
Ai, Wei and
Huang, Furong",
editor = "Ku, Lun-Wei and
... | Language models (LMs) have achieved notable success in numerous NLP tasks, employing both fine-tuning and in-context learning (ICL) methods. While language models demonstrate exceptional performance, they face robustness challenges due to spurious correlations arising from imbalanced label distributions in training dat... | [
"Zhou, Yuhang",
"Xu, Paiheng",
"Liu, Xiaoyu",
"An, Bang",
"Ai, Wei",
"Huang, Furong"
] | Explore Spurious Correlations at the Concept Level in Language Models for Text Classification | acl-long.28 | Poster | 2311.08648v4 |
https://aclanthology.org/2024.acl-long.29.bib | @inproceedings{cheng-etal-2024-every,
title = "Every Answer Matters: Evaluating Commonsense with Probabilistic Measures",
author = "Cheng, Qi and
Boratko, Michael and
Yelugam, Pranay Kumar and
O{'}Gorman, Tim and
Singh, Nalini and
McCallum, Andrew and
Li, Xiang",
ed... | Large language models have demonstrated impressive performance on commonsense tasks; however, these tasks are often posed as multiple-choice questions, allowing models to exploit systematic biases. Commonsense is also inherently probabilistic with multiple correct answers. The purpose of {``}boiling water{''} could be ... | [
"Cheng, Qi",
"Boratko, Michael",
"Yelugam, Pranay Kumar",
"O{'}Gorman, Tim",
"Singh, Nalini",
"McCallum, Andrew",
"Li, Xiang"
] | Every Answer Matters: Evaluating Commonsense with Probabilistic Measures | acl-long.29 | Poster | 2406.04145v1 |
https://aclanthology.org/2024.acl-long.30.bib | @inproceedings{xie-etal-2024-gradsafe,
title = "{G}rad{S}afe: Detecting Jailbreak Prompts for {LLM}s via Safety-Critical Gradient Analysis",
author = "Xie, Yueqi and
Fang, Minghong and
Pi, Renjie and
Gong, Neil",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek... | Large Language Models (LLMs) face threats from jailbreak prompts. Existing methods for detecting jailbreak prompts are primarily online moderation APIs or finetuned LLMs. These strategies, however, often require extensive and resource-intensive data collection and training processes. In this study, we propose GradSafe,... | [
"Xie, Yueqi",
"Fang, Minghong",
"Pi, Renjie",
"Gong, Neil"
] | {G}rad{S}afe: Detecting Jailbreak Prompts for {LLM}s via Safety-Critical Gradient Analysis | acl-long.30 | Poster | 2402.13494v2 |
https://aclanthology.org/2024.acl-long.31.bib | @inproceedings{lee-etal-2024-pouring,
title = "Pouring Your Heart Out: Investigating the Role of Figurative Language in Online Expressions of Empathy",
author = "Lee, Gyeongeun and
Wong, Christina and
Guo, Meghan and
Parde, Natalie",
editor = "Ku, Lun-Wei and
Martins, Andre and
... | Empathy is a social mechanism used to support and strengthen emotional connection with others, including in online communities. However, little is currently known about the nature of these online expressions, nor the particular factors that may lead to their improved detection. In this work, we study the role of a spec... | [
"Lee, Gyeongeun",
"Wong, Christina",
"Guo, Meghan",
"Parde, Natalie"
] | Pouring Your Heart Out: Investigating the Role of Figurative Language in Online Expressions of Empathy | acl-long.31 | Poster | 2009.08441v1 |
https://aclanthology.org/2024.acl-long.32.bib | @inproceedings{wang-etal-2024-information,
title = "An Information-Theoretic Approach to Analyze {NLP} Classification Tasks",
author = "Wang, Luran and
Gales, Mark and
Raina, Vatsal",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of t... | Understanding the contribution of the inputs on the output is useful across many tasks. This work provides an information-theoretic framework to analyse the influence of inputs for text classification tasks. Natural language processing (NLP) tasks take either a single or multiple text elements to predict an output vari... | [
"Wang, Luran",
"Gales, Mark",
"Raina, Vatsal"
] | An Information-Theoretic Approach to Analyze {NLP} Classification Tasks | acl-long.32 | Poster | 2402.00978v1 |
https://aclanthology.org/2024.acl-long.33.bib | @inproceedings{zhang-etal-2024-model,
title = "Can Your Model Tell a Negation from an Implicature? Unravelling Challenges With Intent Encoders",
author = "Zhang, Yuwei and
Singh, Siffi and
Sengupta, Sailik and
Shalyminov, Igor and
Su, Hang and
Song, Hwanjun and
Mansour,... | Conversational systems often rely on embedding models for intent classification and intent clustering tasks. The advent of Large Language Models (LLMs), which enable instructional embeddings allowing one to adjust semantics over the embedding space using prompts, are being viewed as a panacea for these downstream conve... | [
"Zhang, Yuwei",
"Singh, Siffi",
"Sengupta, Sailik",
"Shalyminov, Igor",
"Su, Hang",
"Song, Hwanjun",
"Mansour, Saab"
] | Can Your Model Tell a Negation from an Implicature? Unravelling Challenges With Intent Encoders | acl-long.33 | Poster | 2403.04314v1 |
https://aclanthology.org/2024.acl-long.34.bib | @inproceedings{he-etal-2024-wav2gloss,
title = "{W}av2{G}loss: Generating Interlinear Glossed Text from Speech",
author = "He, Taiqi and
Choi, Kwanghee and
Tjuatja, Lindia and
Robinson, Nathaniel and
Shi, Jiatong and
Watanabe, Shinji and
Neubig, Graham and
Morten... | Thousands of the world{'}s languages are in danger of extinction{---}a tremendous threat to cultural identities and human language diversity. Interlinear Glossed Text (IGT) is a form of linguistic annotation that can support documentation and resource creation for these languages{'} communities. IGT typically consists ... | [
"He, Taiqi",
"Choi, Kwanghee",
"Tjuatja, Lindia",
"Robinson, Nathaniel",
"Shi, Jiatong",
"Watanabe, Shinji",
"Neubig, Graham",
"Mortensen, David",
"Levin, Lori"
] | {W}av2{G}loss: Generating Interlinear Glossed Text from Speech | acl-long.34 | Poster | 2403.13169v2 |
https://aclanthology.org/2024.acl-long.35.bib | @inproceedings{hu-etal-2024-leveraging,
title = "Leveraging Codebook Knowledge with {NLI} and {C}hat{GPT} for Zero-Shot Political Relation Classification",
author = "Hu, Yibo and
Skorupa Parolin, Erick and
Khan, Latifur and
Brandt, Patrick and
Osorio, Javier and
D{'}Orazio, Vi... | Is it possible to accurately classify political relations within evolving event ontologies without extensive annotations? This study investigates zero-shot learning methods that use expert knowledge from an existing annotation codebook, and evaluates the performance of advanced ChatGPT (GPT-3.5/4) and a natural language infe... | [
"Hu, Yibo",
"Skorupa Parolin, Erick",
"Khan, Latifur",
"Brandt, Patrick",
"Osorio, Javier",
"D{'}Orazio, Vito"
] | Leveraging Codebook Knowledge with {NLI} and {C}hat{GPT} for Zero-Shot Political Relation Classification | acl-long.35 | Poster | 2308.07876v3 |
https://aclanthology.org/2024.acl-long.36.bib | @inproceedings{xu-wang-2024-spor,
title = "{SPOR}: A Comprehensive and Practical Evaluation Method for Compositional Generalization in Data-to-Text Generation",
author = "Xu, Ziyao and
Wang, Houfeng",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Procee... | Compositional generalization is an important ability of language models and has many different manifestations. For data-to-text generation, previous research on this ability is limited to a single manifestation called Systematicity and lacks consideration of large language models (LLMs), which cannot fully cover practi... | [
"Xu, Ziyao",
"Wang, Houfeng"
] | {SPOR}: A Comprehensive and Practical Evaluation Method for Compositional Generalization in Data-to-Text Generation | acl-long.36 | Poster | 2405.10650v8 |
https://aclanthology.org/2024.acl-long.37.bib | @inproceedings{shi-etal-2024-opex,
title = "{OPE}x: A Component-Wise Analysis of {LLM}-Centric Agents in Embodied Instruction Following",
author = "Shi, Haochen and
Sun, Zhiyuan and
Yuan, Xingdi and
C{\^o}t{\'e}, Marc-Alexandre and
Liu, Bang",
editor = "Ku, Lun-Wei and
Mar... | Embodied Instruction Following (EIF) is a crucial task in embodied learning, requiring agents to interact with their environment through egocentric observations to fulfill natural language instructions. Recent advancements have seen a surge in employing large language models (LLMs) within a framework-centric approach t... | [
"Shi, Haochen",
"Sun, Zhiyuan",
"Yuan, Xingdi",
"C{\\^o}t{\\'e}, Marc-Alexandre",
"Liu, Bang"
] | {OPE}x: A Component-Wise Analysis of {LLM}-Centric Agents in Embodied Instruction Following | acl-long.37 | Poster | 2310.12344v1 |
https://aclanthology.org/2024.acl-long.38.bib | @inproceedings{shen-etal-2024-multimodal,
title = "Multimodal Instruction Tuning with Conditional Mixture of {L}o{RA}",
author = "Shen, Ying and
Xu, Zhiyang and
Wang, Qifan and
Cheng, Yu and
Yin, Wenpeng and
Huang, Lifu",
editor = "Ku, Lun-Wei and
Martins, Andre an... | Multimodal Large Language Models (MLLMs) have demonstrated remarkable proficiency in diverse tasks across different domains, with an increasing focus on improving their zero-shot generalization capabilities for unseen multimodal tasks. Multimodal instruction tuning has emerged as a successful strategy for achieving zer... | [
"Shen, Ying",
"Xu, Zhiyang",
"Wang, Qifan",
"Cheng, Yu",
"Yin, Wenpeng",
"Huang, Lifu"
] | Multimodal Instruction Tuning with Conditional Mixture of {L}o{RA} | acl-long.38 | Poster | 2402.15896v1 |
https://aclanthology.org/2024.acl-long.39.bib | @inproceedings{xie-etal-2024-doclens,
title = "{D}oc{L}ens: Multi-aspect Fine-grained Medical Text Evaluation",
author = "Xie, Yiqing and
Zhang, Sheng and
Cheng, Hao and
Liu, Pengfei and
Gero, Zelalem and
Wong, Cliff and
Naumann, Tristan and
Poon, Hoifung and
... | Medical text generation aims to assist with administrative work and highlight salient information to support decision-making. To reflect the specific requirements of medical text, in this paper, we propose a set of metrics to evaluate the completeness, conciseness, and attribution of the generated text at a fine-grained... | [
"Xie, Yiqing",
"Zhang, Sheng",
"Cheng, Hao",
"Liu, Pengfei",
"Gero, Zelalem",
"Wong, Cliff",
"Naumann, Tristan",
"Poon, Hoifung",
"Rose, Carolyn"
] | {D}oc{L}ens: Multi-aspect Fine-grained Medical Text Evaluation | acl-long.39 | Poster | 2404.07613v1 |
https://aclanthology.org/2024.acl-long.40.bib | @inproceedings{xia-etal-2024-fofo,
title = "{FOFO}: A Benchmark to Evaluate {LLM}s{'} Format-Following Capability",
author = "Xia, Congying and
Xing, Chen and
Du, Jiangshu and
Yang, Xinyi and
Feng, Yihao and
Xu, Ran and
Yin, Wenpeng and
Xiong, Caiming",
edito... | This paper presents FoFo, a pioneering benchmark for evaluating large language models{'} (LLMs) ability to follow complex, domain-specific formats, a crucial yet under-examined capability for their application as AI agents. Despite LLMs{'} advancements, existing benchmarks fail to assess their format-following proficie... | [
"Xia, Congying",
"Xing, Chen",
"Du, Jiangshu",
"Yang, Xinyi",
"Feng, Yihao",
"Xu, Ran",
"Yin, Wenpeng",
"Xiong, Caiming"
] | {FOFO}: A Benchmark to Evaluate {LLM}s{'} Format-Following Capability | acl-long.40 | Poster | 2403.12316v1 |
https://aclanthology.org/2024.acl-long.41.bib | @inproceedings{yoo-etal-2024-hyper,
title = "Hyper-{CL}: Conditioning Sentence Representations with Hypernetworks",
author = "Yoo, Young and
Cha, Jii and
Kim, Changhyeon and
Kim, Taeuk",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Procee... | While the introduction of contrastive learning frameworks in sentence representation learning has significantly contributed to advancements in the field, it still remains unclear whether state-of-the-art sentence embeddings can capture the fine-grained semantics of sentences, particularly when conditioned on specific p... | [
"Yoo, Young",
"Cha, Jii",
"Kim, Changhyeon",
"Kim, Taeuk"
] | Hyper-{CL}: Conditioning Sentence Representations with Hypernetworks | acl-long.41 | Poster | 2403.09490v2 |
https://aclanthology.org/2024.acl-long.42.bib | @inproceedings{lim-etal-2024-analysis,
title = "Analysis of Multi-Source Language Training in Cross-Lingual Transfer",
author = "Lim, Seonghoon and
Yun, Taejun and
Kim, Jinhyeon and
Choi, Jihun and
Kim, Taeuk",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, ... | The successful adaptation of multilingual language models (LMs) to a specific language-task pair critically depends on the availability of data tailored for that condition. While cross-lingual transfer (XLT) methods have contributed to addressing this data scarcity problem, there still exists ongoing debate about the m... | [
"Lim, Seonghoon",
"Yun, Taejun",
"Kim, Jinhyeon",
"Choi, Jihun",
"Kim, Taeuk"
] | Analysis of Multi-Source Language Training in Cross-Lingual Transfer | acl-long.42 | Poster | 1712.01813v1 |
https://aclanthology.org/2024.acl-long.43.bib | @inproceedings{ghosh-etal-2024-abex,
title = "{ABEX}: Data Augmentation for Low-Resource {NLU} via Expanding Abstract Descriptions",
author = "Ghosh, Sreyan and
Tyagi, Utkarsh and
Kumar, Sonal and
Evuru, Chandra Kiran and
S, Ramaneswaran and
Sakshi, S and
Manocha, Dines... | We present ABEX, a novel and effective generative data augmentation methodology for low-resource Natural Language Understanding (NLU) tasks. ABEX is based on ABstract-and-EXpand, a novel paradigm for generating diverse forms of an input document {--} we first convert a document into its concise, abstract description an... | [
"Ghosh, Sreyan",
"Tyagi, Utkarsh",
"Kumar, Sonal",
"Evuru, Chandra Kiran",
"S, Ramaneswaran",
"Sakshi, S",
"Manocha, Dinesh"
] | {ABEX}: Data Augmentation for Low-Resource {NLU} via Expanding Abstract Descriptions | acl-long.43 | Poster | 2406.04286v1 |
https://aclanthology.org/2024.acl-long.44.bib | @inproceedings{bandarkar-etal-2024-belebele,
title = "The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants",
author = "Bandarkar, Lucas and
Liang, Davis and
Muller, Benjamin and
Artetxe, Mikel and
Shukla, Satya Narayan and
Husa, Donald and... | We present Belebele, a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. Significantly expanding the language coverage of natural language understanding (NLU) benchmarks, this dataset enables the evaluation of text models in high-, medium-, and low-resource languages. Each ques... | [
"Bandarkar, Lucas",
"Liang, Davis",
"Muller, Benjamin",
"Artetxe, Mikel",
"Shukla, Satya Narayan",
"Husa, Donald",
"Goyal, Naman",
"Krishnan, Abhinandan",
"Zettlemoyer, Luke",
"Khabsa, Madian"
] | The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants | acl-long.44 | Poster | 2308.16884v2 |
https://aclanthology.org/2024.acl-long.45.bib | @inproceedings{an-etal-2024-learn,
title = "Learn from Failure: Fine-tuning {LLM}s with Trial-and-Error Data for Intuitionistic Propositional Logic Proving",
author = "An, Chenyang and
Chen, Zhibo and
Ye, Qihao and
First, Emily and
Peng, Letian and
Zhang, Jiayun and
Wan... | Recent advances in Automated Theorem Proving have shown the effectiveness of leveraging a (large) language model that generates tactics (i.e. proof steps) to search through proof states. The current model, while trained solely on successful proof paths, faces a discrepancy at the inference stage, as it must sample and ... | [
"An, Chenyang",
"Chen, Zhibo",
"Ye, Qihao",
"First, Emily",
"Peng, Letian",
"Zhang, Jiayun",
"Wang, Zihan",
"Lerner, Sorin",
"Shang, Jingbo"
] | Learn from Failure: Fine-tuning {LLM}s with Trial-and-Error Data for Intuitionistic Propositional Logic Proving | acl-long.45 | Poster | 2207.07306v1 |
https://aclanthology.org/2024.acl-long.46.bib | @inproceedings{lee-etal-2024-interactive,
title = "Interactive Text-to-Image Retrieval with Large Language Models: A Plug-and-Play Approach",
author = "Lee, Saehyung and
Yu, Sangwon and
Park, Junsung and
Yi, Jihun and
Yoon, Sungroh",
editor = "Ku, Lun-Wei and
Martins, Andr... | In this paper, we primarily address the issue of dialogue-form context query within the interactive text-to-image retrieval task. Our methodology, PlugIR, actively utilizes the general instruction-following capability of LLMs in two ways. First, by reformulating the dialogue-form context, we eliminate the necessity of ... | [
"Lee, Saehyung",
"Yu, Sangwon",
"Park, Junsung",
"Yi, Jihun",
"Yoon, Sungroh"
] | Interactive Text-to-Image Retrieval with Large Language Models: A Plug-and-Play Approach | acl-long.46 | Oral | 2404.05825v1 |
https://aclanthology.org/2024.acl-long.47.bib | @inproceedings{lin-etal-2024-imbue,
title = "{IMBUE}: Improving Interpersonal Effectiveness through Simulation and Just-in-time Feedback with Human-Language Model Interaction",
author = "Lin, Inna and
Sharma, Ashish and
Rytting, Christopher and
Miner, Adam and
Suh, Jina and
Al... | Navigating certain communication situations can be challenging due to individuals{'} lack of skills and the interference of strong emotions. However, effective learning opportunities are rarely accessible. In this work, we conduct a human-centered study that uses language models to simulate bespoke communication traini... | [
"Lin, Inna",
"Sharma, Ashish",
"Rytting, Christopher",
"Miner, Adam",
"Suh, Jina",
"Althoff, Tim"
] | {IMBUE}: Improving Interpersonal Effectiveness through Simulation and Just-in-time Feedback with Human-Language Model Interaction | acl-long.47 | Poster | 2402.12556v1 |
https://aclanthology.org/2024.acl-long.48.bib | @inproceedings{lin-etal-2024-token,
title = "Token-wise Influential Training Data Retrieval for Large Language Models",
author = "Lin, Huawei and
Long, Jikai and
Xu, Zhaozhuo and
Zhao, Weijie",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = ... | Given a Large Language Model (LLM) generation, how can we identify which training data led to this generation? In this paper, we propose RapidIn, a scalable framework adapting to LLMs for estimating the influence of each training data. The proposed framework consists of two stages: caching and retrieval. First, we com... | [
"Lin, Huawei",
"Long, Jikai",
"Xu, Zhaozhuo",
"Zhao, Weijie"
] | Token-wise Influential Training Data Retrieval for Large Language Models | acl-long.48 | Poster | 2305.13286v2 |
https://aclanthology.org/2024.acl-long.49.bib | @inproceedings{weinzierl-harabagiu-2024-tree,
title = "Tree-of-Counterfactual Prompting for Zero-Shot Stance Detection",
author = "Weinzierl, Maxwell and
Harabagiu, Sanda",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Mee... | Stance detection enables the inference of attitudes from human communications. Automatic stance identification has mostly been cast as a classification problem. However, stance decisions involve complex judgments, which can nowadays be generated by prompting Large Language Models (LLMs). In this paper we present a new metho... | [
"Weinzierl, Maxwell",
"Harabagiu, Sanda"
] | Tree-of-Counterfactual Prompting for Zero-Shot Stance Detection | acl-long.49 | Poster | 2310.19750v1 |
https://aclanthology.org/2024.acl-long.50.bib | @inproceedings{koh-etal-2024-visualwebarena,
title = "{V}isual{W}eb{A}rena: Evaluating Multimodal Agents on Realistic Visual Web Tasks",
author = "Koh, Jing Yu and
Lo, Robert and
Jang, Lawrence and
Duvvur, Vikram and
Lim, Ming and
Huang, Po-Yu and
Neubig, Graham and
... | Autonomous agents capable of planning, reasoning, and executing actions on the web offer a promising avenue for automating computer tasks. However, the majority of existing benchmarks primarily focus on text-based agents, neglecting many natural tasks that require visual information to effectively solve. Given that mos... | [
"Koh, Jing Yu",
"Lo, Robert",
"Jang, Lawrence",
"Duvvur, Vikram",
"Lim, Ming",
"Huang, Po-Yu",
"Neubig, Graham",
"Zhou, Shuyan",
"Salakhutdinov, Russ",
"Fried, Daniel"
] | {V}isual{W}eb{A}rena: Evaluating Multimodal Agents on Realistic Visual Web Tasks | acl-long.50 | Poster | 2401.13649v2 |
https://aclanthology.org/2024.acl-long.51.bib | @inproceedings{song-etal-2024-finesure,
title = "{F}ine{S}ur{E}: Fine-grained Summarization Evaluation using {LLM}s",
author = "Song, Hwanjun and
Su, Hang and
Shalyminov, Igor and
Cai, Jason and
Mansour, Saab",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, ... | Automated evaluation is crucial for streamlining text summarization benchmarking and model development, given the costly and time-consuming nature of human evaluation. Traditional methods like ROUGE do not correlate well with human judgment, while recently proposed LLM-based metrics provide only summary-level assessmen... | [
"Song, Hwanjun",
"Su, Hang",
"Shalyminov, Igor",
"Cai, Jason",
"Mansour, Saab"
] | {F}ine{S}ur{E}: Fine-grained Summarization Evaluation using {LLM}s | acl-long.51 | Poster | 2402.17008v1 |
https://aclanthology.org/2024.acl-long.52.bib | @inproceedings{ahn-etal-2024-tuning,
title = "Tuning Large Multimodal Models for Videos using Reinforcement Learning from {AI} Feedback",
author = "Ahn, Daechul and
Choi, Yura and
Yu, Youngjae and
Kang, Dongyeop and
Choi, Jonghyun",
editor = "Ku, Lun-Wei and
Martins, Andre... | Recent advancements in large language models have influenced the development of video large multimodal models (VLMMs). Previous approaches for VLMMs involve Supervised Fine-Tuning (SFT) with instruction-tuned datasets, integrating LLM with visual encoders, and additional learnable parameters. Here, aligning video with ... | [
"Ahn, Daechul",
"Choi, Yura",
"Yu, Youngjae",
"Kang, Dongyeop",
"Choi, Jonghyun"
] | Tuning Large Multimodal Models for Videos using Reinforcement Learning from {AI} Feedback | acl-long.52 | Oral | 2402.03746v3 |
https://aclanthology.org/2024.acl-long.53.bib | @inproceedings{zhan-etal-2024-prompt,
title = "Prompt Refinement with Image Pivot for Text-to-Image Generation",
author = "Zhan, Jingtao and
Ai, Qingyao and
Liu, Yiqun and
Pan, Yingwei and
Yao, Ting and
Mao, Jiaxin and
Ma, Shaoping and
Mei, Tao",
editor = "Ku... | For text-to-image generation, automatically refining user-provided natural language prompts into the keyword-enriched prompts favored by systems is essential for the user experience. Such a prompt refinement process is analogous to translating the prompt from {``}user languages{''} into {``}system languages{''}. Howeve... | [
"Zhan, Jingtao",
"Ai, Qingyao",
"Liu, Yiqun",
"Pan, Yingwei",
"Yao, Ting",
"Mao, Jiaxin",
"Ma, Shaoping",
"Mei, Tao"
] | Prompt Refinement with Image Pivot for Text-to-Image Generation | acl-long.53 | Poster | 2407.00247v1 |
https://aclanthology.org/2024.acl-long.54.bib | @inproceedings{mita-etal-2024-striking,
title = "Striking Gold in Advertising: Standardization and Exploration of Ad Text Generation",
author = "Mita, Masato and
Murakami, Soichiro and
Kato, Akihiko and
Zhang, Peinan",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar,... | In response to the limitations of manual ad creation, significant research has been conducted in the field of automatic ad text generation (ATG). However, the lack of comprehensive benchmarks and well-defined problem sets has made comparing different methods challenging. To tackle these challenges, we standardize the t... | [
"Mita, Masato",
"Murakami, Soichiro",
"Kato, Akihiko",
"Zhang, Peinan"
] | Striking Gold in Advertising: Standardization and Exploration of Ad Text Generation | acl-long.54 | Poster | 2309.12030v2 |
https://aclanthology.org/2024.acl-long.55.bib | @inproceedings{wang-etal-2024-absinstruct,
title = "{A}bs{I}nstruct: Eliciting Abstraction Ability from {LLM}s through Explanation Tuning with Plausibility Estimation",
author = "Wang, Zhaowei and
Fan, Wei and
Zong, Qing and
Zhang, Hongming and
Choi, Sehyun and
Fang, Tianqing ... | Abstraction ability is crucial in human intelligence, which can also benefit various tasks in NLP study. Existing work shows that LLMs are deficient in abstract ability, and how to improve it remains unexplored. In this work, we design the framework AbsInstruct to enhance LLMs{'} abstraction ability through instruction... | [
"Wang, Zhaowei",
"Fan, Wei",
"Zong, Qing",
"Zhang, Hongming",
"Choi, Sehyun",
"Fang, Tianqing",
"Liu, Xin",
"Song, Yangqiu",
"Wong, Ginny",
"See, Simon"
] | {A}bs{I}nstruct: Eliciting Abstraction Ability from {LLM}s through Explanation Tuning with Plausibility Estimation | acl-long.55 | Poster | 2402.10646v2 |