bibtex_url | acl_proceedings | bibtext | abstract | authors | title | id | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | paper_page_exists_pre_conf | type
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2023.emnlp-main.1.bib | https://aclanthology.org/2023.emnlp-main.1/ | @inproceedings{zhang-etal-2023-iag,
title = "{IAG}: Induction-Augmented Generation Framework for Answering Reasoning Questions",
author = "Zhang, Zhebin and
Zhang, Xinyu and
Ren, Yuanhang and
Shi, Saijiang and
Han, Meng and
Wu, Yongkang and
Lai, Ruofei and
Cao, Z... | Retrieval-Augmented Generation (RAG), by incorporating external knowledge with parametric memory of language models, has become the state-of-the-art architecture for open-domain QA tasks. However, common knowledge bases are inherently constrained by limited coverage and noisy information, making retrieval-based approac... | [
"Zhang, Zhebin",
"Zhang, Xinyu",
"Ren, Yuanhang",
"Shi, Saijiang",
"Han, Meng",
"Wu, Yongkang",
"Lai, Ruofei",
"Cao, Zhao"
] | IAG: Induction-Augmented Generation Framework for Answering Reasoning Questions | emnlp-main.1 | 2311.18397 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.2.bib | https://aclanthology.org/2023.emnlp-main.2/ | @inproceedings{yamamoto-matsuzaki-2023-absolute,
title = "Absolute Position Embedding Learns Sinusoid-like Waves for Attention Based on Relative Position",
author = "Yamamoto, Yuji and
Matsuzaki, Takuya",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Procee... | Attention weight is a clue to interpret how a Transformer-based model makes an inference. In some attention heads, the attention focuses on the neighbors of each token. This allows the output vector of each token to depend on the surrounding tokens and contributes to make the inference context-dependent. We analyze the... | [
"Yamamoto, Yuji",
"Matsuzaki, Takuya"
] | Absolute Position Embedding Learns Sinusoid-like Waves for Attention Based on Relative Position | emnlp-main.2 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral | |
https://aclanthology.org/2023.emnlp-main.3.bib | https://aclanthology.org/2023.emnlp-main.3/ | @inproceedings{qiang-etal-2023-chinese,
title = "{C}hinese Lexical Substitution: Dataset and Method",
author = "Qiang, Jipeng and
Liu, Kang and
Li, Ying and
Li, Yun and
Zhu, Yi and
Yuan, Yun-Hao and
Hu, Xiaocheng and
Ouyang, Xiaoye",
editor = "Bouamor, Houda ... | Existing lexical substitution (LS) benchmarks were collected by asking human annotators to think of substitutes from memory, resulting in benchmarks with limited coverage and relatively small scales. To overcome this problem, we propose a novel annotation method to construct an LS dataset based on human and machine col... | [
"Qiang, Jipeng",
"Liu, Kang",
"Li, Ying",
"Li, Yun",
"Zhu, Yi",
"Yuan, Yun-Hao",
"Hu, Xiaocheng",
"Ouyang, Xiaoye"
] | Chinese Lexical Substitution: Dataset and Method | emnlp-main.3 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.4.bib | https://aclanthology.org/2023.emnlp-main.4/ | @inproceedings{sun-etal-2023-decoding,
title = "Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting",
author = "Sun, Chenkai and
Li, Jinning and
Fung, Yi and
Chan, Hou and
Abdelzaher, Tarek and
Zhai, ChengXian... | Automatic response forecasting for news media plays a crucial role in enabling content producers to efficiently predict the impact of news releases and prevent unexpected negative outcomes such as social conflict and moral injury. To effectively forecast responses, it is essential to develop measures that leverage the ... | [
"Sun, Chenkai",
"Li, Jinning",
"Fung, Yi",
"Chan, Hou",
"Abdelzaher, Tarek",
"Zhai, ChengXiang",
"Ji, Heng"
] | Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting | emnlp-main.4 | 2310.13297 | [
"https://github.com/chenkaisun/socialsense"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.5.bib | https://aclanthology.org/2023.emnlp-main.5/ | @inproceedings{yao-etal-2023-fine,
title = "Fine-grained Conversational Decoding via Isotropic and Proximal Search",
author = "Yao, Yuxuan and
Wu, Han and
Xu, Qiling and
Song, Linqi",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings o... | General-purpose text decoding approaches are usually adopted for dialogue response generation. Although the quality of the generated responses can be improved with dialogue-specific encoding methods, conversational decoding methods are still under-explored. Inspired by SimDRC that a good dialogue feature space should f... | [
"Yao, Yuxuan",
"Wu, Han",
"Xu, Qiling",
"Song, Linqi"
] | Fine-grained Conversational Decoding via Isotropic and Proximal Search | emnlp-main.5 | 2310.08130 | [
"https://github.com/starrYYxuan/IPS"
] | https://huggingface.co/papers/2310.08130 | 0 | 0 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.6.bib | https://aclanthology.org/2023.emnlp-main.6/ | @inproceedings{stefanovitch-piskorski-2023-holistic,
title = "Holistic Inter-Annotator Agreement and Corpus Coherence Estimation in a Large-scale Multilingual Annotation Campaign",
author = "Stefanovitch, Nicolas and
Piskorski, Jakub",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, K... | In this paper we report on the complexity of persuasion technique annotation in the context of a large multilingual annotation campaign involving 6 languages and approximately 40 annotators. We highlight the techniques that appear to be difficult for humans to annotate and elaborate on our findings on the causes of thi... | [
"Stefanovitch, Nicolas",
"Piskorski, Jakub"
] | Holistic Inter-Annotator Agreement and Corpus Coherence Estimation in a Large-scale Multilingual Annotation Campaign | emnlp-main.6 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.7.bib | https://aclanthology.org/2023.emnlp-main.7/ | @inproceedings{borenstein-etal-2023-phd,
title = "{PHD}: Pixel-Based Language Modeling of Historical Documents",
author = "Borenstein, Nadav and
Rust, Phillip and
Elliott, Desmond and
Augenstein, Isabelle",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
boo... | The digitisation of historical documents has provided historians with unprecedented research opportunities. Yet, the conventional approach to analysing historical documents involves converting them from images to text using OCR, a process that overlooks the potential benefits of treating them as images and introduces h... | [
"Borenstein, Nadav",
"Rust, Phillip",
"Elliott, Desmond",
"Augenstein, Isabelle"
] | PHD: Pixel-Based Language Modeling of Historical Documents | emnlp-main.7 | 2310.18343 | [
"https://github.com/nadavborenstein/pixel-bw"
] | https://huggingface.co/papers/2310.18343 | 1 | 1 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.8.bib | https://aclanthology.org/2023.emnlp-main.8/ | @inproceedings{wang-etal-2023-primacy,
title = "Primacy Effect of {C}hat{GPT}",
author = "Wang, Yiwei and
Cai, Yujun and
Chen, Muhao and
Liang, Yuxuan and
Hooi, Bryan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 20... | Instruction-tuned large language models (LLMs), such as ChatGPT, have led to promising zero-shot performance in discriminative natural language understanding (NLU) tasks. This involves querying the LLM using a prompt containing the question, and the candidate labels to choose from. The question-answering capabilities o... | [
"Wang, Yiwei",
"Cai, Yujun",
"Chen, Muhao",
"Liang, Yuxuan",
"Hooi, Bryan"
] | Primacy Effect of ChatGPT | emnlp-main.8 | 2310.13206 | [
"https://github.com/wangywust/primacyeffectgpt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.9.bib | https://aclanthology.org/2023.emnlp-main.9/ | @inproceedings{kawabata-sugawara-2023-evaluating,
title = "Evaluating the Rationale Understanding of Critical Reasoning in Logical Reading Comprehension",
author = "Kawabata, Akira and
Sugawara, Saku",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedin... | To precisely evaluate a language model{'}s capability for logical reading comprehension, we present a dataset for testing the understanding of the rationale behind critical reasoning. For questions taken from an existing multiple-choice logical reading comprehension dataset, we crowdsource rationale texts that explain ... | [
"Kawabata, Akira",
"Sugawara, Saku"
] | Evaluating the Rationale Understanding of Critical Reasoning in Logical Reading Comprehension | emnlp-main.9 | 2311.18353 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.10.bib | https://aclanthology.org/2023.emnlp-main.10/ | @inproceedings{muller-etal-2023-evaluating,
title = "Evaluating and Modeling Attribution for Cross-Lingual Question Answering",
author = "Muller, Benjamin and
Wieting, John and
Clark, Jonathan and
Kwiatkowski, Tom and
Ruder, Sebastian and
Soares, Livio and
Aharoni, Roee... | Trustworthy answer content is abundant in many high-resource languages and is instantly accessible through question answering systems {---} yet this content can be hard to access for those that do not speak these languages. The leap forward in cross-lingual modeling quality offered by generative language models offers ... | [
"Muller, Benjamin",
"Wieting, John",
"Clark, Jonathan",
"Kwiatkowski, Tom",
"Ruder, Sebastian",
"Soares, Livio",
"Aharoni, Roee",
"Herzig, Jonathan",
"Wang, Xinyi"
] | Evaluating and Modeling Attribution for Cross-Lingual Question Answering | emnlp-main.10 | 2305.14332 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.11.bib | https://aclanthology.org/2023.emnlp-main.11/ | @inproceedings{oladipo-etal-2023-better,
title = "Better Quality Pre-training Data and T5 Models for {A}frican Languages",
author = "Oladipo, Akintunde and
Adeyemi, Mofetoluwa and
Ahia, Orevaoghene and
Owodunni, Abraham and
Ogundepo, Odunayo and
Adelani, David and
Lin, ... | In this study, we highlight the importance of enhancing the quality of pretraining data in multilingual language models. Existing web crawls have demonstrated quality issues, particularly in the context of low-resource languages. Consequently, we introduce a new multilingual pretraining corpus for 16 African languages,... | [
"Oladipo, Akintunde",
"Adeyemi, Mofetoluwa",
"Ahia, Orevaoghene",
"Owodunni, Abraham",
"Ogundepo, Odunayo",
"Adelani, David",
"Lin, Jimmy"
] | Better Quality Pre-training Data and T5 Models for African Languages | emnlp-main.11 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.12.bib | https://aclanthology.org/2023.emnlp-main.12/ | @inproceedings{tan-etal-2023-sparse,
title = "Sparse Universal Transformer",
author = "Tan, Shawn and
Shen, Yikang and
Chen, Zhenfang and
Courville, Aaron and
Gan, Chuang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of th... | The Universal Transformer (UT) is a variant of the Transformer that shares parameters across its layers and is Turing-complete under certain assumptions. Empirical evidence also shows that UTs have better compositional generalization than Vanilla Transformers (VTs) in formal language tasks. The parameter-sharing also a... | [
"Tan, Shawn",
"Shen, Yikang",
"Chen, Zhenfang",
"Courville, Aaron",
"Gan, Chuang"
] | Sparse Universal Transformer | emnlp-main.12 | 2310.07096 | [
""
] | https://huggingface.co/papers/2310.07096 | 1 | 0 | 0 | 5 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.13.bib | https://aclanthology.org/2023.emnlp-main.13/ | @inproceedings{li-etal-2023-theory,
title = "Theory of Mind for Multi-Agent Collaboration via Large Language Models",
author = "Li, Huao and
Chong, Yu and
Stepputtis, Simon and
Campbell, Joseph and
Hughes, Dana and
Lewis, Charles and
Sycara, Katia",
editor = "Bouamo... | While Large Language Models (LLMs) have demonstrated impressive accomplishments in both reasoning and planning, their abilities in multi-agent collaborations remains largely unexplored. This study evaluates LLM-based agents in a multi-agent cooperative text game with Theory of Mind (ToM) inference tasks, comparing thei... | [
"Li, Huao",
"Chong, Yu",
"Stepputtis, Simon",
"Campbell, Joseph",
"Hughes, Dana",
"Lewis, Charles",
"Sycara, Katia"
] | Theory of Mind for Multi-Agent Collaboration via Large Language Models | emnlp-main.13 | 2310.10701 | [
"https://github.com/romanlee6/multi_LLM_comm"
] | https://huggingface.co/papers/2310.10701 | 0 | 0 | 0 | 7 | [] | [] | [
"agentharbor/agenta"
] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.14.bib | https://aclanthology.org/2023.emnlp-main.14/ | @inproceedings{litschko-etal-2023-establishing,
title = "Establishing Trustworthiness: Rethinking Tasks and Model Evaluation",
author = {Litschko, Robert and
M{\"u}ller-Eberstein, Max and
van der Goot, Rob and
Weber-Genzel, Leon and
Plank, Barbara},
editor = "Bouamor, Houda and
... | Language understanding is a multi-faceted cognitive capability, which the Natural Language Processing (NLP) community has striven to model computationally for decades. Traditionally, facets of linguistic intelligence have been compartmentalized into tasks with specialized model architectures and corresponding evaluatio... | [
"Litschko, Robert",
"M{\\\"u}ller-Eberstein, Max",
"van der Goot, Rob",
"Weber-Genzel, Leon",
"Plank, Barbara"
] | Establishing Trustworthiness: Rethinking Tasks and Model Evaluation | emnlp-main.14 | 2310.05442 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.15.bib | https://aclanthology.org/2023.emnlp-main.15/ | @inproceedings{himakunthala-etal-2023-lets,
title = "Let{'}s Think Frame by Frame with {VIP}: A Video Infilling and Prediction Dataset for Evaluating Video Chain-of-Thought",
author = "Himakunthala, Vaishnavi and
Ouyang, Andy and
Rose, Daniel and
He, Ryan and
Mei, Alex and
Lu,... | Despite exciting recent results showing vision-language systems{'} capacity to reason about images using natural language, their capacity for video reasoning remains underexplored. We motivate framing video reasoning as the sequential understanding of a small number of keyframes, thereby leveraging the power and robust... | [
"Himakunthala, Vaishnavi",
"Ouyang, Andy",
"Rose, Daniel",
"He, Ryan",
"Mei, Alex",
"Lu, Yujie",
"Sonar, Chinmay",
"Saxon, Michael",
"Wang, William"
] | Let's Think Frame by Frame with VIP: A Video Infilling and Prediction Dataset for Evaluating Video Chain-of-Thought | emnlp-main.15 | 2305.13903 | [
"https://github.com/vaishnavihimakunthala/vip"
] | https://huggingface.co/papers/2305.13903 | 2 | 0 | 0 | 9 | [] | [
"ryanhe/VIP"
] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.16.bib | https://aclanthology.org/2023.emnlp-main.16/ | @inproceedings{khondaker-etal-2023-gptaraeval,
title = "{GPTA}ra{E}val: A Comprehensive Evaluation of {C}hat{GPT} on {A}rabic {NLP}",
author = "Khondaker, Md Tawkat Islam and
Waheed, Abdul and
Nagoudi, El Moatez Billah and
Abdul-Mageed, Muhammad",
editor = "Bouamor, Houda and
Pin... | ChatGPT{'}s emergence heralds a transformative phase in NLP, particularly demonstrated through its excellent performance on many English benchmarks. However, the model{'}s efficacy across diverse linguistic contexts remains largely uncharted territory. This work aims to bridge this knowledge gap, with a primary focus o... | [
"Khondaker, Md Tawkat Islam",
"Waheed, Abdul",
"Nagoudi, El Moatez Billah",
"Abdul-Mageed, Muhammad"
] | GPTAraEval: A Comprehensive Evaluation of ChatGPT on Arabic NLP | emnlp-main.16 | 2305.14976 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral | |
https://aclanthology.org/2023.emnlp-main.17.bib | https://aclanthology.org/2023.emnlp-main.17/ | @inproceedings{li-etal-2023-dual-channel,
title = "Dual-Channel Span for Aspect Sentiment Triplet Extraction",
author = "Li, Pan and
Li, Ping and
Zhang, Kai",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empiric... | Aspect Sentiment Triplet Extraction (ASTE) is one of the compound tasks of fine-grained aspect-based sentiment analysis (ABSA), aiming at extracting the triplets of aspect terms, corresponding opinion terms and the associated sentiment orientation. Recent efforts in exploiting span-level semantic interaction shown supe... | [
"Li, Pan",
"Li, Ping",
"Zhang, Kai"
] | Dual-Channel Span for Aspect Sentiment Triplet Extraction | emnlp-main.17 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.18.bib | https://aclanthology.org/2023.emnlp-main.18/ | @inproceedings{li-zhang-2023-cultural,
title = "Cultural Concept Adaptation on Multimodal Reasoning",
author = "Li, Zhi and
Zhang, Yin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Languag... | Developing cultural adaptation methods is important, which can improve the model performance on the low-resource ones and provide more equitable opportunities for everyone to benefit from advanced technology. Past methods primarily focused on multilingual and multimodal capabilities, and the improvement of multicultura... | [
"Li, Zhi",
"Zhang, Yin"
] | Cultural Concept Adaptation on Multimodal Reasoning | emnlp-main.18 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral | |
https://aclanthology.org/2023.emnlp-main.19.bib | https://aclanthology.org/2023.emnlp-main.19/ | @inproceedings{samir-silfverberg-2023-understanding,
title = "Understanding Compositional Data Augmentation in Typologically Diverse Morphological Inflection",
author = "Samir, Farhan and
Silfverberg, Miikka",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "P... | Data augmentation techniques are widely used in low-resource automatic morphological inflection to address the issue of data sparsity. However, the full implications of these techniques remain poorly understood. In this study, we aim to shed light on the theoretical aspects of the data augmentation strategy StemCorrupt... | [
"Samir, Farhan",
"Silfverberg, Miikka"
] | Understanding Compositional Data Augmentation in Typologically Diverse Morphological Inflection | emnlp-main.19 | 2305.13658 | [
"https://github.com/smfsamir/understanding-augmentation-morphology"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral | |
https://aclanthology.org/2023.emnlp-main.20.bib | https://aclanthology.org/2023.emnlp-main.20/ | @inproceedings{li-etal-2023-evaluating,
title = "Evaluating Object Hallucination in Large Vision-Language Models",
author = "Li, Yifan and
Du, Yifan and
Zhou, Kun and
Wang, Jinpeng and
Zhao, Xin and
Wen, Ji-Rong",
editor = "Bouamor, Houda and
Pino, Juan and
B... | Inspired by the superior language abilities of large language models (LLM), large vision-language models (LVLM) have been recently proposed by integrating powerful LLMs for improving the performance on complex multimodal tasks. Despite the promising progress on LVLMs, we find that they suffer from object hallucinations... | [
"Li, Yifan",
"Du, Yifan",
"Zhou, Kun",
"Wang, Jinpeng",
"Zhao, Xin",
"Wen, Ji-Rong"
] | Evaluating Object Hallucination in Large Vision-Language Models | emnlp-main.20 | 2305.10355 | [
"https://github.com/rucaibox/pope"
] | https://huggingface.co/papers/2305.10355 | 0 | 0 | 0 | 6 | [
"google/paligemma-3b-pt-224",
"google/paligemma-3b-pt-896",
"google/paligemma-3b-mix-448",
"google/paligemma-3b-mix-224",
"google/paligemma-3b-pt-448",
"google/paligemma-3b-ft-ocrvqa-896",
"google/paligemma-3b-ft-vqav2-448",
"google/paligemma-3b-ft-refcoco-seg-896",
"google/paligemma-3b-ft-ocrvqa-44... | [
"HuggingFaceM4/POPE_modif"
] | [
"big-vision/paligemma-hf",
"manu/ColPali-demo",
"merve/paligemma-doc",
"merve/paligemma-tracking",
"agentsea/paligemma-waveui",
"Justinrune/LLaMA-Factory",
"Saee/vQA-exploration",
"dwb2023/model_explorer2",
"dwb2023/model_explorer4",
"rynmurdock/Blue_Tigers",
"beingcognitive/Image_to_Music",
"... | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.21.bib | https://aclanthology.org/2023.emnlp-main.21/ | @inproceedings{cao-etal-2023-event,
title = "Event Ontology Completion with Hierarchical Structure Evolution Networks",
author = "Cao, Pengfei and
Hao, Yupu and
Chen, Yubo and
Liu, Kang and
Xu, Jiexin and
Li, Huaijun and
Jiang, Xiaojian and
Zhao, Jun",
editor... | Traditional event detection methods require predefined event schemas. However, manually defining event schemas is expensive and the coverage of schemas is limited. To this end, some works study the event type induction (ETI) task, which discovers new event types via clustering. However, the setting of ETI suffers from ... | [
"Cao, Pengfei",
"Hao, Yupu",
"Chen, Yubo",
"Liu, Kang",
"Xu, Jiexin",
"Li, Huaijun",
"Jiang, Xiaojian",
"Zhao, Jun"
] | Event Ontology Completion with Hierarchical Structure Evolution Networks | emnlp-main.21 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.22.bib | https://aclanthology.org/2023.emnlp-main.22/ | @inproceedings{jin-etal-2023-parameter,
title = "Parameter-efficient Tuning for Large Language Model without Calculating Its Gradients",
author = "Jin, Feihu and
Zhang, Jiajun and
Zong, Chengqing",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Procee... | Fine-tuning all parameters of large language models (LLMs) requires significant computational resources and is time-consuming. Recent parameter-efficient tuning methods such as Adapter tuning, Prefix tuning, and LoRA allow for updating a small subset of parameters in large language models. However, they can only save a... | [
"Jin, Feihu",
"Zhang, Jiajun",
"Zong, Chengqing"
] | Parameter-efficient Tuning for Large Language Model without Calculating Its Gradients | emnlp-main.22 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.23.bib | https://aclanthology.org/2023.emnlp-main.23/ | @inproceedings{lei-huang-2023-discourse,
title = "Discourse Structures Guided Fine-grained Propaganda Identification",
author = "Lei, Yuanyuan and
Huang, Ruihong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical... | Propaganda is a form of deceptive narratives that instigate or mislead the public, usually with a political purpose. In this paper, we aim to identify propaganda in political news at two fine-grained levels: sentence-level and token-level. We observe that propaganda content is more likely to be embedded in sentences th... | [
"Lei, Yuanyuan",
"Huang, Ruihong"
] | Discourse Structures Guided Fine-grained Propaganda Identification | emnlp-main.23 | 2310.18544 | [
"https://github.com/yuanyuanlei-nlp/propaganda_emnlp_2023"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.24.bib | https://aclanthology.org/2023.emnlp-main.24/ | @inproceedings{minixhofer-etal-2023-compoundpiece,
title = "{C}ompound{P}iece: Evaluating and Improving Decompounding Performance of Language Models",
author = "Minixhofer, Benjamin and
Pfeiffer, Jonas and
Vuli{\'c}, Ivan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika... | While many languages possess processes of joining two or more words to create compound words, previous studies have been typically limited only to languages with excessively productive compound formation (e.g., German, Dutch) and there is no public dataset containing compound and non-compound words across a large numbe... | [
"Minixhofer, Benjamin",
"Pfeiffer, Jonas",
"Vuli{\\'c}, Ivan"
] | CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models | emnlp-main.24 | 2305.14214 | [
"https://github.com/bminixhofer/compoundpiece"
] | https://huggingface.co/papers/2305.14214 | 1 | 0 | 0 | 3 | [
"benjamin/compoundpiece",
"benjamin/compoundpiece-stage1"
] | [
"benjamin/compoundpiece"
] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.25.bib | https://aclanthology.org/2023.emnlp-main.25/ | @inproceedings{wang-etal-2023-improving,
title = "Improving Image Captioning via Predicting Structured Concepts",
author = "Wang, Ting and
Chen, Weidong and
Tian, Yuanhe and
Song, Yan and
Mao, Zhendong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
... | Having the difficulty of solving the semantic gap between images and texts for the image captioning task, conventional studies in this area paid some attention to treating semantic concepts as a bridge between the two modalities and improved captioning performance accordingly. Although promising results on concept pred... | [
"Wang, Ting",
"Chen, Weidong",
"Tian, Yuanhe",
"Song, Yan",
"Mao, Zhendong"
] | Improving Image Captioning via Predicting Structured Concepts | emnlp-main.25 | 2311.08223 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral | |
https://aclanthology.org/2023.emnlp-main.26.bib | https://aclanthology.org/2023.emnlp-main.26/ | @inproceedings{jones-etal-2023-gatitos,
title = "{GATITOS}: Using a New Multilingual Lexicon for Low-resource Machine Translation",
author = "Jones, Alexander and
Caswell, Isaac and
Firat, Orhan and
Saxena, Ishank",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika"... | Modern machine translation models and language models are able to translate without having been trained on parallel data, greatly expanding the set of languages that they can serve. However, these models still struggle in a variety of predictable ways, a problem that cannot be overcome without at least some trusted bil... | [
"Jones, Alexander",
"Caswell, Isaac",
"Firat, Orhan",
"Saxena, Ishank"
] | GATITOS: Using a New Multilingual Lexicon for Low-resource Machine Translation | emnlp-main.26 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.27.bib | https://aclanthology.org/2023.emnlp-main.27/ | @inproceedings{gao-etal-2023-continually,
title = "Continually Improving Extractive {QA} via Human Feedback",
author = "Gao, Ge and
Chen, Hung-Ting and
Artzi, Yoav and
Choi, Eunsol",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of... | We study continually improving an extractive question answering (QA) system via human user feedback. We design and deploy an iterative approach, where information-seeking users ask questions, receive model-predicted answers, and provide feedback. We conduct experiments involving thousands of user interactions under div... | [
"Gao, Ge",
"Chen, Hung-Ting",
"Artzi, Yoav",
"Choi, Eunsol"
] | Continually Improving Extractive QA via Human Feedback | emnlp-main.27 | 2305.12473 | [
"https://github.com/lil-lab/qa-from-hf"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.28.bib | https://aclanthology.org/2023.emnlp-main.28/ | @inproceedings{chen-etal-2023-using,
title = "Using Interpretation Methods for Model Enhancement",
author = "Chen, Zhuo and
Jiang, Chengyue and
Tu, Kewei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical ... | In the age of neural natural language processing, there are plenty of works trying to derive interpretations of neural models. Intuitively, when gold rationales exist during training, one can additionally train the model to match its interpretation with the rationales. However, this intuitive idea has not been fully ex... | [
"Chen, Zhuo",
"Jiang, Chengyue",
"Tu, Kewei"
] | Using Interpretation Methods for Model Enhancement | emnlp-main.28 | 2404.02068 | [
"https://github.com/chord-chen-30/uimer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.29.bib | https://aclanthology.org/2023.emnlp-main.29/ | @inproceedings{zhang-etal-2023-expression,
title = "An Expression Tree Decoding Strategy for Mathematical Equation Generation",
author = "Zhang, Wenqi and
Shen, Yongliang and
Nong, Qingpeng and
Tan, Zeqi and
Ma, Yanna and
Lu, Weiming",
editor = "Bouamor, Houda and
P... | Generating mathematical equations from natural language requires an accurate understanding of the relations among math expressions. Existing approaches can be broadly categorized into token-level and expression-level generation. The former treats equations as a mathematical language, sequentially generating math tokens... | [
"Zhang, Wenqi",
"Shen, Yongliang",
"Nong, Qingpeng",
"Tan, Zeqi",
"Ma, Yanna",
"Lu, Weiming"
] | An Expression Tree Decoding Strategy for Mathematical Equation Generation | emnlp-main.29 | 2310.09619 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.30.bib | https://aclanthology.org/2023.emnlp-main.30/ | @inproceedings{yang-etal-2023-bootstrapping,
title = "Bootstrapping Small {\&} High Performance Language Models with Unmasking-Removal Training Policy",
author = "Yang, Yahan and
Sulem, Elior and
Lee, Insup and
Roth, Dan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, ... | BabyBERTa, a language model trained on small-scale child-directed speech while none of the words are unmasked during training, has been shown to achieve a level of grammaticality comparable to that of RoBERTa-base, which is trained on 6,000 times more words and 15 times more parameters. Relying on this promising result... | [
"Yang, Yahan",
"Sulem, Elior",
"Lee, Insup",
"Roth, Dan"
] | Bootstrapping Small & High Performance Language Models with Unmasking-Removal Training Policy | emnlp-main.30 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.31.bib | https://aclanthology.org/2023.emnlp-main.31/ | @inproceedings{yoon-bak-2023-diversity,
title = "Diversity Enhanced Narrative Question Generation for Storybooks",
author = "Yoon, Hokeun and
Bak, JinYeong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Metho... | Question generation (QG) from a given context can enhance comprehension, engagement, assessment, and overall efficacy in learning or conversational environments. Despite recent advancements in QG, the challenge of enhancing or measuring the diversity of generated questions often remains unaddressed. In this paper, we i... | [
"Yoon, Hokeun",
"Bak, JinYeong"
] | Diversity Enhanced Narrative Question Generation for Storybooks | emnlp-main.31 | 2310.16446 | [
"https://github.com/hkyoon95/mqg"
] | https://huggingface.co/papers/2310.16446 | 0 | 0 | 0 | 2 | [] | [] | [] | 1 | Oral |
https://aclanthology.org/2023.emnlp-main.32.bib | https://aclanthology.org/2023.emnlp-main.32/ | @inproceedings{dong-etal-2023-debiasing,
title = "Debiasing Made State-of-the-art: Revisiting the Simple Seed-based Weak Supervision for Text Classification",
author = "Dong, Chengyu and
Wang, Zihan and
Shang, Jingbo",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
... | Recent advances in weakly supervised text classification mostly focus on designing sophisticated methods to turn high-level human heuristics into quality pseudo-labels. In this paper, we revisit the seed matching-based method, which is arguably the simplest way to generate pseudo-labels, and show that its power was gre... | [
"Dong, Chengyu",
"Wang, Zihan",
"Shang, Jingbo"
] | Debiasing Made State-of-the-art: Revisiting the Simple Seed-based Weak Supervision for Text Classification | emnlp-main.32 | 2305.14794 | [
"https://github.com/shwinshaker/simseed"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.33.bib | https://aclanthology.org/2023.emnlp-main.33/ | @inproceedings{chen-etal-2023-enhance,
title = "How to Enhance Causal Discrimination of Utterances: A Case on Affective Reasoning",
author = "Chen, Hang and
Yang, Xinyu and
Luo, Jing and
Zhu, Wenjing",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitl... | Our investigation into the Affective Reasoning in Conversation (ARC) task highlights the challenge of causal discrimination. Almost all existing models, including large language models (LLMs), excel at capturing semantic correlations within utterance embeddings but fall short in determining the specific causal relation... | [
"Chen, Hang",
"Yang, Xinyu",
"Luo, Jing",
"Zhu, Wenjing"
] | How to Enhance Causal Discrimination of Utterances: A Case on Affective Reasoning | emnlp-main.33 | 2305.02615 | [
"https://github.com/zodiark-ch/mater-of-our-emnlp2023-paper"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.34.bib | https://aclanthology.org/2023.emnlp-main.34/ | @inproceedings{si-etal-2023-compressing,
title = "Compressing and Debiasing Vision-Language Pre-Trained Models for Visual Question Answering",
author = "Si, Qingyi and
Liu, Yuanxin and
Lin, Zheng and
Fu, Peng and
Cao, Yanan and
Wang, Weiping",
editor = "Bouamor, Houda and... | Despite the excellent performance of vision-language pre-trained models (VLPs) on conventional VQA task, they still suffer from two problems: First, VLPs tend to rely on language biases in datasets and fail to generalize to out-of-distribution (OOD) data. Second, they are inefficient in terms of memory footprint and co... | [
"Si, Qingyi",
"Liu, Yuanxin",
"Lin, Zheng",
"Fu, Peng",
"Cao, Yanan",
"Wang, Weiping"
] | Compressing and Debiasing Vision-Language Pre-Trained Models for Visual Question Answering | emnlp-main.34 | 2210.14558 | [
"https://github.com/phoebussi/compress-robust-vqa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral | |
https://aclanthology.org/2023.emnlp-main.35.bib | https://aclanthology.org/2023.emnlp-main.35/ | @inproceedings{cole-etal-2023-selectively,
title = "Selectively Answering Ambiguous Questions",
author = "Cole, Jeremy and
Zhang, Michael and
Gillick, Daniel and
Eisenschlos, Julian and
Dhingra, Bhuwan and
Eisenstein, Jacob",
editor = "Bouamor, Houda and
Pino, Juan ... | Trustworthy language models should abstain from answering questions when they do not know the answer. However, the answer to a question can be unknown for a variety of reasons. Prior research has focused on the case in which the question is clear and the answer is unambiguous but possibly unknown. However, the answer t... | [
"Cole, Jeremy",
"Zhang, Michael",
"Gillick, Daniel",
"Eisenschlos, Julian",
"Dhingra, Bhuwan",
"Eisenstein, Jacob"
] | Selectively Answering Ambiguous Questions | emnlp-main.35 | 2305.14613 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.36.bib | https://aclanthology.org/2023.emnlp-main.36/ | @inproceedings{lee-etal-2023-temporal,
title = "Temporal Knowledge Graph Forecasting Without Knowledge Using In-Context Learning",
author = "Lee, Dong-Ho and
Ahrabian, Kian and
Jin, Woojeong and
Morstatter, Fred and
Pujara, Jay",
editor = "Bouamor, Houda and
Pino, Juan an... | Temporal knowledge graph (TKG) forecasting benchmarks challenge models to predict future facts using knowledge of past facts. In this paper, we develop an approach to use in-context learning (ICL) with large language models (LLMs) for TKG forecasting. Our extensive evaluation compares diverse baselines, including both ... | [
"Lee, Dong-Ho",
"Ahrabian, Kian",
"Jin, Woojeong",
"Morstatter, Fred",
"Pujara, Jay"
] | Temporal Knowledge Graph Forecasting Without Knowledge Using In-Context Learning | emnlp-main.36 | 2305.10613 | [
"https://github.com/usc-isi-i2/isi-tkg-icl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.37.bib | https://aclanthology.org/2023.emnlp-main.37/ | @inproceedings{hwang-etal-2023-knowledge,
title = "Knowledge Graph Compression Enhances Diverse Commonsense Generation",
author = "Hwang, EunJeong and
Thost, Veronika and
Shwartz, Vered and
Ma, Tengfei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
bookti... | Generating commonsense explanations requires reasoning about commonsense knowledge beyond what is explicitly mentioned in the context. Existing models use commonsense knowledge graphs such as ConceptNet to extract a subgraph of relevant knowledge pertaining to concepts in the input. However, due to the large coverage a... | [
"Hwang, EunJeong",
"Thost, Veronika",
"Shwartz, Vered",
"Ma, Tengfei"
] | Knowledge Graph Compression Enhances Diverse Commonsense Generation | emnlp-main.37 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.38.bib | https://aclanthology.org/2023.emnlp-main.38/ | @inproceedings{li-etal-2023-pragmatic,
title = "Pragmatic Reasoning Unlocks Quantifier Semantics for Foundation Models",
author = "Li, Yiyuan and
Menon, Rakesh and
Ghosh, Sayan and
Srivastava, Shashank",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
bookti... | Generalized quantifiers (e.g., $\textit{few}$, $\textit{most}$) are used to indicate the proportions predicates satisfy (for example, $\textit{some}$ apples are red). One way to interpret quantifier semantics is to explicitly bind these satisfactions with percentage scopes (e.g., 30{\%}-40{\%} of apples are red). This ... | [
"Li, Yiyuan",
"Menon, Rakesh",
"Ghosh, Sayan",
"Srivastava, Shashank"
] | Pragmatic Reasoning Unlocks Quantifier Semantics for Foundation Models | emnlp-main.38 | 2311.04659 | [
"https://github.com/nativeatom/presque"
] | https://huggingface.co/papers/2311.04659 | 0 | 0 | 0 | 4 | [] | [
"billli/QuRe"
] | [] | 1 | Oral |
https://aclanthology.org/2023.emnlp-main.39.bib | https://aclanthology.org/2023.emnlp-main.39/ | @inproceedings{liu-etal-2023-llm,
title = "{LLM}-{FP}4: 4-Bit Floating-Point Quantized Transformers",
author = "Liu, Shih-yang and
Liu, Zechun and
Huang, Xijie and
Dong, Pingcheng and
Cheng, Kwang-Ting",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
... | We propose LLM-FP4 for quantizing both weights and activations in large language models (LLMs) down to 4-bit floating-point values, in a post-training manner. Existing post-training quantization (PTQ) solutions are primarily integer-based and struggle with bit widths below 8 bits. Compared to integer quantization, floa... | [
"Liu, Shih-yang",
"Liu, Zechun",
"Huang, Xijie",
"Dong, Pingcheng",
"Cheng, Kwang-Ting"
] | LLM-FP4: 4-Bit Floating-Point Quantized Transformers | emnlp-main.39 | 2310.16836 | [
"https://github.com/nbasyl/llm-fp4"
] | https://huggingface.co/papers/2310.16836 | 3 | 13 | 0 | 5 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.40.bib | https://aclanthology.org/2023.emnlp-main.40/ | @inproceedings{tang-etal-2023-improving,
title = "Improving Biomedical Abstractive Summarisation with Knowledge Aggregation from Citation Papers",
author = "Tang, Chen and
Wang, Shun and
Goldsack, Tomas and
Lin, Chenghua",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, ... | Abstracts derived from biomedical literature possess distinct domain-specific characteristics, including specialised writing styles and biomedical terminologies, which necessitate a deep understanding of the related literature. As a result, existing language models struggle to generate technical summaries that are on p... | [
"Tang, Chen",
"Wang, Shun",
"Goldsack, Tomas",
"Lin, Chenghua"
] | Improving Biomedical Abstractive Summarisation with Knowledge Aggregation from Citation Papers | emnlp-main.40 | 2310.15684 | [
"https://github.com/tangg555/biomed-sum"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.41.bib | https://aclanthology.org/2023.emnlp-main.41/ | @inproceedings{ye-durrett-2023-explanation,
title = "Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting",
author = "Ye, Xi and
Durrett, Greg",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empiric... | Recent work has shown how to prompt large language models with explanations to obtain strong performance on textual reasoning tasks, i.e., the chain-of-thought paradigm. However, subtly different explanations can yield widely varying downstream task accuracy. Explanations that have not been {``}tuned{''} for a task, su... | [
"Ye, Xi",
"Durrett, Greg"
] | Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting | emnlp-main.41 | 2302.04813 | [
"https://github.com/xiye17/explselection"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.42.bib | https://aclanthology.org/2023.emnlp-main.42/ | @inproceedings{dale-etal-2023-halomi,
title = "{H}al{O}mi: A Manually Annotated Benchmark for Multilingual Hallucination and Omission Detection in Machine Translation",
author = "Dale, David and
Voita, Elena and
Lam, Janice and
Hansanti, Prangthip and
Ropers, Christophe and
Ka... | Hallucinations in machine translation are translations that contain information completely unrelated to the input. Omissions are translations that do not include some of the input information. While both cases tend to be catastrophic errors undermining user trust, annotated data with these types of pathologies is extre... | [
"Dale, David",
"Voita, Elena",
"Lam, Janice",
"Hansanti, Prangthip",
"Ropers, Christophe",
"Kalbassi, Elahe",
"Gao, Cynthia",
"Barrault, Loic",
"Costa-jussà, Marta"
] | HalOmi: A Manually Annotated Benchmark for Multilingual Hallucination and Omission Detection in Machine Translation | emnlp-main.42 | 2305.11746 | [
"https://github.com/facebookresearch/stopes"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.43.bib | https://aclanthology.org/2023.emnlp-main.43/ | @inproceedings{he-etal-2023-gradient,
title = "Gradient-based Gradual Pruning for Language-Specific Multilingual Neural Machine Translation",
author = "He, Dan and
Pham, Minh-Quang and
Ha, Thanh-Le and
Turchi, Marco",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalik... | Multilingual neural machine translation (MNMT) offers the convenience of translating between multiple languages with a single model. However, MNMT often suffers from performance degradation in high-resource languages compared to bilingual counterparts. This degradation is commonly attributed to parameter interference, ... | [
"He, Dan",
"Pham, Minh-Quang",
"Ha, Thanh-Le",
"Turchi, Marco"
] | Gradient-based Gradual Pruning for Language-Specific Multilingual Neural Machine Translation | emnlp-main.43 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.44.bib | https://aclanthology.org/2023.emnlp-main.44/ | @inproceedings{whitehouse-etal-2023-llm,
title = "{LLM}-powered Data Augmentation for Enhanced Cross-lingual Performance",
author = "Whitehouse, Chenxi and
Choudhury, Monojit and
Aji, Alham Fikri",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Procee... | This paper explores the potential of leveraging Large Language Models (LLMs) for data augmentation in multilingual commonsense reasoning datasets where the available training data is extremely limited. To achieve this, we utilise several LLMs, namely Dolly-v2, StableVicuna, ChatGPT, and GPT-4, to augment three datasets... | [
"Whitehouse, Chenxi",
"Choudhury, Monojit",
"Aji, Alham Fikri"
] | LLM-powered Data Augmentation for Enhanced Cross-lingual Performance | emnlp-main.44 | 2305.14288 | [
"https://github.com/mbzuai-nlp/gen-X"
] | https://huggingface.co/papers/2305.14288 | 2 | 0 | 0 | 3 | [] | [
"coref-data/gen_winograd_raw"
] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.45.bib | https://aclanthology.org/2023.emnlp-main.45/ | @inproceedings{wang-etal-2023-prompt-based,
title = "Prompt-based Logical Semantics Enhancement for Implicit Discourse Relation Recognition",
author = "Wang, Chenxu and
Jian, Ping and
Huang, Mu",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedi... | Implicit Discourse Relation Recognition (IDRR), which infers discourse relations without the help of explicit connectives, is still a crucial and challenging task for discourse parsing. Recent works tend to exploit the hierarchical structure information from the annotated senses, which demonstrate enhanced discourse re... | [
"Wang, Chenxu",
"Jian, Ping",
"Huang, Mu"
] | Prompt-based Logical Semantics Enhancement for Implicit Discourse Relation Recognition | emnlp-main.45 | 2311.00367 | [
"https://github.com/lalalamdbf/plse_idrr"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral | |
https://aclanthology.org/2023.emnlp-main.46.bib | https://aclanthology.org/2023.emnlp-main.46/ | @inproceedings{chung-yu-2023-vlis,
title = "{VLIS}: Unimodal Language Models Guide Multimodal Language Generation",
author = "Chung, Jiwan and
Yu, Youngjae",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Metho... | Multimodal language generation, which leverages the synergy of language and vision, is a rapidly expanding field. However, existing vision-language models face challenges in tasks that require complex linguistic understanding. To address this issue, we introduce Visual-Language models as Importance Sampling weights (VL... | [
"Chung, Jiwan",
"Yu, Youngjae"
] | VLIS: Unimodal Language Models Guide Multimodal Language Generation | emnlp-main.46 | 2310.09767 | [
"https://github.com/jiwanchung/vlis"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.47.bib | https://aclanthology.org/2023.emnlp-main.47/ | @inproceedings{suresh-etal-2023-conceptual,
title = "Conceptual structure coheres in human cognition but not in large language models",
author = "Suresh, Siddharth and
Mukherjee, Kushin and
Yu, Xizheng and
Huang, Wei-Chun and
Padua, Lisa and
Rogers, Timothy",
editor = "Bou... | Neural network models of language have long been used as a tool for developing hypotheses about conceptual representation in the mind and brain. For many years, such use involved extracting vector-space representations of words and using distances among these to predict or understand human behavior in various semantic ... | [
"Suresh, Siddharth",
"Mukherjee, Kushin",
"Yu, Xizheng",
"Huang, Wei-Chun",
"Padua, Lisa",
"Rogers, Timothy"
] | Conceptual structure coheres in human cognition but not in large language models | emnlp-main.47 | 2304.02754 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.48.bib | https://aclanthology.org/2023.emnlp-main.48/ | @inproceedings{feng-etal-2023-towards,
title = "Towards {LLM}-driven Dialogue State Tracking",
author = "Feng, Yujie and
Lu, Zexin and
Liu, Bo and
Zhan, Liming and
Wu, Xiao-Ming",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceeding... | Dialogue State Tracking (DST) is of paramount importance in ensuring accurate tracking of user goals and system actions within task-oriented dialogue systems. The emergence of large language models (LLMs) such as GPT3 and ChatGPT has sparked considerable interest in assessing their efficacy across diverse applications.... | [
"Feng, Yujie",
"Lu, Zexin",
"Liu, Bo",
"Zhan, Liming",
"Wu, Xiao-Ming"
] | Towards LLM-driven Dialogue State Tracking | emnlp-main.48 | 2310.14970 | [
"https://github.com/woodscene/ldst"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.49.bib | https://aclanthology.org/2023.emnlp-main.49/ | @inproceedings{zhang-etal-2023-learning-language,
title = "Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis",
author = "Zhang, Haoyu and
Wang, Yu and
Yin, Guanghao and
Liu, Kejun and
Liu, Yuanyuan and
Yu, Tianshu",
editor = ... | Though Multimodal Sentiment Analysis (MSA) proves effective by utilizing rich information from multiple sources (*e.g.,* language, video, and audio), the potential sentiment-irrelevant and conflicting information across modalities may hinder the performance from being further improved. To alleviate this, we present Ada... | [
"Zhang, Haoyu",
"Wang, Yu",
"Yin, Guanghao",
"Liu, Kejun",
"Liu, Yuanyuan",
"Yu, Tianshu"
] | Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis | emnlp-main.49 | 2310.05804 | [
"https://github.com/Haoyu-ha/ALMT"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.emnlp-main.50.bib | https://aclanthology.org/2023.emnlp-main.50/ | @inproceedings{pantazopoulos-etal-2023-multitask,
title = "Multitask Multimodal Prompted Training for Interactive Embodied Task Completion",
author = "Pantazopoulos, Georgios and
Nikandrou, Malvina and
Parekh, Amit and
Hemanthage, Bhathiya and
Eshghi, Arash and
Konstas, Ioanni... | Interactive and embodied tasks pose at least two fundamental challenges to existing Vision {\&} Language (VL) models, including 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation. To tackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a unified enco... | [
"Pantazopoulos, Georgios",
"Nikandrou, Malvina",
"Parekh, Amit",
"Hemanthage, Bhathiya",
"Eshghi, Arash",
"Konstas, Ioannis",
"Rieser, Verena",
"Lemon, Oliver",
"Suglia, Alessandro"
] | Multitask Multimodal Prompted Training for Interactive Embodied Task Completion | emnlp-main.50 | 2311.04067 | [
""
] | https://huggingface.co/papers/2311.04067 | 1 | 1 | 0 | 9 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.51.bib | https://aclanthology.org/2023.emnlp-main.51/ | @inproceedings{liu-etal-2023-afraid,
title = "We{'}re Afraid Language Models Aren{'}t Modeling Ambiguity",
author = "Liu, Alisa and
Wu, Zhaofeng and
Michael, Julian and
Suhr, Alane and
West, Peter and
Koller, Alexander and
Swayamdipta, Swabha and
Smith, Noah and... | Ambiguity is an intrinsic feature of natural language. Managing ambiguity is a key part of human language understanding, allowing us to anticipate misunderstanding as communicators and revise our interpretations as listeners. As language models are increasingly employed as dialogue interfaces and writing aids, handling... | [
"Liu, Alisa",
"Wu, Zhaofeng",
"Michael, Julian",
"Suhr, Alane",
"West, Peter",
"Koller, Alexander",
"Swayamdipta, Swabha",
"Smith, Noah",
"Choi, Yejin"
] | We're Afraid Language Models Aren't Modeling Ambiguity | emnlp-main.51 | 2304.14399 | [
"https://github.com/alisawuffles/ambient"
] | https://huggingface.co/papers/2304.14399 | 1 | 0 | 0 | 9 | [] | [
"metaeval/ambient"
] | [] | 1 | Poster |
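The preview rows above follow the column schema listed in the header (`title`, `arxiv_id`, `paper_page_exists_pre_conf`, `type`, …). As a minimal sketch of working with that schema — using two illustrative rows copied from the preview rather than a full dataset loader — the rows can be filtered and tallied like this:

```python
from collections import Counter

# Two sample rows mirroring the preview's schema (illustrative subset of
# fields, not the full dataset).
rows = [
    {
        "title": "Diversity Enhanced Narrative Question Generation for Storybooks",
        "arxiv_id": "2310.16446",
        "paper_page_exists_pre_conf": 1,
        "type": "Oral",
    },
    {
        "title": "Selectively Answering Ambiguous Questions",
        "arxiv_id": "2305.14613",
        "paper_page_exists_pre_conf": 0,
        "type": "Poster",
    },
]

# Keep only papers that already had a Hugging Face paper page before the conference.
with_page = [r for r in rows if r["paper_page_exists_pre_conf"] == 1]

# Tally presentation types across all rows.
type_counts = Counter(r["type"] for r in rows)

print(len(with_page))       # rows with a pre-conference paper page -> 1
print(type_counts["Oral"])  # oral presentations in the sample -> 1
```

The same pattern applies unchanged to the full dataset once loaded, since each row is a flat record with these fields.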