We meet on a weekly basis to discuss recent research papers. The schedule and the papers presented are listed below.

Schedule

Date Speaker Title
20/12/24 Huda  
13/12/24 Procheta  
06/12/24 Masaru Isonuma  
29/11/24 Taichi  
22/11/24 Zhidong Ling  
15/11/24 Gaifan  
08/11/24 Danushka  
01/11/24 Mike Large Language Model Is Not a Good Few-shot Information Extractor, but a Good Reranker for Hard Samples!
25/10/24 Tulika Rethinking Task-Oriented Dialogue Systems: From Complex Modularity to Zero-Shot Autonomous Agent
18/10/24 Tianhui Commonsense Knowledge Editing Based on Free-Text in LLMs
11/10/24 Lingfang Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small
04/10/24 Jodie Debiasing Vision-Language Models via Biased Prompts
27/09/24 Jack “I’d Like to Have an Argument, Please”: Argumentative Reasoning in Large Language Models
20/09/24 Huda Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity
13/09/24 Procheta Skill-Mix: a Flexible and Expandable Family of Evaluations for AI models
06/09/24 Danushka Mission Impossible Language Models
26/07/24 Mike ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems
19/07/24 Tulika SummEdits: Measuring LLM Ability at Factual Reasoning Through The Lens of Summarization
12/07/24 Tianhui UNcommonsense Reasoning: Abductive Reasoning about Uncommon Situations
28/06/24 Lingfang What is Your Data Worth to GPT? LLM-Scale Data Valuation with Influence Functions
21/06/24 Jodie CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark
14/06/24 No meeting due to EMNLP submission deadline  
07/06/24 Jack Automated Claim Matching with Large Language Models: Empowering Fact-Checkers in the Fight Against Misinformation
31/05/24 Huda ANALOGYKB: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base
24/05/24 Procheta Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small
17/05/24 Danushka Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements
10/05/24 No meeting due to PhD workshop  
03/05/24 Mike KnowCoder: Coding Structured Knowledge into LLMs for Universal Information Extraction
26/04/24 No meeting due to NLP4SocialGood Symposium  
19/04/24 Tianhui How Easily do Irrelevant Inputs Skew the Responses of Large Language Models?
12/04/24 Tulika DisorBERT: A Double Domain Adaptation Model for Detecting Signs of Mental Disorders in Social Media
05/04/24 Lingfang Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
22/03/24 Tianhui Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation
15/03/24 Jodie Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models
08/03/24 Jack From Text to Structure: Using Large Language Models to Support the Development of Legal Expert Systems
01/03/24 Danushka In-Contextual Gender Bias Suppression for Large Language Models
23/02/24 Junjie Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?
09/02/24 Huda PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization
02/02/24 Procheta Exploiting Language Characteristics for Legal Domain-Specific Language Model Pretraining
26/01/24 Mike G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment
19/01/24 Tulika Towards Understanding Omission in Dialogue Summarization
12/01/24 Saleem Towards Interpretable Mental Health Analysis with Large Language Models
15/12/23 Danushka, Meng, Jodie EMNLP-23 overview
08/12/23 No meeting due to EMNLP  
01/12/23 Lingfang Interpreting Language Models with Contrastive Explanations
24/11/23 Danushka An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels
17/11/23 Tianhui IJCNLP-AACL 2023 Overview
10/11/23 Jodie Assessing Multilingual Fairness in Pre-trained Multimodal Representations
03/11/23 No meeting due to IJCNLP-AACL  
27/10/23 Danushka Discovering Universal Geometry in Embeddings with ICA
20/10/23 Huda Direct Fact Retrieval from Knowledge Graphs without Entity Linking
13/10/23 Tianhui  
06/10/23 Tulika  
29/09/23 Procheta Self-Refine: Iterative Refinement with Self-Feedback
22/09/23 Mike Should You Mask 15% in Masked Language Modeling?
14/09/23 Jodie Having Beer after Prayer? Measuring Cultural Bias in Large Language Models
07/09/23 Danushka XL-LEXEME: WiC Pretrained Model for Cross-Lingual LEXical sEMantic changE
20/07/23 Danushka (ACL update)  
13/07/23 No meeting (ACL@Toronto)  
06/07/23 Huda Multi-CLS BERT: An Efficient Alternative to Traditional Ensembling (poster and presentation)
29/06/23 Tianhui LoRA: Low-Rank Adaptation of Large Language Models (https://arxiv.org/abs/2106.09685)
22/06/23 No meeting (EMNLP preparations)  
15/06/23 Procheta Tutorial rehearsal
08/06/23 No meeting due to NLP Symposium  
01/06/23 Danushka Diversifying Content Generation for Commonsense Reasoning with Mixture of Knowledge Graph Experts
25/05/23 Mike Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
18/05/23 Jodie We’re Afraid Language Models Aren’t Modeling Ambiguity
11/05/23 Huda Decoupling Mixture-of-Graphs: Unseen Relational Learning for Knowledge Graph Completion by Fusing Ontology and Textual Experts
04/05/23 Tianhui Multimodal Chain-of-Thought Reasoning in Language Models
27/04/23 Danushka Generated Knowledge Prompting for Commonsense Reasoning
20/04/23 Procheta “Why Should I Trust You?”: Explaining the Predictions of Any Classifier
13/04/23 Tulika ECIR conference update
06/04/23 Mike Eider: Empowering Document-level Relation Extraction with Efficient Evidence Extraction and Inference-stage Fusion
30/03/23 Jodie XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models
23/03/23 Huda Do Pre-trained Models Benefit Knowledge Graph Completion? A Reliable Evaluation and a Reasonable Approach
16/03/23 Xiaohang Time-Aware Language Models as Temporal Knowledge Bases
09/03/23 Gaifan Compressing Sentence Representation for Semantic Retrieval via Homomorphic Projective Distillation
02/03/23 Tianhui Metric-guided Distillation: Distilling Knowledge from the Metric to Ranker and Retriever for Generative Commonsense Reasoning
16/02/23 Tulika Aspect-Controllable Opinion Summarization
09/02/23 Procheta Dialog Inpainting: Turning Documents into Dialogs
02/02/23 Danushka Accelerating Large-Scale Inference with Anisotropic Vector Quantization
26/01/23 Jodie Retrieval Augmentation for Commonsense Reasoning: A Unified Approach
19/01/23 Huda FPC: Fine-tuning with Prompt Curriculum for Relation Extraction
05/01/23 Mike Generative Knowledge Graph Construction: A Review
22/12/22 Tulika Many Hands Make Light Work: Using Essay Traits to Automatically Score Essays
15/12/22 Jodie and Danushka EMNLP 2022 Summary
01/12/22 Jodie ESimCSE: Enhanced Sample Building Method for Contrastive Learning of Unsupervised Sentence Embedding
24/11/22 Aida MelBERT: Metaphor Detection via Contextualized Late Interaction using Metaphorical Identification Theories
17/11/22 Huda Deep Bidirectional Language-Knowledge Graph Pretraining
10/11/22 Danushka Learning To Retrieve Prompts for In-Context Learning
03/11/22 Tulika Dr. Can See: Towards a Multi-modal Disease Diagnosis Virtual Assistant
27/10/22 Mike An End-to-end Model for Entity-level Relation Extraction using Multi-instance Learning
13/10/22 Tianhui Enhancing Topic-to-Essay Generation with External Commonsense Knowledge
06/10/22 Huda Document-Level Relation Extraction with Sentences Importance Estimation and Focusing
29/09/22 Procheta Bridging the Gap Between Indexing and Retrieval for Differentiable Search Index with Query Generation
22/09/22 Danushka IR like a SIR: Sense-enhanced Information Retrieval for Multiple Languages
08/09/22 Danushka Word2Box: Capturing Set-Theoretic Semantics of Words using Box Embeddings
23/08/22 Jodie MetaICL: Learning to Learn In Context
02/08/22 Huda On Transferability of Prompt Tuning for Natural Language Processing and Learning to Transfer Prompts for Text Generation
26/07/22 Michael Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks
12/07/22 Danushka Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
05/07/22 Angrosh IsoBN: Fine-Tuning BERT with Isotropic Batch Normalization
28/06/22 Jodie Nibbling at the Hard Core of Word Sense Disambiguation
14/06/22 Huda Learning to Borrow – Relation Representation for Without-Mention Entity-Pairs for Knowledge Graph Completion
07/06/22 Michael Meta-learning via Language Model In-context Tuning
17/05/22 Danushka ACL summary
10/05/22 Angrosh Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction
26/05/22 Huda Learning to Borrow – Relation Representation for Without-Mention Entity-Pairs for Knowledge Graph Completion
19/04/22 Jodie Sense Embeddings are also Biased – Evaluating Social Biases in Static and Contextualised Sense Embeddings
12/04/22 Michael Position-based Prompting for Health Outcome Generation
05/04/22 Danushka Generating Datasets with Pretrained Language Models
29/03/22 Huda Exploring Task Difficulty for Few-Shot Relation Extraction
22/03/22 Jodie Word sense disambiguation: Towards interactive context exploitation from both word and sense perspectives
15/03/22 Michael CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP
08/03/22 Danushka PairRE: Knowledge Graph Embeddings via Paired Relation Vectors
01/03/22 Angrosh Relation Classification with Entity Type Restriction
22/02/22 Huda Entity Concept-enhanced Few-shot Relation Extraction
15/02/22 Danushka Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning
08/02/22 Michael Making Pre-trained Language Models Better Few-shot Learners
02/02/22 Jodie ESC: Redesigning WSD with Extractive Sense Comprehension
26/01/22 Huda CoLAKE: Contextualized Language and Knowledge Embedding
19/01/22 Danushka Knowledge Base Completion Meets Transfer Learning
24/11/21 Jodie ConSeC: Word Sense Disambiguation as Continuous Sense Comprehension
17/11/21 Michael When does Further Pre-training MLM Help? An Empirical Study on Task-Oriented Dialog Pre-training
03/11/21 Angrosh Graph Transformer Networks
27/10/21 Huda Distilling Relation Embeddings from Pre-trained Language Models
20/10/21 Danushka Dynamic Contextualized Word Embeddings
13/10/21 Jodie MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models
06/10/21 Michael AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
29/09/21 James Does Knowledge Distillation Really Work?
22/09/21 Huda ZS-BERT: Towards Zero-Shot Relation Extraction with Attribute Representation Learning
13/09/21 Angrosh Dependency-driven Relation Extraction with Attentive Graph Convolutional Networks
08/09/21 Danushka Is Sparse Attention more Interpretable?
01/09/21 Jodie Sparsity Makes Sense: Word Sense Disambiguation Using Sparse Contextualized Word Representations
25/08/21 Michael Prefix-Tuning: Optimizing Continuous Prompts for Generation
18/08/21 James Lookahead: A far-sighted alternative of magnitude-based pruning
11/08/21 Huda ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning
04/08/21 Angrosh VGCN-BERT: Augmenting BERT with Graph Embedding for Text Classification