We meet on a weekly basis to discuss recent research papers. Below is the schedule and the list of papers presented.

Schedule

Date Speaker Title
16/08/22 Angrosh  
09/08/22 Jodie  
02/08/22 Huda On Transferability of Prompt Tuning for Natural Language Processing and Learning to Transfer Prompts for Text Generation
26/07/22 Michael Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks
12/07/22 Danushka Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
05/07/22 Angrosh IsoBN: Fine-Tuning BERT with Isotropic Batch Normalization
28/06/22 Jodie Nibbling at the Hard Core of Word Sense Disambiguation
14/06/22 Huda Learning to Borrow: Relation Representation for Without-Mention Entity-Pairs for Knowledge Graph Completion
07/06/22 Michael Meta-learning via Language Model In-context Tuning
17/05/22 Danushka ACL summary
10/05/22 Angrosh Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction
26/04/22 Huda Learning to Borrow: Relation Representation for Without-Mention Entity-Pairs for Knowledge Graph Completion
19/04/22 Jodie Sense Embeddings are also Biased – Evaluating Social Biases in Static and Contextualised Sense Embeddings
12/04/22 Michael Position-based Prompting for Health Outcome Generation
05/04/22 Danushka Generating Datasets with Pretrained Language Models
29/03/22 Huda Exploring Task Difficulty for Few-Shot Relation Extraction
22/03/22 Jodie Word Sense Disambiguation: Towards Interactive Context Exploitation from Both Word and Sense Perspectives
15/03/22 Michael CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP
08/03/22 Danushka PairRE: Knowledge Graph Embeddings via Paired Relation Vectors
01/03/22 Angrosh Relation Classification with Entity Type Restriction
22/02/22 Huda Entity Concept-enhanced Few-shot Relation Extraction
15/02/22 Danushka Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning
08/02/22 Michael Making Pre-trained Language Models Better Few-shot Learners
02/02/22 Jodie ESC: Redesigning WSD with Extractive Sense Comprehension
26/01/22 Huda CoLAKE: Contextualized Language and Knowledge Embedding
19/01/22 Danushka Knowledge Base Completion Meets Transfer Learning
24/11/21 Jodie ConSeC: Word Sense Disambiguation as Continuous Sense Comprehension
17/11/21 Michael When does Further Pre-training MLM Help? An Empirical Study on Task-Oriented Dialog Pre-training
03/11/21 Angrosh Graph Transformer Networks
27/10/21 Huda Distilling Relation Embeddings from Pre-trained Language Models
20/10/21 Danushka Dynamic Contextualised Word Embeddings
13/10/21 Jodie MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models
06/10/21 Michael AUTOPROMPT: Eliciting Knowledge from Language Models with Automatically Generated Prompts
29/09/21 James Does Knowledge Distillation Really Work?
22/09/21 Huda ZS-BERT: Towards Zero-Shot Relation Extraction with Attribute Representation Learning
13/09/21 Angrosh Dependency-driven Relation Extraction with Attentive Graph Convolutional Networks
08/09/21 Danushka Is Sparse Attention More Interpretable?
01/09/21 Jodie Sparsity Makes Sense: Word Sense Disambiguation Using Sparse Contextualized Word Representations
25/08/21 Michael Prefix-Tuning: Optimizing Continuous Prompts for Generation
18/08/21 James Lookahead: A far-sighted alternative of magnitude-based pruning
11/08/21 Huda ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning
04/08/21 Angrosh VGCN-BERT: Augmenting BERT with Graph Embedding for Text Classification
