The seminar will cover two topics:
(1) Argument Mining and Its Applications
Time: 8h40-10h10, Thursday, 12th January 2023
Speaker: Dr. Park Jun Seok, NAVER Expert
Slides: Argument-mining
Abstract:
Easy access to the internet has led to a steep increase in the amount of user comments written by inexperienced writers. While such comments may provide useful information, readers are left with the burden of wading through piles of uninformative text to find what they need. One popular approach to this problem is to build systems that help readers by recommending well-written comments or summarizing the available information. We, however, consider the problem from the perspective of the commenters: can we build a system that guides commenters to write “better” comments? Such a system would improve the overall quality of textual content available on the internet and is complementary to existing approaches that help readers.
This talk will present the core components of an automated system that assists commenters in constructing better-structured arguments in their comments: (1) a monological argumentation model that captures the evaluability of arguments in the online setting, (2) a classifier that determines an appropriate type of support (reason or evidence) for the propositions that make up a user comment, and (3) a classifier that identifies the support relations present in user comments. I will also briefly discuss how this system applies to broader domains such as automated essay grading, summarization, and recommendation systems.
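For readers who would like a concrete picture before the talk, below is a minimal Python sketch of how component (2) could be framed as ordinary text classification. The label set, the choice of BERT, and the untrained classification head are illustrative assumptions, not the speaker's actual system; a real classifier would be fine-tuned on annotated propositions.

```python
# Illustrative sketch only: proposition -> appropriate support type.
# Assumes Hugging Face Transformers and PyTorch are installed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["reason", "evidence"]  # assumed label set, taken from the abstract

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)  # the classification head is randomly initialised until fine-tuned

def predict_support_type(proposition: str) -> str:
    """Predict which kind of support a single proposition calls for."""
    inputs = tokenizer(proposition, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(predict_support_type("The new policy will reduce commuting time for most residents."))
```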
References:
Argument Mining: A Survey (CL 2019)
Argument Mining for Review Helpfulness Prediction (EMNLP 2022)
(2) Self-supervised Vision and Language Pre-Training
Time: 10h15-11h45, Thursday, 12th January 2023
Speaker: Dr. Kim Won Jae, NAVER Expert
Slides: Self-Supervised-Vision-and-Language-Pre-Training
Abstract:
This talk will cover self-supervised vision-and-language pre-training. The topic is broadly divided into two parts: self-supervised learning (SSL) in the first part and vision-and-language pre-training (VLP) in the second. At the beginning of the talk, we will look at how SSL has been done in individual modalities such as images and text, and later in the talk we will see how these methods are applied to VLP.
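To give a concrete feel for what text-side SSL looks like, below is a minimal Python sketch of BERT-style masked language modeling, the objective behind the BERT reference listed further down. It is an illustrative example using Hugging Face Transformers, not material from the speaker's slides.

```python
# Illustrative sketch only: compute a masked-language-modeling loss,
# i.e. hide some tokens and train the model to recover them.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = "Self-supervised learning creates labels from the data itself."
inputs = tokenizer(text, return_tensors="pt")
labels = inputs["input_ids"].clone()

# Randomly mask about 15% of the non-special tokens.
special = torch.tensor(
    tokenizer.get_special_tokens_mask(labels[0].tolist(), already_has_special_tokens=True),
    dtype=torch.bool,
)
mask = (torch.rand(labels.shape) < 0.15) & ~special
if not mask.any():
    mask[0, 1] = True  # make sure at least one real token is masked

inputs["input_ids"][mask] = tokenizer.mask_token_id
labels[~mask] = -100  # compute the loss only on masked positions

loss = model(**inputs, labels=labels).loss
print(f"MLM loss on this sentence: {loss.item():.3f}")
```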
Reading the following papers in advance will help you follow the talk.
References:
a) Language SSL:
- ELMo: Peters, Matthew E. et al. “Deep Contextualized Word Representations.” NAACL (2018).
- GPT: Radford, Alec and Karthik Narasimhan. “Improving Language Understanding by Generative Pre-Training.” (2018).
- BERT: Devlin, Jacob et al. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” NAACL (2019).
b) Vision SSL:
- Jigsaw: Noroozi, Mehdi and Paolo Favaro. “Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles.” ECCV (2016).
- InstDisc: Wu, Zhirong et al. “Unsupervised Feature Learning via Non-parametric Instance Discrimination.” CVPR (2018).
- MoCo: He, Kaiming et al. “Momentum Contrast for Unsupervised Visual Representation Learning.” CVPR (2020).
- MAE: He, Kaiming et al. “Masked Autoencoders Are Scalable Vision Learners.” CVPR (2022).
c) Vision-and-Language:
- BUTD: Anderson, Peter et al. “Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering.” CVPR (2018).
- ViLBERT: Lu, Jiasen et al. “ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks.” NeurIPS (2019).
- UNITER: Chen, Yen-Chun et al. “UNITER: UNiversal Image-TExt Representation Learning.” ECCV (2020).
- ViLT: Kim, Wonjae et al. “ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision.” ICML (2021).
The seminar will be delivered online via MS Teams. We would like to invite all HUST students to join the seminar and gain deeper knowledge of NLP and ML. Here is the registration link: https://forms.office.com/r/wBP34K4U7f
QR code: