Natural language processing (NLP) is a subfield of linguistics and artificial intelligence that aims to help computers understand human language and interact with humans. With the rapid development of data science, NLP has made significant progress in creating applications that bring many benefits to everyday life. Applications of NLP include machine translation, chatbots, social media monitoring, survey analysis, targeted advertising, hiring and recruitment, voice assistants, and spelling correction.
Our research group focuses on exploiting machine learning and deep learning techniques, combined with NLP features and other knowledge, to develop high-performance NLP applications. We also investigate methods to construct knowledge bases and taxonomies for specific NLP tasks, and to create large datasets for training NLP models. See the slides here for more detail.
Contact: Assoc. Prof. Le Thanh Huong, Email: firstname.lastname@example.org
We exploit machine learning and deep learning techniques, combined with NLP features, to research and develop NLP applications in the following directions:
- Information extraction: We investigate several tasks, including named entity recognition, relation extraction, and event extraction.
- Chatbot/question answering: Generating answers to questions from different sources such as paragraphs, knowledge bases, and databases. Chatbots and question answering systems are used in many real-life applications such as customer service and study counseling. We address several problems in this direction, including intent classification, slot tagging, question similarity, and dialog management.
- Speech technologies: Focusing on expressive speech synthesis, state-of-the-art speech synthesis, automatic speech recognition, speaker verification, and speaker identification.
- Text summarization: Summarizing single or multiple documents, either by selecting important sentences (extractive summarization) or by generating new summaries with condensed content (abstractive summarization). We also study query-based summarization, in which the answer is produced by summarizing all the documents returned for a query.
- Sentiment analysis: Detecting positive or negative sentiment in text. Sentiment analysis is often used by businesses to gauge sentiment in social data and to understand their customers.
- Machine translation: We concentrate on several aspects: developing multilingual neural machine translation; improving system performance (accuracy, speed); dealing with low-resource languages; and automatically building parallel corpora for training machine translation systems.
- Plagiarism detection: Automatically identifying fragments in a suspicious document that were copied from source documents. We are also concerned with cross-language plagiarism detection, where the source of the plagiarism is in a different language.
- Vietnamese spelling correction: Spelling and grammatical errors make input texts difficult to understand, and if such documents are used for training, they degrade model quality. Real-world NLP texts often contain many typos, so data should be cleaned before use. We focus on correcting spelling errors in two data types: academic text and social data.
- Synonym discovery from multiple sources: This project aims to discover synonyms from multiple Web data sources. Synonyms take the form of various aliases of the same entity, or equivalent representations of attribute relationships. The main sources are user interactions with web search engines (e.g., web search logs), semi-structured data (e.g., web tables), and unstructured data (e.g., web documents).
- Weakly supervised aspect extraction: This project aims to extract domain aspects from user-generated content, an essential step in opinion mining. It tackles the bottleneck of data annotation by studying the paradigm of weak supervision, empowered by neural representations and neural learning frameworks.
- Weakly supervised taxonomy construction: A taxonomy is a classification scheme that helps organize and index knowledge. Developing and maintaining a taxonomy is generally a labor-intensive task requiring significant resources and expertise. We aim to explore weak supervision to automate and accelerate this process while keeping manual effort to a minimum.
- Knowledge base construction from semi-structured documents: Today, our data universe is growing exponentially, and more than 70% of this data is unstructured or semi-structured (e.g., Word, PDF, and Excel files). Such data commonly goes untouched because it is not in the right form for data analytics software. Our objective is to develop natural language understanding methods to extract valuable information from semi-structured documents. We can then construct knowledge bases that benefit further analytics and beyond.
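To make the extractive side of text summarization concrete, here is a minimal frequency-based sketch: sentences are scored by the average corpus frequency of their words and the top-k are kept in their original order. This is a toy baseline, not our group's method; real systems use far richer features or neural sentence encoders.

```python
# Toy extractive summarizer: score each sentence by the average corpus
# frequency of its words, then keep the top-k sentences in original order.
import re
from collections import Counter

def summarize(text: str, k: int = 1) -> list[str]:
    # Naive sentence splitting on end-of-sentence punctuation.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sent: str) -> float:
        toks = re.findall(r"\w+", sent.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:k])
    return [s for s in sentences if s in top]  # preserve document order

print(summarize("Cats sleep. Cats eat fish. Dogs bark loudly today.", k=1))
```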
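The sentiment analysis direction above can be illustrated with the simplest possible baseline: a lexicon lookup that counts positive and negative words. The word lists here are illustrative placeholders, not a real sentiment lexicon, and production systems would instead use learned models.

```python
# Toy lexicon-based sentiment classifier. The word sets are illustrative
# only; a real lexicon (or a trained classifier) would be far larger.
POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "poor", "hate", "sad"}

def sentiment(text: str) -> str:
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("the service was great and the staff were excellent"))  # positive
```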
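For the spelling correction direction, a classic starting point is to map each out-of-vocabulary token to the dictionary word with the smallest Levenshtein edit distance. The tiny vocabulary below is a stand-in for illustration; a practical corrector would combine a large lexicon with a language model to rank candidates in context.

```python
# Toy edit-distance spelling corrector. VOCAB is an illustrative stand-in
# for a real lexicon.
def edit_distance(a: str, b: str) -> int:
    # Dynamic-programming Levenshtein distance (insert/delete/substitute).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # delete from a
                           cur[j - 1] + 1,       # insert into a
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

VOCAB = {"language", "processing", "natural", "model", "training"}

def correct(word: str) -> str:
    if word in VOCAB:
        return word
    return min(VOCAB, key=lambda w: edit_distance(word, w))

print(correct("langauge"))  # language
```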