“The most important general-purpose technology of our era is Artificial Intelligence, particularly Machine Learning” – Harvard Business Review, July 2017.

Our research group focuses on fundamental problems of Machine Learning in general and Deep Learning in particular. We also study how to apply modern Machine Learning technologies to other application areas. See the slides here for more detail.

Contact: Assoc. Prof. Than Quang Khoat, Email:

Research Directions

  • Continual learning: Explore new models and methods that help a machine learn continually from task to task, or when data arrive sequentially and endlessly.
  • Deep generative models: Explore novel models that can generate realistic data (images, videos, music, art, materials,…). Recent families include Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Diffusion Probabilistic Models (DPMs).
  • Theoretical foundations of Deep Learning: Explore why deep neural networks often generalize well, and why overparameterized models can generalize so well in practice. Explore the conditions under which machine learning models achieve high generalization.
  • Representation learning: Explore novel ways to learn latent representations of data that can boost the performance of machine learning models across different applications.
  • Recommender systems: Explore the efficiency of modern machine learning models in recommender systems.
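As a small illustration of the recommender-systems direction, the sketch below factorizes a sparse user–item rating matrix with plain stochastic gradient descent. All of the data, the latent dimension, and the learning rate here are hypothetical, chosen only to keep the example self-contained; it is a minimal sketch, not a description of any particular model studied by the group.

```python
import random

# Hypothetical sparse ratings: (user, item, rating) triples; most entries are missing.
ratings = [(0, 0, 5.0), (0, 2, 3.0), (1, 0, 4.0), (1, 1, 1.0),
           (2, 1, 2.0), (2, 2, 5.0), (3, 0, 5.0), (3, 2, 4.0)]
n_users, n_items, k = 4, 3, 2   # k = number of latent factors

random.seed(0)
P = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]  # user factors
Q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]  # item factors

def predict(u, i):
    # Predicted rating is the inner product of user and item latent vectors.
    return sum(P[u][f] * Q[i][f] for f in range(k))

lr, reg = 0.05, 0.02
for epoch in range(200):
    for u, i, r in ratings:
        err = r - predict(u, i)
        for f in range(k):
            pu, qi = P[u][f], Q[i][f]
            P[u][f] += lr * (err * qi - reg * pu)   # SGD step with L2 regularization
            Q[i][f] += lr * (err * pu - reg * qi)

rmse = (sum((r - predict(u, i)) ** 2 for u, i, r in ratings) / len(ratings)) ** 0.5
print(f"training RMSE: {rmse:.3f}")
```

Even this toy version shows why sparsity matters: only the observed triples drive the updates, so users or items with few ratings get poorly constrained latent vectors.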

Some Research Problems

  • Why does catastrophic forgetting occur when learning continually from task to task, and how can it be avoided? What is an efficient way to balance different sources of knowledge?
  • Why are noise and sparsity so challenging when working with data streams, where data may arrive sequentially and endlessly? How can these challenges be overcome?
  • How can we design novel models that generate realistic data (images, videos, music, art, materials,…)?
  • Why can such generative models generalize well even though most are unsupervised in nature?
  • Can adversarial models really generalize when different players use different losses?
  • Why do deep neural networks often generalize well?
  • Why do deep neural networks often suffer from adversarial attacks or noise? What are the fundamental causes, and how can they be overcome?
  • Why can overparameterized models generalize really well in practice?
  • What are the necessary conditions for high generalization of machine learning models?
  • What are the criteria for ensuring that a learnt latent representation of data is good? What are good criteria for learning a new latent space?
  • Does self-supervised learning really generalize?
  • Why is extreme sparsity such a severe challenge in recommender systems? How can sparsity be handled efficiently?
  • Can modeling high-order interactions between users and items help improve the effectiveness of recommender systems?
  • Can modeling sequential behaviors of online users help improve the effectiveness of recommender systems?
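The catastrophic-forgetting question above can be reproduced in a toy experiment. In the sketch below (all data and hyperparameters are hypothetical, purely for illustration), a one-parameter linear model is trained on task A, then trained on task B alone, and its error on task A is measured again: plain gradient descent overwrites what was learned on A.

```python
# Toy illustration of catastrophic forgetting with a one-parameter model y = w * x.
task_a = [(x, 2.0 * x) for x in [-2.0, -1.0, 1.0, 2.0]]   # task A: true slope +2
task_b = [(x, -3.0 * x) for x in [-2.0, -1.0, 1.0, 2.0]]  # task B: true slope -3

def mse(w, data):
    # Mean squared error of the model y = w * x on the given data.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, lr=0.05, epochs=100):
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x   # gradient step on the squared error
    return w

w = 0.0
w = train(w, task_a)
loss_a_before = mse(w, task_a)   # near zero: task A has been learned
w = train(w, task_b)             # continue training on task B only
loss_a_after = mse(w, task_a)    # large: task A has been forgotten
print(f"loss on A after training on A: {loss_a_before:.4f}, after B: {loss_a_after:.4f}")
```

Continual-learning methods aim to avoid exactly this behavior, for example by regularizing the new task's updates toward parameters that mattered for earlier tasks.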

Team Members

Assoc. Prof. Than Quang Khoat
Team Leader

MSc. Ngo Van Linh

Projects and Solutions

Latest Publications

Publications in 2022

  1. Tran Xuan Bach, Nguyen Duc Anh, Ngo Van Linh, Khoat Than. Dynamic Transformation of Prior Knowledge Into Bayesian Models for Data Streams. IEEE Transactions on Knowledge and Data Engineering. 1-9. 23/12/2021
  2. Son-Tung Tran, Van-Hung Le, Van-Nam Hoang, Khoat Than, Thanh-Hai Tran, Hai Vu, Thi-Lan Le. A Local Structure-aware 3D Hand Pose Estimation Method for Egocentric Videos. 2022 IEEE Ninth International Conference on Communications and Electronics (ICCE). Nha Trang. 27/07/2022
  3. Dieu Vu, Khang Truong, Khanh Nguyen, Linh Ngo Van, Khoat Than. Revisiting Supervised Word Embeddings. Journal of Information Science and Engineering. 413-427. 17/08/2020
  4. Linh Ngo Van, Nam Le Hai, Hoang Pham, Khoat Than. Auxiliary Local Variables for Improving Regularization/Prior Approach in Continual Learning. PAKDD: Pacific-Asia Conference on Knowledge Discovery and Data Mining. 16-28. Chengdu, China. 15/05/2022
  5. Tung Doan and Atsuhiro Takasu. Kernel Clustering With Sigmoid Regularization for Efficient Segmentation of Sequential Data. IEEE Access. 62848-62862. 03/06/2022
  6. Hoang Phan, Anh Phan Tuan, Son Nguyen, Ngo Van Linh, Khoat Than. Reducing Catastrophic Forgetting in Neural Networks via Gaussian Mixture Approximation. PAKDD: Pacific-Asia Conference on Knowledge Discovery and Data Mining. 106-117. Chengdu, China. 15/05/2022
  7. Quang-Hien Kha, Thi-Oanh Tran, Trinh-Trung-Duong Nguyen, Van-Nui Nguyen, Khoat Than, Nguyen Quoc Khanh Le. An interpretable deep learning model for classifying adaptor protein complexes from sequence information. Methods. 90-96. 22/09/2022
  8. Khoa Nguyen, Dung Nguyen, Nghia Vu, Khoat Than. Random Generative Adversarial Networks. The 11th International Symposium on Information and Communication Technology. 01/12/2022
  9. Quyen Tran, Lam Tran, Linh Chu Hai, Linh Ngo Van, Khoat Than. From implicit to explicit feedback: A deep neural network for modeling sequential behaviours and long-short term preferences of online users. Neurocomputing. 89-105. 14/01/2022
  10. Thi-Thanh Ha, Van-Nha Nguyen, Kiem-Hieu Nguyen, Kim-Anh Nguyen, Quang-Khoat Than. Utilizing SBERT For Finding Similar Questions in Community Question Answering. 13th International Conference on Knowledge and Systems Engineering (KSE). 1-6. Bangkok, Thailand. 10/11/2021
  11. Hieu Man Duc Trong, Nghia Ngo Trung, Linh Ngo Van, Thien Huu Nguyen. Selecting Optimal Context Sentences for Event-Event Relation Extraction. AAAI Conference on Artificial Intelligence. 11058-11066. USA. 22/02/2022
  12. Viet Dac Lai, Hieu Man, Linh Ngo, Franck Dernoncourt, Thien Huu Nguyen. Multilingual SubEvent Relation Extraction: A Novel Dataset and Structure Induction Method. Conference on Empirical Methods in Natural Language Processing. 5559–5570. Abu Dhabi, UAE. 07/12/2022
  13. Tung Nguyen, Trung Mai, Nam Nguyen, Linh Ngo Van, Khoat Than. Balancing stability and plasticity when learning topic models from short and noisy text streams. Neurocomputing. 30-43. 12/07/2022
  14. Nghia Ngo Trung, Linh Ngo Van, Thien Huu Nguyen. Unsupervised Domain Adaptation for Text Classification via Meta Self-Paced Learning. International Conference on Computational Linguistics. 4741–4752. 12/10/2022
  15. Ha Nguyen, Hoang Pham, Son Nguyen, Ngo Van Linh, Khoat Than. Adaptive infinite dropout for noisy and sparse data streams. Machine Learning. 1-36. 16/03/2022
  16. Thi Hong Vuong, Tung Doan, Atsuhiro Takasu. Sensor data alignment for multi-view bridge monitoring. International Conference on Bridge Maintenance, Safety and Management (IABMAS). 1598-1606. Barcelona, Spain. 11/07/2022

Publications in 2021

  1. Ngo Van Linh, Bach Tran Xuan, Khoat Than. A graph convolutional topic model for short and noisy text streams. Neurocomputing. 345-359. 19/10/2021
  2. Tien-Cuong Nguyen, Van-Quyen Nguyen, Van Linh Ngo, Khoat Than, Tien-Lam Pham. Learning hidden chemistry with deep neural networks. Computational Materials Science. 1-7. 05/08/2021
  3. Duc Anh Nguyen, Ngo Van Linh, Nguyen Kim Anh, Canh Hao Nguyen, Khoat Than. Boosting prior knowledge in streaming variational Bayes. Neurocomputing. 143-159. 01/02/2021
  4. Anh Son TA. Solving problem. NICS. 17/01/2021
  5. Son Nguyen, Duong Nguyen, Khai Nguyen, Khoat Than, Hung Bui, Nhat Ho. Structured Dropout Variational Inference for Bayesian Neural Networks. 35th Conference on Neural Information Processing Systems (NeurIPS). 1-9. Sydney, Australia. 06/12/2021
  6. Van Tinh Nguyen, Thi Tu Kien Le, Khoat Than, Dang Hung Tran. Predicting miRNA–disease associations using improved random walk with restart and integrating multiple similarities. Scientific Reports. 21071. 15/10/2021