“The most important general-purpose technology of our era is Artificial Intelligence, particularly Machine Learning” – Harvard Business Review, July 2017.
Our research group focuses on fundamental problems in Machine Learning in general and Deep Learning in particular. We also study how to apply modern Machine Learning techniques in other application areas. See the slides here for more detail.
Contact: Assoc. Prof. Than Quang Khoat, Email: firstname.lastname@example.org
- Continual learning: Explore new models and methods that help a machine learn continually from task to task, or when data may arrive sequentially and without end.
- Deep generative models: Explore novel models that can generate realistic data (images, videos, music, art, materials, …). Some recent models include Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Diffusion Probabilistic Models (DPMs).
- Theoretical foundations of Deep Learning: Explore why deep neural networks often generalize well, and why overparameterized models can generalize so well in practice. Explore the conditions under which machine learning models generalize well.
- Representation learning: Explore novel ways to learn latent representations of data that can boost the performance of machine learning models across different applications.
- Recommender systems: Explore the efficiency and effectiveness of modern machine learning models in recommender systems.
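As one concrete illustration of the deep generative models topic above, the sketch below shows the reparameterization trick and the closed-form KL regularizer that appear in the VAE objective (ELBO). This is a minimal NumPy sketch, not any particular implementation, and all function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, I),
    so gradients can flow through mu and log_var during training."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), the regularizer
    in the VAE objective: 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# When the approximate posterior equals the prior (mu = 0, log_var = 0),
# the KL term vanishes.
print(kl_to_standard_normal(np.zeros(4), np.zeros(4)))  # → 0.0
```

The reparameterization step is what makes the sampling differentiable; without it, the stochastic node would block gradient-based training of the encoder.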
Some Research Problems
- Why does catastrophic forgetting occur, and how can it be avoided when learning continually from task to task? What is an efficient way to balance different sources of knowledge?
- Why are noise and sparsity so challenging when working with data streams, where data may arrive sequentially and without end? How can those challenges be overcome?
- Explore novel models that can generate realistic data (images, videos, music, art, materials,…).
- Why can these generative models generalize well, even though most are unsupervised in nature?
- Can adversarial models really generalize when different players use different losses?
- Why do deep neural networks often generalize well?
- Why are deep neural networks often vulnerable to adversarial attacks or noise? What are the fundamental causes, and how can they be overcome?
- Why can overparameterized models generalize really well in practice?
- What are the necessary conditions for high generalization of machine learning models?
- What criteria ensure that a learnt latent representation of data is good? What are good criteria for learning a new latent space?
- Does self-supervised learning really generalize?
- Why is extreme sparsity such a severe challenge in recommender systems? How can sparsity be handled efficiently?
- Can modeling high-order interactions between users and items help improve the effectiveness of recommender systems?
- Can modeling sequential behaviors of online users help improve the effectiveness of recommender systems?
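To make the sparsity challenge above concrete, the following toy sketch trains a matrix-factorization recommender by stochastic gradient descent on the observed entries only: most of the rating matrix is missing, which is exactly the regime the questions above target. The data, hyperparameters, and variable names are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy user-item rating matrix; 0 marks a missing (unobserved) entry.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
observed = R > 0  # in real systems, the vast majority of entries are unobserved

n_users, n_items, k = R.shape[0], R.shape[1], 2
P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors

lr, reg = 0.01, 0.02
for _ in range(2000):
    # Gradient steps only on observed (user, item) pairs.
    for u, i in zip(*np.nonzero(observed)):
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

pred = P @ Q.T   # dense predictions, including the unobserved cells
rmse = np.sqrt(np.mean((pred[observed] - R[observed]) ** 2))
print(round(rmse, 3))  # small training error on the observed entries
```

The learned factors fill in the unobserved cells, but with so few observations per user, the quality of those predictions degrades quickly, which is why extreme sparsity remains a central research question.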