About Us

This is the MoE Key Laboratory of High Confidence Software Technologies (MoE Lab), supervised by Prof. Wong Kam-Fai, at the Department of Systems Engineering and Engineering Management (SEEM), The Chinese University of Hong Kong (CUHK). We focus on the research and development of theory, systems, and applications for Human-centric Generative AI in Forgettability, Reliability, Adaptability, Multiplicity, and Explainability (FRAME). Details of our research plan are as follows.

If you're interested in the following research directions, feel free to Join Us.

🔭 Scope
Natural Language Processing (NLP)
Artificial Intelligence (AI)
Information Retrieval (IR)
Knowledge Discovery & Data Mining (KDD)
🗞️ News
[12-2023] 🔈 4 students participated in EMNLP 2023, Singapore!
[11-2023] 🔈 Prof. WONG and 1 student participated in AACL 2023, Bali, Indonesia!
[10-2023] 🔈 5 papers have been accepted in EMNLP 2023!
[09-2023] 🔈 1 student participated in SIGDIAL 2023, Prague, Czechia!
[08-2023] 🔈 1 paper (Oral) has been accepted in AACL 2023!
[05-2023] 🔈 5 papers have been accepted in ACL 2023!

Research Plan - Human-Centric Generative AI

Forgettability - LLMs are typically trained on large amounts of user data containing sensitive personal information, creating risks of privacy violations. It is therefore crucial to help LLMs forget specific training data while maintaining model performance, which can also help address harmful and toxic data, as well as privacy-sensitive scenarios.
📑 Our Related Work: KGA (ACL 2023).

Reliability - Current LLMs suffer from severe hallucination issues that may mislead users with false information. We focus on developing reliable AI systems that implicitly elicit the model's ability to express factual knowledge, or explicitly use external tools, so that LLMs generate more trustworthy, factually correct outputs.
📑 Our Related Work: ORIG (ACL 2023), FactDial (EMNLP 2023), CONNER (EMNLP 2023).

Adaptability - Although advances in LLMs have greatly enhanced various comprehension and generation tasks, their reasoning and planning abilities remain a focal point for improvement. Our objective is to unveil and enhance the internal reasoning capacity of LLMs, adapting their planning capabilities for seamless interaction with the external environment.
📑 Our Related Work: TPE, SAFARI (EMNLP 2023), Cue-CoT (EMNLP 2023), MCML.

Multiplicity - Accurately understanding and analyzing multi-modal documents is highly challenging in AI. We are developing a unified framework that integrates language and visual information from multimodal documents into a generative pre-training approach, targeting state-of-the-art performance on fine-grained downstream tasks.
📑 Our Related Work: Visual .

Explainability - LLMs contain vast numbers of parameters and non-linear operations, which obscures their inner workings and prevents people from understanding the basis for their predictions. Explainable AI is a set of techniques for interpreting AI models, enabling understanding of a model's prediction logic.
📑 Our Related Work: ReadPrompt (EMNLP 2023).