Minghua (Richardo) He πŸ‘‹

Foundation Models & Reliable AI/Systems

I am a graduate student at the National Engineering Research Center for Software Engineering, Peking University. I have also had great experiences working at WeChat AI, Microsoft Research Asia, and Alibaba Group. My research interests lie broadly in Foundation Models and Reliable AI/Systems.

My research vision is to construct a virtuous cycle between AI and Systems. I am dedicated to bridging the gap between theoretical AI capabilities and real-world system requirements, striving to build intelligent systems that are not only powerful, but also reliable and efficient at scale.

Highlights:
✨ 5+ First/Co-first Author Papers in CCF-A Conferences
✨ Oral Presentations & Top Papers in Top-tier Conferences
✨ Research Internships at Top Tech Labs
✨ Core Contributor to Well-known Open-source Foundation Models
πŸ”₯ News
  • 2025/12 πŸŽ‰πŸŽ‰ We released WeDLM, the first diffusion LLM to outperform industrial AR engines (vLLM), achieving a 3Γ— speedup on reasoning and up to 10Γ— in generation! πŸš€
  • 2025/12 πŸŽ‰πŸŽ‰ Three papers accepted to ICSE 2026!
  • 2025/10 πŸŽ‰πŸŽ‰ Selected as an in-person volunteer for EMNLP 2025!
  • 2025/09 πŸŽ‰πŸŽ‰ Three papers accepted to ASE 2025!
  • 2025/08 πŸŽ‰πŸŽ‰ One paper accepted to ASE 2025 (Direct Accept, Top 9.5%), see you in Seoul! πŸ‡°πŸ‡·
  • 2025/08 πŸŽ‰πŸŽ‰ One paper accepted to EMNLP 2025 (Oral Presentation, Top 4.3%), see you in Suzhou! πŸ‡¨πŸ‡³
  • 2025/07 πŸŽ‰πŸŽ‰ Two papers accepted to ISSRE 2025!
  • 2025/03 πŸŽ‰πŸŽ‰ Three papers accepted to FSE 2025, see you in Trondheim! πŸ‡³πŸ‡΄
  • 2025/01 πŸŽ‰πŸŽ‰ One paper accepted to ICSE 2025, see you in Ottawa! πŸ‡¨πŸ‡¦
  • 2024/08 πŸŽ‰πŸŽ‰ One paper accepted to ISSRE 2024, see you in Tsukuba! πŸ‡―πŸ‡΅
πŸ“ Research

🀝 I'm open to collaborations on related projects; feel free to contact me!
Papers are sorted by recency. * indicates equal contribution.
🌟 Selected Publications

WeDLM: Reconciling Diffusion Language Models with Standard Causal Attention for Fast Inference

WeDLM Team: Aiwei Liu*, Minghua He*, Shaoxun Zeng, Sijun Zhang, Linhao Zhang, Chuhan Wu, Wei Jia, Yuan Liu, Xiao Zhou, Jie Zhou

Technical Report HuggingFace Trending Top 10 (8/2372000)
WeChat AI, Tencent Technical Report
TL;DR: WeDLM is a diffusion language model framework built on standard causal attention via Topological Reordering, enabling prefix-cache compatibility and streaming parallel decoding. It achieves up to 3Γ— speedup on reasoning benchmarks and 10Γ— in low-entropy regimes compared to vLLM-served AR baselines, making it the first DLLM to outperform industrial AR engines in wall-clock speed.

ExeCoder: Empowering Large Language Models with Executability Representation for Code Translation

Minghua He*, Yue Chen*, Fangkai Yang, Pu Zhao, Wenjie Yin, Yu Kang, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang

Tsinghua-A EMNLP 2025 Oral Presentation (356/8174, Top 4.3%)
Conference on Empirical Methods in Natural Language Processing
TL;DR: ExeCoder enhances LLM-based code translation by incorporating executability representations such as syntax and semantics.

United We Stand: Towards End-to-End Log-based Fault Diagnosis via Interactive Multi-Task Learning

Minghua He*, Chiming Duan*, Pei Xiao*, Tong Jia, Siyu Yu, Lingzhe Zhang, Weijie Hong, Jing Han, Yifan Wu, Ying Li, Gang Huang

CCF-A ASE 2025 Direct Accept (113/1190, Top 9.5%)
IEEE/ACM International Conference on Automated Software Engineering
TL;DR: We propose Chimera, an end-to-end framework that unifies anomaly detection and root cause localization through interactive multi-task learning and bidirectional knowledge transfer.

Weakly-supervised Log-based Anomaly Detection with Inexact Labels via Multi-instance Learning

Minghua He, Tong Jia, Chiming Duan, Huaqian Cai, Ying Li, Gang Huang

CCF-A ICSE 2025
IEEE/ACM International Conference on Software Engineering
TL;DR: We propose MIDLog, a weakly-supervised method using multi-instance learning to enable log anomaly detection with inexact, bag-level labels instead of fine-grained annotation.
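For readers unfamiliar with the setting, the sketch below is a minimal, generic multi-instance learning example (not MIDLog's actual architecture): a log session is treated as a bag of log lines, only the bag carries an anomaly label, and the model is trained by pooling per-line scores. The embedding dimension, scorer, and max-pooling choice are illustrative assumptions.

```python
# Generic multi-instance learning sketch for bag-level log labels
# (illustrative only; not the MIDLog model).
import torch
import torch.nn as nn

class BagMaxMIL(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Scores each log-line embedding individually.
        self.scorer = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (num_lines, dim) embeddings of one log session.
        line_scores = self.scorer(bag).squeeze(-1)  # (num_lines,)
        return line_scores.max()                    # bag-level anomaly logit

model = BagMaxMIL(dim=128)
loss_fn = nn.BCEWithLogitsLoss()
bag = torch.randn(50, 128)      # 50 toy log-line embeddings
bag_label = torch.tensor(1.0)   # "this session contains an anomaly"
loss = loss_fn(model(bag), bag_label)
loss.backward()
```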

πŸš€ Preprints

WeDLM: Reconciling Diffusion Language Models with Standard Causal Attention for Fast Inference

WeDLM Team: Aiwei Liu*, Minghua He*, Shaoxun Zeng, Sijun Zhang, Linhao Zhang, Chuhan Wu, Wei Jia, Yuan Liu, Xiao Zhou, Jie Zhou

Technical Report HuggingFace Trending Top 10 (8/2372000)
WeChat AI, Tencent Technical Report
TL;DR: WeDLM is a diffusion language model framework built on standard causal attention via Topological Reordering, enabling prefix-cache compatibility and streaming parallel decoding. It achieves up to 3Γ— speedup on reasoning benchmarks and 10Γ— in low-entropy regimes compared to vLLM-served AR baselines, making it the first DLLM to outperform industrial AR engines in wall-clock speed.

d-TreeRPO: Towards More Reliable Policy Optimization for Diffusion Language Models

Leyi Pan, Shuchang Tao, Yunpeng Zhai, Zheyu Fu, Liancheng Fang, Minghua He, Lingzhe Zhang, Zhaoyang Liu, Bolin Ding, Aiwei Liu, Lijie Wen

Preprint
arXiv Preprint
TL;DR: We propose d-TreeRPO, a reliable RL framework for diffusion LLMs that leverages tree-structured rollouts and bottom-up advantage computation with verifiable rewards, achieving significant gains on reasoning benchmarks.

A Survey on Parallel Text Generation: From Parallel Decoding to Diffusion Language Models

Lingzhe Zhang*, Liancheng Fang*, Chiming Duan*, Minghua He*, Leyi Pan*, Pei Xiao, Shiyu Huang, Yunpeng Zhai, Xuming Hu, Philip S. Yu, Aiwei Liu

Preprint
arXiv Preprint
TL;DR: A comprehensive survey of parallel text generation techniques, from parallel decoding to the latest diffusion language models.

MicroRemed: Benchmarking LLMs in Microservices Remediation

Lingzhe Zhang, Yunpeng Zhai, Tong Jia, Chiming Duan, Minghua He, Leyi Pan, Zhaoyang Liu, Bolin Ding, Ying Li

Preprint
arXiv Preprint
TL;DR: We introduce MicroRemed, the first benchmark for evaluating LLMs in end-to-end microservice remediation, from diagnosis reports directly to executable Ansible playbooks.

CodeAD: Synthesize Code of Rules for Log-based Anomaly Detection with LLMs

Junjie Huang, Minghua He, Jinyang Liu, Yintong Huo, Domenico Bianculli, Michael R. Lyu

Preprint
arXiv Preprint
TL;DR: We present CodeAD, a novel framework that automatically synthesizes lightweight Python rule functions for log-based anomaly detection (LogAD) using LLMs, achieving a 3.6% F1 improvement while processing datasets 4Γ— faster at a fraction of the cost.
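To make the idea concrete, here is a hypothetical example of the kind of lightweight rule function such a framework might synthesize; the rule, its name, and the threshold are illustrative assumptions, not taken from the CodeAD paper.

```python
# Hypothetical synthesized rule function for log-based anomaly detection
# (illustrative only; not from the CodeAD paper).
import re

def rule_repeated_auth_failure(log_lines: list[str], threshold: int = 3) -> bool:
    """Flag a log window as anomalous if authentication failures repeat."""
    pattern = re.compile(r"authentication failure|invalid password", re.IGNORECASE)
    failures = sum(1 for line in log_lines if pattern.search(line))
    return failures >= threshold

window = [
    "Jan 10 12:00:01 sshd[231]: authentication failure for user root",
    "Jan 10 12:00:02 sshd[231]: Invalid password for user root",
    "Jan 10 12:00:05 sshd[231]: authentication failure for user root",
]
print(rule_repeated_auth_failure(window))  # True
```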

Duet: Joint Exploration of User–Item Profiles

Yue Chen, Lu Wang, Minjie Hong, Pu Zhao, Fangkai Yang, Yifei Dong, Minghua He, Nan Hu, Jianjin Zhang, Zhiwei Dai, Yuefeng Zhan, Weihao Han, Hao Sun, Qingwei Lin, Weiwei Deng, Feng Sun, Qi Zhang, Saravan Rajmohan, Dongmei Zhang

Preprint
arXiv Preprint
TL;DR: We propose DUET, a closed-loop framework for joint exploration of user-item textual profiles in recommendation systems. It distills raw data into concise cues, expands them into rich profiles via self-prompt construction, and optimizes profiles jointly with reinforcement learning using downstream recommendation feedback, while enabling interpretable, LLM-compatible representations.

SupCLAP: Controlling Optimization Trajectory Drift in Audio-Text Contrastive Learning with Support Vector Regularization

Jiehui Luo, Yuguo Yin, Yuxin Xie, Jinghan Ru, Xianwei Zhuang, Minghua He, Aofan Liu, Zihan Xiong, Dongchao Yang

Preprint
arXiv Preprint
TL;DR: We propose Support Vector Regularization (SVR) to control optimization trajectory drift in contrastive language-audio pretraining by using an auxiliary support vector to harness rich information from negative samples while improving training stability.

Adaptive Root Cause Localization for Microservice Systems with Multi-Agent Recursion-of-Thought

Lingzhe Zhang, Tong Jia, Kangjin Wang, Weijie Hong, Chiming Duan, Minghua He, Ying Li

Preprint
arXiv Preprint
TL;DR: Inspired by how human SREs operate, we introduce RCLAgent, a multi-agent recursion-of-thought framework that accurately localizes the root cause of failures in microservice systems using only a single request.

WarriorMath: Enhancing the Mathematical Ability of Large Language Models with a Defect-aware Framework

Yue Chen*, Minghua He*, Fangkai Yang, Pu Zhao, Lu Wang, Yu Kang, Yifei Dong, Yuefeng Zhan, Hao Sun, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang

Preprint
arXiv Preprint
TL;DR: We propose WarriorMath, a defect-aware framework that improves LLM mathematical reasoning through targeted data synthesis and progressive training.

FusionLog: Cross-System Log-based Anomaly Detection via Fusion of General and Proprietary Knowledge

Xinlong Zhao, Tong Jia, Minghua He, Xixuan Yang, Ying Li

Preprint
arXiv Preprint
TL;DR: FusionLog routes unlabeled target logs by semantic similarity, applies meta-learned small models to general logs, and distills LLM-guided pseudo labels for proprietary logs to fuse knowledge without target labels.

πŸ“„ Publications

Generality Is Not Enough: Zero-Label Cross-System Log-Based Anomaly Detection via Knowledge-Level Collaboration

Xinlong Zhao, Tong Jia, Minghua He, Ying Li

CCF-A ICSE-NIER 2026
IEEE/ACM International Conference on Software Engineering
TL;DR: GeneralLog has LLMs and small models collaborate at the knowledge level, routing proprietary logs to the LLM and general logs to small models, reaching over 90% F1 under fully zero-label settings.

LogAction: Consistent Cross-system Anomaly Detection through Logs via Active Domain Adaptation

Chiming Duan*, Minghua He*, Pei Xiao, Tong Jia, Xin Zhang, Zhewei Zhong, Xiang Luo, Yan Niu, Lingzhe Zhang, Yifan Wu, Siyu Yu, Weijie Hong, Ying Li, Gang Huang

CCF-A ASE 2025
IEEE/ACM International Conference on Automated Software Engineering
TL;DR: We propose LogAction, a framework that integrates transfer and active learning to achieve high-performance cross-system anomaly detection with minimal labeling effort.

CoorLog: Efficient-Generalizable Log Anomaly Detection via Adaptive Coordinator in Software Evolution

Pei Xiao*, Chiming Duan*, Minghua He*, Tong Jia, Yifan Wu, Jing Xu, Gege Gao, Lingzhe Zhang, Weijie Hong, Ying Li, Gang Huang

CCF-A ASE 2025
IEEE/ACM International Conference on Automated Software Engineering
TL;DR: We propose CoorLog, a framework using an adaptive coordinator for efficient and generalizable log anomaly detection, especially in evolving software systems.

United We Stand: Towards End-to-End Log-based Fault Diagnosis via Interactive Multi-Task Learning

Minghua He*, Chiming Duan*, Pei Xiao*, Tong Jia, Siyu Yu, Lingzhe Zhang, Weijie Hong, Jing Han, Yifan Wu, Ying Li, Gang Huang

CCF-A ASE 2025 Direct Accept (113/1190, Top 9.5%)
IEEE/ACM International Conference on Automated Software Engineering
TL;DR: We propose Chimera, an end-to-end framework that unifies anomaly detection and root cause localization through interactive multi-task learning and bidirectional knowledge transfer.

Walk the Talk: Is Your Log-based Software Reliability Maintenance System Really Reliable?

Minghua He, Tong Jia, Chiming Duan, Pei Xiao, Lingzhe Zhang, Kangjin Wang, Yifan Wu, Ying Li, Gang Huang

CCF-A ASE-NIER 2025
IEEE/ACM International Conference on Automated Software Engineering
TL;DR: We introduce 'diagnostic faithfulness' as a key metric and propose FaithLog, a system that enhances model trustworthiness via a causality-guided attention mechanism.

ExeCoder: Empowering Large Language Models with Executability Representation for Code Translation

Minghua He*, Yue Chen*, Fangkai Yang, Pu Zhao, Wenjie Yin, Yu Kang, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang

Tsinghua-A EMNLP 2025 Oral Presentation (356/8174, Top 4.3%)
Conference on Empirical Methods in Natural Language Processing
TL;DR: ExeCoder enhances LLM-based code translation by incorporating executability representations such as syntax and semantics.

ZeroLog: Zero-Label Generalizable Cross-System Log-based Anomaly Detection

Xinlong Zhao, Tong Jia, Minghua He, Ying Li, Gang Huang

CCF-B ISSRE 2025
International Symposium on Software Reliability Engineering
TL;DR: We introduce ZeroLog, a framework for zero-label, generalizable cross-system anomaly detection using logs.

CSLParser: A Collaborative Framework Using Small and Large Language Models for Log Parsing

Weijie Hong, Yifan Wu, Lingzhe Zhang, Chiming Duan, Pei Xiao, Minghua He, Xixuan Yang, Ying Li

CCF-B ISSRE 2025
International Symposium on Software Reliability Engineering
TL;DR: CSLParser presents a collaborative framework where small and large language models work together for efficient log parsing.

From Few-Label to Zero-Label: An Approach for Cross-System Log-Based Anomaly Detection with Meta-Learning

Xinlong Zhao, Tong Jia, Minghua He, Yihan Wu, Ying Li, Gang Huang

CCF-A FSE-IVR 2025
ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
TL;DR: We propose FreeLog, a system-agnostic meta-learning approach for cross-system log anomaly detection that requires no labeled data from the target system.

Exploring Variable Potential for LLM-based Log Parsing Efficiency and Reduced Costs

Jinrui Sun, Tong Jia, Minghua He, Yihan Wu, Ying Li, Gang Huang

CCF-A FSE-IVR 2025
ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
TL;DR: We propose VISTA, a variable-centric strategy that improves the efficiency and reduces the cost of LLM-based log parsing through novel sampling, caching, and ICL techniques.

CLSLog: Collaborating Large and Small Models for Log-based Anomaly Detection

Pei Xiao, Tong Jia, Chiming Duan, Minghua He, Weijie Hong, Xixuan Yang, Yihan Wu, Ying Li, Gang Huang

CCF-A FSE-IVR 2025
ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
TL;DR: We propose CLSLog, a collaborative scheme combining LLM generalization and small-model efficiency to effectively handle evolving logs in anomaly detection.

Weakly-supervised Log-based Anomaly Detection with Inexact Labels via Multi-instance Learning

Minghua He, Tong Jia, Chiming Duan, Huaqian Cai, Ying Li, Gang Huang

CCF-A ICSE 2025
IEEE/ACM International Conference on Software Engineering
TL;DR: We propose MIDLog, a weakly-supervised method using multi-instance learning to enable log anomaly detection with inexact, bag-level labels instead of fine-grained annotation.

LLMeLog: An Approach for Anomaly Detection based on LLM-enriched Log Events

Minghua He, Tong Jia, Chiming Duan, Huaqian Cai, Ying Li, Gang Huang

CCF-B ISSRE 2024
International Symposium on Software Reliability Engineering
TL;DR: We propose LLMeLog, which leverages LLMs to enrich log event semantics and fine-tunes a BERT model on the enriched data, significantly boosting anomaly detection accuracy.

🎨 Miscellaneous

Outside of research, I recharge through travel, fitness, and landscape photography. I've chased auroras at the edge of the earth and I'm already dreaming of the next adventure.