Talk title: Rethinking the Expressive Power of GNNs via Graph Biconnectivity
Abstract: Designing expressive Graph Neural Networks (GNNs) is a central topic in learning graph-structured data. While numerous approaches have been proposed to improve GNNs with respect to the Weisfeiler-Lehman (WL) test, for most of them there is still a lack of deep understanding of what additional power they can systematically and provably gain. In this work, we take a fundamentally different perspective to study the expressive power of GNNs beyond the WL test. Specifically, we introduce a novel class of expressivity metrics via graph biconnectivity and highlight their importance in both theory and practice. Since biconnectivity can be easily computed by simple algorithms with linear cost, it is natural to expect that popular GNNs can learn it easily as well. However, after a thorough review of prior GNN architectures, we surprisingly find that most of them are not expressive for any of these metrics. We introduce a principled and efficient approach called Generalized Distance Weisfeiler-Lehman (GD-WL), which is provably expressive for all biconnectivity metrics. Practically, we show GD-WL can be implemented by a Transformer-like architecture that preserves expressiveness and enjoys full parallelizability. Experiments on both synthetic and real datasets demonstrate that our approach consistently outperforms prior GNN architectures.
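To illustrate the abstract's claim that biconnectivity is cheap to compute, here is a minimal sketch of the classic linear-time DFS (Tarjan-style) algorithm for the two basic biconnectivity metrics: cut vertices (articulation points) and bridges. The adjacency-dict representation and function name are illustrative choices, not part of the talk's material.

```python
def biconnectivity(adj):
    """Return (cut_vertices, bridges) of an undirected graph.

    adj: dict mapping each vertex to an iterable of its neighbours.
    Runs in O(V + E) time via a single depth-first search.
    """
    disc, low = {}, {}              # discovery times and low-link values
    cut_vertices, bridges = set(), set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                    # back edge: update low-link
                low[u] = min(low[u], disc[v])
            else:                            # tree edge: recurse
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:         # nothing below v reaches above u
                    bridges.add(frozenset((u, v)))
                if parent is not None and low[v] >= disc[u]:
                    cut_vertices.add(u)
        # the DFS root is a cut vertex iff it has >= 2 DFS children
        if parent is None and children >= 2:
            cut_vertices.add(u)

    for s in adj:                   # cover disconnected graphs
        if s not in disc:
            dfs(s, None)
    return cut_vertices, bridges
```

For example, on two triangles joined by the edge (2, 3), vertices 2 and 3 are cut vertices and (2, 3) is a bridge. That these quantities are this easy to compute is exactly what makes the paper's finding surprising: most prior GNNs still cannot distinguish graphs that differ in them.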
Di He is an Assistant Professor and doctoral supervisor at Peking University. He graduated from Peking University and previously served as a Senior Researcher at Microsoft Research Asia. His research spans natural language processing, graph neural networks, and applying machine learning to scientific discovery. Graphormer, the graph neural network designed by his team, took first place in the KDD Cup 2021 molecular property prediction challenge and the NeurIPS 2021 molecular dynamics simulation challenge, beating competitors including DeepMind and Facebook AI Research. He has published dozens of papers at the top machine learning conferences ICML, NeurIPS, and ICLR, and has long served as an area chair and reviewer for top machine learning conferences. Di He is also a recipient of an ICLR 2023 Outstanding Paper Award.