Towards Verifiable Autonomous Systems with NeuroSymbolic Reasoning

Speaker

A/Prof. Xi (James) Zheng, ARC Future Fellow, Macquarie University

Abstract

Learning-enabled autonomous systems—such as self-driving vehicles and intelligent drones—pose unprecedented challenges for safety assurance due to the opaque and unpredictable nature of deep neural networks. This talk introduces NeuroStrata, a new neurosymbolic architecture for autonomous systems presented at FSE’25 and CAV’25, which marks a paradigm shift from black-box learning to interpretable, reasoning-based intelligence.

By integrating neural perception with symbolic reasoning, NeuroStrata enables certifiable AI, bridging the gap between data-driven adaptability and formal verifiability. This vision is now being realized through a neurosymbolic perception module deployed in collaboration with an Australian drone company, demonstrating real-world feasibility for safety-critical applications.

The framework has also garnered strong international support from leading institutions across Europe, Japan, and Singapore, including the University of Oxford, Université Paris-Saclay, the University of Hamburg, Osaka University, the University of Tokyo, Kyoto University, and SMU. Together, these efforts lay the foundation for a verifiable and certifiable AI ecosystem, redefining the future of trustworthy autonomous systems.

Bio

A/Prof. Xi Zheng (Macquarie University, Australia) is an ARC Future Fellow (2024–2028) whose research focuses on testing and verification of learning-enabled cyber-physical systems, with applications to autonomous vehicles and UAVs. He has secured over $2.4M in competitive funding and published extensively in top venues such as ICSE, FSE, and TSE. His research outputs have been adopted in industry by partners including Ant Group and UAV companies. Beyond research, he has taken on significant leadership and service roles, serving as TPC Chair (MobiQuitous 2026) and as an OC/TPC member (ICSE 2026, FSE 2026, PerCom 2026, CAV 2025). He also co-founded the TACPS workshop series and is a co-organizer of Shonan Seminar #235 and Dagstuhl Seminar 202501048 (2026) on neurosymbolic AI and LLMs for reliable autonomous systems.