Yiyang Feng (冯乙洋)


Hello everyone (づ。◕‿‿◕。)づ

My name is Yiyang Feng. I am a first-year PhD student in the Department of Computer Science at Stony Brook University, supervised by Prof. Jiawei Zhou; my academic advisor is Prof. Niranjan Balasubramanian.

In July 2025, I completed my master’s degree in Computer Science at EPFL, where I was very fortunate to be advised by Shaobo Cui and Prof. Boi Faltings at LIA, and by Zeming Chen and Prof. Antoine Bosselut at the NLP Lab. Before that, I received my bachelor’s degree in Automation from Xi’an Jiaotong University in July 2022, where I was advised by Prof. Zhongmin Cai. I was also a research intern at PSU NLP, advised by Prof. Rui Zhang.

I’m keen on various areas of Trustworthy Large Language Models (LLMs), with a special focus on:

  • Trustworthy Chain-of-Thought Reasoning: The o1 model popularized step-wise reasoning; however, its trustworthiness remains underexplored. I am interested in its robustness, hallucination propagation, and uncertainty quantification.
  • Trustworthy Causal Reasoning: Despite the advancements in LLMs, their ability to perform natural language reasoning is still far from satisfactory. I have been dedicated to studying defeasibility, uncertainty, and consistency in causal reasoning.
  • Controllable Text Generation: My research has focused on generating controllable text for targeted human needs in various applications, including heading generation, dichotomy quantification, and text-to-SQL systems.

News

Nov 05, 2025 I will present our paper Unraveling Misinformation Propagation in LLM Reasoning at the EMNLP 2025 virtual poster session! Check out our website and Twitter (X)!
Aug 25, 2025 Today I start as a PhD student in the Department of Computer Science at Stony Brook University. I’m super excited for my new journey. Go Seawolves!
Aug 20, 2025 Our paper Unraveling Misinformation Propagation in LLM Reasoning was accepted to Findings of EMNLP 2025!
Jul 17, 2025 I successfully passed my master’s thesis defense! ✨ Many thanks to Zeming and Antoine for their supervision!
Dec 10, 2024 Our paper Nuance Matters: Probing Epistemic Consistency in Causal Reasoning was accepted to AAAI 2025! ✨

Selected Publications

  1. Unraveling Misinformation Propagation in LLM Reasoning
    In Findings of EMNLP 2025, Long Paper, Nov 2025
  2. Unveiling the Art of Heading Design: A Harmonious Blend of Summarization, Neology, and Algorithm
    Shaobo Cui*, Yiyang Feng*, Yisong Mao, Yifan Hou, and Boi Faltings
    In Findings of ACL 2024, Long Paper, Aug 2024
  3. Conditional Dichotomy Quantification via Geometric Embedding
    Shaobo Cui, Wenqing Liu, Yiyang Feng, Jiawei Zhou, and Boi Faltings
    In Proceedings of ACL 2025, Long Papers, Oral Paper (2.5% of all papers), Jul 2025