Yiyang Feng (冯乙洋)
I am a third-year master’s student in Computer Science at EPFL, currently doing research with Shaobo Cui and Prof. Boi Faltings at LIA. I also collaborate with Prof. Jiawei Zhou at Stony Brook University.
Previously, I received my bachelor’s degree in Automation at Xi’an Jiaotong University in July 2022, where I was advised by Prof. Zhongmin Cai. I was also a research intern at PSU NLP, advised by Prof. Rui Zhang.
My research interests span Trustworthy Large Language Models (LLMs), with a particular focus on:
- Trustworthy Causal Reasoning: Despite rapid advances in LLMs, their natural language reasoning abilities remain far from satisfactory. My work focuses on defeasibility, uncertainty, and consistency in causal reasoning.
- Controllable Text Generation: My research focuses on generating text tailored to specific human needs across applications, including heading generation, dichotomy, and text-to-SQL systems.
- Trustworthy Chain-of-Thought Reasoning: The o1 model popularized step-wise reasoning, yet the trustworthiness of such reasoning remains underexplored. I am interested in its robustness, hallucination propagation, and uncertainty quantification.
News
- Dec 10, 2024: AAAI 2025 accepted our paper Nuance Matters: Probing Epistemic Consistency in Causal Reasoning
- Aug 27, 2024: Check out our new preprint Nuance Matters: Probing Epistemic Consistency in Causal Reasoning
- Aug 15, 2024: Check out our new paper Unveiling the Art of Heading Design: A Harmonious Blend of Summarization, Neology, and Algorithm (accepted to ACL 2024 Findings)
- May 16, 2024: ACL 2024 accepted two of our papers (Findings): Unveiling the Art of Heading Design: A Harmonious Blend of Summarization, Neology, and Algorithm and Exploring Defeasibility in Causal Reasoning! See you in Bangkok! ✈️
- Jan 06, 2024: Check out our new preprint Exploring Defeasibility in Causal Reasoning (accepted to ACL 2024 Findings)