Publications

See my Google Scholar profile for the most up-to-date list of papers.

2025

  1. arXiv
    When Do Transformers Learn Heuristics for Graph Connectivity?
    Qilin Ye*, Deqing Fu*, Robin Jia, and Vatsal Sharan
    In arXiv, 2025
    *Equal Contribution
  2. arXiv
    Zebra-CoT: A Dataset for Interleaved Vision Language Reasoning
    In arXiv, 2025
    *Equal Contribution
  3. arXiv
    Resa: Transparent Reasoning Models via SAEs
    Shangshang Wang, Julian Asilis, Ömer Faruk Akgül, Enes Burak Bilgin, Ollie Liu, Deqing Fu, and Willie Neiswanger
    In arXiv, 2025
  4. arXiv
    Textual Steering Vectors Can Improve Visual Understanding in Multimodal Large Language Models
    Woody Haosheng Gan*, Deqing Fu*, Julian Asilis*, Ollie Liu*, Dani Yogatama, Vatsal Sharan, Robin Jia, and Willie Neiswanger
    In arXiv, 2025
    *Equal Contribution
  5. arXiv
    FoNE: Precise Single-Token Number Embeddings via Fourier Features
    Tianyi Zhou, Deqing Fu, Mahdi Soltanolkotabi, Robin Jia, and Vatsal Sharan
    In arXiv, 2025
  6. NeurIPS
    VisualLens: Personalization through Visual History
    Wang Bill Zhu, Deqing Fu, Kai Sun, Yi Lu, Zhaojiang Lin, Seungwhan Moon, Kanika Narang, Mustafa Canim, Yue Liu, Anuj Kumar, and Xin Luna Dong
    In Conference on Neural Information Processing Systems (NeurIPS), 2025
  7. ICLR
    TLDR: Token-Level Detective Reward Model for Large Vision Language Models
    Deqing Fu, Tong Xiao, Rui Wang, Wang Zhu, Pengchuan Zhang, Guan Pang, Robin Jia, and Lawrence Chen
    In International Conference on Learning Representations (ICLR), 2025
  8. ICLR
    Transformers Learn Low Sensitivity Functions: Investigations and Implications
    Bhavya Vasudeva*, Deqing Fu*, Tianyi Zhou, Elliot Kau, You-Qi Huang, and Vatsal Sharan
    In International Conference on Learning Representations (ICLR), 2025
    *Equal Contribution
  9. ICLR
    DeLLMa: Decision Making Under Uncertainty with Large Language Models
    Ollie Liu*, Deqing Fu*, Dani Yogatama, and Willie Neiswanger
    In International Conference on Learning Representations (ICLR), 2025
    Spotlight (Top 5.1%), *Equal Contribution
  10. NAACL
    DreamSync: Aligning Text-to-Image Generation with Image Understanding Feedback
    Jiao Sun*, Deqing Fu*, Yushi Hu*, Su Wang, Royi Rassin, Da-Cheng Juan, Dana Alon, Charles Herrmann, Sjoerd van Steenkiste, Ranjay Krishna, and Cyrus Rashtchian
    In Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2025
    *Equal Contribution

2024

  1. NeurIPS
    Transformers Learn to Achieve Second-Order Convergence Rates for In-Context Linear Regression
    Deqing Fu, Tian-Qi Chen, Robin Jia, and Vatsal Sharan
    In Conference on Neural Information Processing Systems (NeurIPS), 2024
    SoCalNLP Symposium 2023 Best Paper Award
  2. NeurIPS
    Pre-trained Large Language Models Use Fourier Features to Compute Addition
    Tianyi Zhou, Deqing Fu, Vatsal Sharan, and Robin Jia
    In Conference on Neural Information Processing Systems (NeurIPS), 2024
  3. COLM
    IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations
    Deqing Fu*, Ruohao Guo*, Ghazal Khalighinejad*, Ollie Liu*, Bhuwan Dhingra, Dani Yogatama, Robin Jia, and Willie Neiswanger
    In Conference on Language Modeling (COLM), 2024
    *Equal Contribution

2023

  1. EMNLP
    SCENE: Self-Labeled Counterfactuals for Extrapolating to Negative Examples
    Deqing Fu, Ameya Godbole, and Robin Jia
    In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023