Rundi Wu | 吴润迪

I'm a final-year PhD student at Columbia University, advised by Changxi Zheng. Before that, I obtained my B.S. degree in 2020 from Turing Class, Peking University, where I was fortunate to work with Baoquan Chen. I interned at Tencent America in summer 2022, and at Google Research in summer 2023. I'm a recipient of the Columbia Engineering School Dean Fellowship.

I'm mostly interested in generative models, 3D vision, and graphics.

Email  /  CV  /  Github  /  Google Scholar  /  LinkedIn

profile photo
Research
Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis
Basile Van Hoorick, Rundi Wu, Ege Ozguroglu, Kyle Sargent,
Ruoshi Liu, Pavel Tokmakov, Achal Dave, Changxi Zheng, Carl Vondrick
ECCV 2024 (Oral)
project page | paper | code

A video-to-video model that synthesizes large-angle novel viewpoints of dynamic scenes.

PhysDreamer: Physics-Based Interaction with 3D Objects via Video Generation
Tianyuan Zhang, Hong-Xing "Koven" Yu, Rundi Wu, Brandon Y. Feng,
Changxi Zheng, Noah Snavely, Jiajun Wu, William T. Freeman
ECCV 2024 (Oral)
project page | paper | code

Enabling interaction with static 3D objects by distilling material parameters from video generation models.

ReconFusion: 3D Reconstruction with Diffusion Priors
Rundi Wu*, Ben Mildenhall*, Philipp Henzler, Keunhong Park, Ruiqi Gao, Daniel Watson, Pratul P. Srinivasan, Dor Verbin, Jonathan T. Barron, Ben Poole, Aleksander Holynski*
CVPR 2024
project page | arXiv

Building a multi-view conditioned diffusion model and using it as a prior to regularize radiance field reconstruction from only a few images.

Sin3DM: Learning a Diffusion Model from a Single 3D Textured Shape
Rundi Wu, Ruoshi Liu, Carl Vondrick, Changxi Zheng
ICLR 2024
project page | paper | code

A diffusion model trained on a single 3D textured shape, enabling the generation of diverse, high-quality variations of it.

Zero-1-to-3: Zero-shot One Image to 3D Object
Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, Carl Vondrick
ICCV 2023
project page | paper | code | demo

Finetuning Stable Diffusion on synthetic 3D data to enable zero-shot novel view synthesis from a single image of an object.

Implicit Neural Spatial Representations for Time-dependent PDEs
Honglin Chen*, Rundi Wu*, Eitan Grinspun, Changxi Zheng, Peter Yichen Chen
ICML 2023
project page | paper | code

Solving time-dependent PDEs by evolving an implicit neural spatial representation over time.

Learning to Generate 3D Shapes from a Single Example
Rundi Wu, Changxi Zheng
SIGGRAPH Asia 2022 (Journal Track)
project page | paper | code | video

Training a multi-scale patch GAN on a single example to generate 3D shapes that locally resemble the input.

Dynamic Sliding Window for Realtime Denoising Networks
Jinxu Xiang, Yuyang Zhu, Rundi Wu, Ruilin Xu, Yuko Ishiwaka, Changxi Zheng
ICASSP 2022
paper

A realtime speech denoising system with a dynamic sliding window.

DeepCAD: A Deep Generative Network for Computer-Aided Design Models
Rundi Wu, Chang Xiao, Changxi Zheng
ICCV 2021
project page | paper | code

Using transformers to encode and decode parametric CAD modeling sequences.

Listening to Sounds of Silence for Speech Denoising
Ruilin Xu, Rundi Wu, Yuko Ishiwaka, Carl Vondrick, Changxi Zheng
NeurIPS 2020
project page | paper | code

Denoising speech signals by recovering noise from "silent" time periods.

Multimodal Shape Completion via Conditional Generative Adversarial Networks
Rundi Wu*, Xuelin Chen*, Yixin Zhuang, Baoquan Chen
ECCV 2020 (Spotlight)
project page | paper | code

Using conditional GANs to complete a partial 3D scan with multiple plausible outputs.

PQ-NET: A Generative Part Seq2Seq Network for 3D Shapes
Rundi Wu, Yixin Zhuang, Kai Xu, Hao Zhang, Baoquan Chen
CVPR 2020
paper | code | slides

Using a sequence-to-sequence network to generate 3D shapes via sequential part assembly.

Learning Character-Agnostic Motion for Motion Retargeting in 2D
Kfir Aberman, Rundi Wu, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or
SIGGRAPH 2019
project page | paper | code | video

Transferring the video-captured motion from one performer to another by learning a character-agnostic motion representation.

Misc
  • I'm a big sports fan; I love snowboarding, F1, basketball, baseball, and soccer.
  • I enjoy playing strategy games. Europa Universalis IV is my favorite.