Bio
I'm a researcher at Columbia University, fortunate to be advised by Carl Vondrick. I also collaborate with the wonderful Andrew Owens on cross-modal learning, Alexandra Horowitz on olfaction and behavior, and Nikolaus Kriegeskorte on computational cognitive neuroscience.
Previously, I graduated from Columbia with a B.S. in Computer Science, where I was introduced to research by Shuran Song and to teaching by Paul Blaer. I am also an alumnus of Robert College of Istanbul, where I completed my high school education.
Research
Toward understanding the computations and representations in domain-specific cortical regions at both perceptual and post-perceptual levels, my research interests span the following:
Studying human cognitive neuroscience through a systems lens, linking visual representations to abstract, higher-order functions like social and physical inference.
For instance, are representations computed in category-selective visual regions tailored for post-perceptual use by higher-order networks, such as the mentalizing and physics networks?
Experimentally characterizing inter-areal cortical transformations.
How are visual representations selectively preserved within and transformed across the ventral, dorsal, and lateral visual streams, or between functionally specialized regions?
Modeling sensory and cognitive functions with task-driven neural networks.
How might we leverage modern computer vision to instantiate these abilities—with or without neurobiological plausibility—as computationally precise hypotheses about the underlying neural substrates?
Representational alignment between cortical responses, behavior, and image-computable models.
Systematically comparing representational alignment with high-throughput testing, accounting for the roles of sensory experience (training diets), functional objectives (tasks), and inductive biases (architectures).
My research thus far in computer vision and olfaction has approached cognitive questions by bridging levels of explanation—studying invariant object representations across modalities, with ongoing work comparing them to neural and behavioral responses.
Publications
- New York Smells: A Large Multimodal Dataset for Olfaction
  Ege Ozguroglu, Junbang Liang, Ruoshi Liu, Mia Chiquier, Michael DeTienne, Wesley Wei Qian, Alexandra Horowitz, Andrew Owens, Carl Vondrick
  arXiv 2025
  arXiv / Paper / Project Page
- pix2gestalt: Amodal Completion by Synthesizing Wholes
  Ege Ozguroglu, Ruoshi Liu, Dídac Surís, Dian Chen, Achal Dave, Pavel Tokmakov, Carl Vondrick
  CVPR 2024 (Highlight)
  arXiv / Paper / Project Page / Code
- Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis
  Basile Van Hoorick, Rundi Wu, Ege Ozguroglu, Kyle Sargent, Ruoshi Liu, Pavel Tokmakov, Achal Dave, Changxi Zheng, Carl Vondrick
  ECCV 2024 (Oral)
  arXiv / Paper / Project Page / Code
- Real-World Visuomotor Policy Learning via Video Generation
  Junbang Liang*, Ruoshi Liu*, Ege Ozguroglu, Sruthi Sudhakar, Achal Dave, Pavel Tokmakov, Shuran Song, Carl Vondrick
  CoRL 2024
  arXiv / Paper / Project Page / Code
Teaching/Service
Teaching Assistant: Spring 2023
Head Teaching Assistant: Spring 2021, Summer B 2021, Fall 2021, Spring 2022, Spring 2023
Head Teaching Assistant: Summer A 2021, Fall 2021, Summer A 2022, Fall 2022