Stephan Zheng

stephan [at]
CV Github Google Scholar

Hi! I'm a PhD candidate in the Machine Learning group at Caltech, advised by Professor Yisong Yue. My research currently focuses on deep reinforcement learning. More broadly, I'm interested in developing artificial intelligence and its useful applications.

I'm fortunate to have interned with Google Research (working with Yang Song, Thomas Leung and Ian Goodfellow) and Google Brain (working with Andrew Dai and Samy Bengio) over two summers.

Previously, I studied mathematics and theoretical physics, focusing on superstring theory and geometric structures in supersymmetric quantum field theory. I wrote my master's thesis under the guidance of Professor Robbert Dijkgraaf and Professor Stefan Vandoren.

Before I came to sunny California, I received my BSc and MSc from Utrecht University in the wonderful Netherlands and an MASt from the University of Cambridge, and I was a visiting student at Harvard University.

  • July 2017: Talks at Beijing University, Shanghai Jiaotong University, Zhejiang University and Didi Chuxing!
  • June 2017: Connecting The Dots paper accepted.
  • June 2017: ICML workshop paper accepted.
  • October 2016: NIPS paper accepted!
  • Summer 2016: interning at Google Brain!
  • Jan 2016: CVPR paper accepted!

Machine Learning Conference Papers

Multi-resolution Learning of Interpretable Spatial Models via Dynamic Gradient Control.
S. Zheng, Y. Yue, in submission


How do you quickly learn interpretable tensor models from high-resolution spatiotemporal data? We show how to do this using multi-resolution learning, with examples on predicting when basketball players shoot and how fruit flies behave in pairs.

Structured Exploration via Deep Hierarchical Coordination.
S. Zheng, Y. Yue, in submission


We demonstrate a structured exploration approach to multi-agent reinforcement learning, in which agents learn to coordinate with each other.

HEP.TrkX project: DNNs for HL-LHC online and offline tracking.
Steven Farrell, Dustin Anderson, Paolo Calafiura, Giuseppe Cerati, Lindsey Gray, Jim Kowalkowski, Mayur Mudigonda, Prabhat, Panagiotis Spentzouris, Maria Spiropulu, Aristeidis Tsaris, Jean-Roch Vlimant, S. Zheng, Connecting The Dots / Intelligent Trackers 2017

[PDF] [Link]

We evaluate modern neural networks on the task of predicting particle tracks, including estimating uncertainty margins on the predictions.

Generating Long-term Trajectories Using Deep Hierarchical Networks.
S. Zheng, Y. Yue, P. Lucey, Neural Information Processing Systems (NIPS) 2016

[PDF] [Data]

How can models learn to move like professional basketball players? By training hierarchical policy networks with imitation learning on human demonstrations! Our paper shows that predicting a macro-goal significantly improves trajectory generation and fools professional sports analysts.

Improving the Robustness of Deep Neural Networks via Stability Training.
S. Zheng, Y. Song, I. Goodfellow, T. Leung, IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016


Neural network predictions can be unstable and make errors even when only small noise (such as JPEG compression noise) is present in the input. In this paper, we present a simple stochastic data augmentation technique that makes neural networks more robust to input perturbations.
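The core idea can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's implementation: the network, the noise level `sigma`, and the weight `alpha` are all stand-in assumptions. A perturbed copy of each input is created, and a stability term penalizes the divergence between the model's outputs on the clean and perturbed inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def network(x, w):
    """Toy stand-in for a deep model: softmax over a linear map."""
    z = x @ w
    e = np.exp(z - z.max())
    return e / e.sum()

def stability_training_loss(x, w, sigma=0.04, alpha=0.1):
    """Task loss plus a stability term (assumed form for illustration).

    The task loss here is an arbitrary placeholder (NLL of class 0);
    the stability term is KL(f(x) || f(x')) with x' = x + Gaussian noise.
    """
    p_clean = network(x, w)
    x_noisy = x + sigma * rng.standard_normal(x.shape)  # small perturbation
    p_noisy = network(x_noisy, w)
    task = -np.log(p_clean[0])
    stability = np.sum(p_clean * np.log(p_clean / p_noisy))  # KL divergence
    return task + alpha * stability

w = rng.standard_normal((5, 3))
x = rng.standard_normal(5)
loss = stability_training_loss(x, w)
```

Minimizing a loss of this shape encourages the network to map a clean input and its slightly perturbed copies to nearby outputs, which is what makes the predictions stable under small input noise.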

Machine Learning Workshop Papers

Learning Chaotic Dynamics with Tensor Recurrent Neural Networks.
R. Yu*, S. Zheng*, Time-series workshop, ICML 2017
(* equal contribution)

[PDF (coming)]

Long-term forecasting is a central problem in the sciences. We show that tensor-RNNs forecast substantially better than currently popular recurrent neural networks.

Learning Long-term Planning in Basketball Using Hierarchical Memory Networks.
S. Zheng, Y. Yue, Large-scale sports analytics workshop, KDD 2016


Learning realistic movement policies for basketball-playing agents is hard. We show that hierarchical policies, which dynamically predict macro-goals, generate much more realistic behavior than 'flat' policies.

Scalable Training of Interpretable Spatial Latent Factor Models.
S. Zheng, Y. Yue, Workshop on Non-convex Optimization for Machine Learning: Theory and Practice at NIPS 2015


Spatiotemporal tensor models can scale poorly with increasing resolution. We propose a novel adaptive optimization scheme to learn flexible multi-resolution tensor models.

Exotic path integrals and dualities.
S. Zheng


My (rather bulky) master's thesis! I discuss a novel class of dualities that relates quantum field theories living on adjacent geometries (such as boundary-bulk) to each other. This duality relies on a complex version of Morse theory, which relates the geometry of curved manifolds to the critical point spectrum of their Morse functions. I show that this duality applies analogously in topological string theory, relating different classes of topological branes to each other.

Screening of Heterogeneous Surfaces: Charge Renormalization of Janus Particles.
N. Boon, E. C. Gallardo, S. Zheng, E. Eggen, M. Dijkstra, R. van Roij, J. Phys.: Condens. Matter 22 (2010) 104104


Janus particles are synthetic particles that are actively investigated for applications such as targeted drug delivery in cancer treatment. In this paper, we study the electrostatic screening effects of such particles in electrolytes by numerically solving the non-linear Poisson-Boltzmann equation.