Prof. Jia Deng has been awarded a Google Faculty Research Award for his work in computer vision and machine learning.
His project addresses single-view 3D perception, a fundamental problem in computer vision. A major difficulty of computer vision stems from the fact that the 3D world is projected to a 2D image. Recovering 3D scene geometry is critical for high-level scene understanding, including the understanding of objects, actions, affordances, and intuitive physics.
A key obstacle is the lack of training data for unconstrained images. Deng and his collaborators plan to collect qualitative 3D annotations from humans and develop corresponding learning algorithms, with the goal of making monocular 3D perception work in the wild. "Images in the wild" refers to images taken without constraints on cameras, scenes, or objects.
Prof. Roya Ensafi received the award for her work in security and privacy.
Research in censorship measurement has recently made major advances through the introduction of techniques for remotely and rapidly measuring whether distant clients and servers can communicate without interference. Prof. Ensafi helped introduce these techniques, and her research agenda aims to apply them to monitor censorship continuously and globally. Her Censored Planet initiative (censoredplanet.com) will establish a global censorship observatory, which will publish ongoing data feeds about what sites are being blocked in countries around the world. With tens of thousands of vantage points, this approach will provide an unprecedented level of detail and worldwide coverage.
Prof. Manos Kapritsos received the award for his work in distributed systems.
His proposed research will rethink the design and implementation of replicated services in modern large-scale environments—like the cloud—where services are no longer standalone, but are components of larger systems and frequently interact with other services.
The aims of his project are both foundational and practical. On the foundational front, Kapritsos and his collaborators aim to debunk the long-standing assumption that replication protocols are implemented solely within the client-server model and to consider the implications this has for how we design replicated services. On the practical side, they aim to ensure that, as the world moves increasingly towards large-scale deployments, fault-tolerant replication will continue to be relevant, giving practitioners a useful tool to provide high availability without giving up on consistency.