USC-Amazon Summer Undergraduate Research Experience (SURE)
Research Positions
Click on each project title to find a description of the lab and any preferred background. Many of these labs are interdisciplinary, so please look through all of them.
We will be adding more labs in the coming weeks, so please check back for additional postings.
Faculty: David Barnhart
Project Description: CLING-ERS is a genderless docking system that has been approved for testing aboard the ISS in summer 2022.
Related Background: EE, ME, AE, CS
Prerequisites: Some hands-on EE experience, RTOS programming (C or Python)
Faculty: David Barnhart
Project Description: STARFISH is a unique robotic system designed to operate in space; it uses completely soft, flexible materials and distributed processing to operate much as a biological starfish does.
Related Background: EE, CS, AE with Controls background
Prerequisites: Some hands-on EE experience, RTOS programming (C or Python), 3D printing experience
Faculty: Quan Nguyen
Project Description: Design and Control of a Highly Dynamic Wheel-legged Robot. A video of the current result can be seen at: https://youtu.be/9vkS0IoGp0s
Related Background: Computer Science, Mechanical Engineering, Electrical Engineering
Prerequisites: Background in Design, Control, Robotics
Faculty: David Barnhart
Project Description: LEAPFROG is a standalone lunar lander prototype vehicle that uses an RC turbine engine and paintball tanks to fly repeatedly for testing under simulated lunar landing conditions.
Related Background: EE, MechE, CS, AE
Prerequisites: Some hands-on EE experience, GNC knowledge
Faculty: Jesse Thomason
Project Description: An agent interpreting natural language instructions in a real or simulated world needs to identify the salient objects in the world to which that language refers. Many methods for benchmarks like ALFRED [https://askforalfred.com/] use off-the-shelf object detectors like Faster R-CNN, minimally fine-tuning them for objects in the 3D environment. We hypothesize that understanding what kind of object an agent is looking for can influence the accuracy of object detections. For example, a detector trained specifically to recognize "pickupable" objects is more likely to detect small objects like forks and spoons, one trained for "openable" objects to detect cabinet and drawer faces, and one trained for "closable" objects to detect cabinet and drawer interiors. The project will involve using action types as conditioning information for object detectors in a language-guided task completion benchmark (an illustrative code sketch of the conditioning idea follows this listing). The student will become familiar with the ALFRED benchmark and with one or more state-of-the-art models that tackle the challenge. We will aim to achieve a new state of the art by adding action-conditioned visual object recognition.
Related Background: Should be comfortable programming in python. Familiarity with pytorch would be a plus.
Prerequisites: CSCI 360 or CSCI 467
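Below is a minimal, hypothetical sketch of the conditioning idea, written as post-hoc class filtering on top of an off-the-shelf torchvision Faster R-CNN; the actual project would condition the detector itself (e.g., per-action heads or action features as input). The ACTION_TO_CLASSES mapping and label names are illustrative assumptions, not ALFRED's actual schema.

```python
# Hypothetical sketch: action-conditioned filtering on top of an off-the-shelf detector.
# ACTION_TO_CLASSES and the label names below are illustrative, not ALFRED's actual schema.
# Note: the pretrained weights are downloaded on first use.
import torch
import torchvision

ACTION_TO_CLASSES = {
    "PickupObject": {"fork", "spoon", "cup"},        # small, pickupable things
    "OpenObject":   {"cabinet", "drawer", "fridge"}, # things with openable faces
}

weights = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=weights).eval()
CLASS_NAMES = weights.meta["categories"]

def detect_for_action(image, action, score_thresh=0.5):
    """Run the detector, then keep only classes relevant to the current action."""
    with torch.no_grad():
        out = detector([image])[0]
    keep_names = ACTION_TO_CLASSES.get(action)
    results = []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        name = CLASS_NAMES[int(label)]
        if score >= score_thresh and (keep_names is None or name in keep_names):
            results.append((name, float(score), box.tolist()))
    return results

# Example: a random image tensor stands in for an egocentric frame.
frame = torch.rand(3, 300, 300)
print(detect_for_action(frame, "PickupObject"))
```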
Faculty: Chia Hsu
Project Description: We are developing computational imaging methods that can reconstruct volumetric 3D images even inside an opaque scattering medium that typically cannot be seen through. The student will take part in exploring different reconstruction algorithms, using experimental data measured in our lab and data computed from numerical simulations (a generic reconstruction sketch follows this listing).
Related Background: Able to code and to debug with MATLAB. Knowledge of wave equations.
Prerequisites: Familiar with MATLAB
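As a generic point of reference only (not the lab's algorithms, which target scattering media and are far more sophisticated), the toy sketch below reconstructs a synthetic volume, flattened to a vector, from linear measurements via Tikhonov-regularized least squares; the forward matrix A and noise level are made up. The lab works in MATLAB; Python is used here purely for illustration.

```python
# Generic illustration of linear inverse reconstruction (Tikhonov-regularized least
# squares). The forward model A is synthetic; real scattering-media reconstruction
# uses measured or simulated physics and more advanced algorithms.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_meas = 200, 400

A = rng.standard_normal((n_meas, n_voxels))              # stand-in forward model
x_true = np.zeros(n_voxels)
x_true[rng.choice(n_voxels, 10, replace=False)] = 1.0    # sparse "scatterers"
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)      # noisy measurements

lam = 1e-2                                               # regularization strength
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_voxels), A.T @ y)

print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```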
Faculty: Peter Beerel
Project Description: The goal of this project is to develop energy-efficient multi-object detection and tracking models for autonomous driving applications. In particular, we are exploring downsampling/compression techniques that can aggressively reduce the size of the activation maps early in the object detection/tracking network (e.g., Faster R-CNN, YOLOvx), such as dynamic neural networks, early exits, etc. An illustrative sketch of the early-downsampling and early-exit ideas follows this listing.
Prerequisites: The student should be familiar with ML frameworks such as PyTorch/TensorFlow/MXNet, and some background in object detection is preferred.
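The sketch below is a hypothetical illustration of two of the ideas named above: a backbone stem that aggressively downsamples activation maps early, plus a simple confidence-based early exit. The layer sizes, exit criterion, and classifier head are illustrative placeholders, not a full detection/tracking network.

```python
# Hypothetical sketch of the "shrink activations early" idea: aggressive early
# downsampling followed by a cheap early-exit head; heavier stages run only when
# the early exit is not confident. All sizes/thresholds are illustrative.
import torch
import torch.nn as nn

class EarlyDownsampleBackbone(nn.Module):
    def __init__(self, num_classes=10, exit_threshold=0.9):
        super().__init__()
        # Aggressive early downsampling: 1/8 spatial resolution after two layers.
        self.stem = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Cheap early-exit classifier on the downsampled features.
        self.early_exit = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(64, num_classes))
        # Heavier stages, used only when the early exit is unsure.
        self.stages = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(256, num_classes))
        self.exit_threshold = exit_threshold

    def forward(self, x):
        feats = self.stem(x)
        early_logits = self.early_exit(feats)
        conf = early_logits.softmax(dim=-1).max(dim=-1).values
        if bool((conf > self.exit_threshold).all()):   # confident: skip the heavy stages
            return early_logits
        return self.head(self.stages(feats))

model = EarlyDownsampleBackbone()
print(model(torch.rand(1, 3, 224, 224)).shape)         # torch.Size([1, 10])
```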
Faculty: Chia Hsu
Project Description: We are developing imaging methods that can reconstruct volumetric 3D images even inside an opaque scattering medium that typically cannot be seen through. The student will take part in building the experimental setup, data acquisition, instrument automation and calibration, and sample preparation.
Related Background: Optics experiment experience
Prerequisites: Optics experiment experience
Faculty: Somil Bansal
Project Description: Machine learning-driven vision and perception components form a core part of the navigation and autonomy stacks of modern robotic systems. On the one hand, they enable robots to make intelligent decisions in cluttered and a priori unknown environments based on what they see. On the other hand, the lack of reliable tools to analyze the failures of learning-based vision models makes it challenging to integrate them into safety-critical robotic systems, such as autonomous cars and aerial vehicles. We propose a robust control-based safety monitor for visual navigation and mobility in unknown environments. Our key insight is that rather than directly reasoning about the accuracy of the individual vision components and their effect on robot safety, we can design a safety monitor for the overall system. This monitor detects safety-critical failures in the overall navigation stack (e.g., due to a vision component itself or its interaction with the downstream components) and provides a safe corrective action if necessary. The latter is more tractable because the safety analysis of the overall system can be performed in the state space of the system, which is generally much lower dimensional than the high-dimensional raw sensory observations. A key characteristic of our framework is that, since the robot is operating in an unknown environment, the safety monitor itself is updated online as new observations are obtained. Preliminary results on simulated and real robots demonstrate that our framework can ensure robot safety in various environments despite vision component errors. Beyond ensuring robot safety, we also propose using our framework to mine critical failures at scale and improve robot perception over time. A stripped-down sketch of the monitor idea follows this listing.
Related Background: Experience and background (if any) in control and/or robotics. Experience (if any) with MATLAB/Python, ROS, and/or working with real hardware. Note that experience is not necessary, but knowing your background helps with finding a fit.
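The following is a stripped-down, hypothetical illustration of the monitor idea: a vision-based planner proposes an action, and a state-space safety check decides whether to pass it through or substitute a corrective action. The distance-based safety value here stands in for the reachability-based value function used in the actual framework, and all numbers are made up.

```python
# Toy safety monitor in state space: override the nominal (vision-based) action
# when the state is too close to the unsafe set. Obstacle list, margin, and the
# corrective rule are illustrative placeholders.
import numpy as np

obstacles = [np.array([2.0, 0.0])]         # obstacle positions discovered online
SAFETY_MARGIN = 0.5                         # minimum allowed clearance (meters)

def safety_value(state):
    """Smaller value = closer to the unsafe set (<= 0 means the monitor triggers)."""
    return min(np.linalg.norm(state[:2] - obs) for obs in obstacles) - SAFETY_MARGIN

def corrective_action(state):
    """Steer directly away from the nearest obstacle (placeholder for the optimal safe control)."""
    nearest = min(obstacles, key=lambda o: np.linalg.norm(state[:2] - o))
    away = state[:2] - nearest
    return away / (np.linalg.norm(away) + 1e-9)

def monitored_step(state, nominal_action):
    if safety_value(state) <= 0.0:          # unsafe: substitute the corrective action
        return corrective_action(state)
    return nominal_action                   # safe: let the vision-based plan through

state = np.array([1.6, 0.1])                # robot (x, y)
nominal = np.array([1.0, 0.0])              # planner says "keep going forward"
print(monitored_step(state, nominal))       # overridden: points away from the obstacle
```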
Faculty: Somil Bansal
Project Description: Autonomous robot navigation is a fundamental and well-studied problem in robotics. However, developing a fully autonomous robot that can navigate in a priori unknown environments is difficult due to challenges that span dynamics modeling, on-board perception, localization and mapping, trajectory generation, and optimal control. Classical approaches, such as generating a real-time, globally consistent geometric map of the environment, are computationally expensive and confounded by texture-less, transparent, or shiny objects, or strong ambient lighting. End-to-end learning can avoid map building, but it is sample inefficient, and end-to-end models tend to be system-specific. In this project, we will explore modular architectures for operating autonomous systems in completely novel environments using onboard perception sensors. These architectures use machine learning for high-level planning based on the perceptual information; this high-level plan is then used for low-level planning and control by leveraging classical control-theoretic approaches. This modular approach combines the best of both worlds: autonomous systems learn navigation cues without extensive geometric information, keeping the model relatively lightweight, and the inclusion of the physical system structure in learning reduces sample complexity relative to pure learning approaches. Our preliminary results indicate a 10x improvement in sample complexity for wheeled ground robots. Our hypothesis is that this gap will only grow as the system dynamics become more complex, such as for an aerial or a legged robot, opening up new avenues for learning navigation policies in robotics. A minimal sketch of the modular split follows this listing.
Related Background: Experience and background (if any) in ML, control, and/or robotics. Experience (if any) with MATLAB/Python, training deep networks, ROS, and/or working with real hardware. Note that experience is not strictly necessary.
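Below is a minimal sketch of the modular split under illustrative assumptions: a small, untrained CNN maps an onboard image to a short-horizon waypoint, and a classical feedback law steers a unicycle model toward it. The network architecture and controller gains are placeholders, not the project's actual models.

```python
# Hypothetical sketch of the modular architecture: learned high-level waypoint
# prediction from perception, followed by classical low-level control.
import numpy as np
import torch
import torch.nn as nn

class WaypointPredictor(nn.Module):
    """Placeholder CNN: egocentric image -> (x, y) waypoint in the robot frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),
        )

    def forward(self, image):
        return self.net(image)

def unicycle_controller(waypoint, k_v=0.5, k_w=1.5):
    """Classical low-level control: speed from distance, turn rate from bearing error."""
    x, y = float(waypoint[0]), float(waypoint[1])
    v = k_v * np.hypot(x, y)
    w = k_w * np.arctan2(y, x)
    return v, w

predictor = WaypointPredictor().eval()
with torch.no_grad():
    wp = predictor(torch.rand(1, 3, 224, 224))[0]   # random image stands in for the camera
print(unicycle_controller(wp))                      # (linear velocity, angular velocity)
```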
Faculty: Chia Hsu
Project Description: Maxwell's equations describe phenomena over the full electromagnetic spectrum, from visible light to radio waves. Numerous problems, such as optical computing, metasurface design, inverse-scattering imaging, and stealth aircraft design, require computing the scattered wave for a very large number of distinct incident waves. However, existing Maxwell solvers scale poorly, either in computing time or in memory, with the number of input states of interest. The student will take part in our development of a new class of Maxwell solvers that can readily handle millions of distinct input states with orders-of-magnitude speed-up over existing solvers. A generic sketch of the many-input-state setting follows this listing.
Related Background: Programming experience. Familiarity with the differential form of Maxwell's equations (or wave equations in general).
Prerequisites: Programming; Maxwell's equations
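For context, the sketch below shows one common way conventional solvers amortize cost over many inputs: build a (here 1D, finite-difference) wave operator, factorize it once, and reuse the factorization for many incident sources. This is the generic "sparse LU with many right-hand sides" baseline, not the group's new solvers; the grid, wavenumber, and sources are arbitrary.

```python
# Generic baseline: factorize a 1D finite-difference Helmholtz operator once,
# then solve for many distinct input states (point sources). Parameters are arbitrary.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, h, k = 2000, 1e-2, 2 * np.pi              # grid points, spacing, wavenumber
laplacian = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / h**2
A = (laplacian + k**2 * sp.eye(n)).tocsc()   # Helmholtz operator (Dirichlet boundaries)

sources = np.zeros((n, 100))                 # 100 distinct input states (point sources)
sources[np.linspace(100, n - 100, 100, dtype=int), np.arange(100)] = 1.0

lu = spla.splu(A)                            # factorize once...
fields = lu.solve(sources)                   # ...reuse for all right-hand sides
print(fields.shape)                          # (2000, 100)
```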
Faculty: Sze-Chuan Suen
Project Description: We have recently developed and validated a model to predict need for massive transfusion (MT) using modern machine learning (ML) methods. MT may be needed when an injured patient enters the trauma center, and MT protocols require blood for transfusion to be supplied from inventory. Accurate prediction of the need for MT may reduce delays and unnecessary inventory requests. We are now working on building a user-friendly online interface to help physicians use our ML method in emergency settings. This may require simplification of the model to fewer variables, as well as developing methods to increase usability of the interface (layout design, etc.).
Related Background: Experience with usability studies, design of websites/interfaces
Prerequisites: Experience working with data (machine learning experience welcomed)
Faculty: Mayank Kejriwal
Project Description: AI, Networks and Society is a multi-year collection of projects in our group that seeks to use various sources of data to learn more about society using data-driven tools and frameworks, including machine learning and AI. By drawing on an empirically rigorous methodology, we seek to study complex systems in domains such as elections, social media, health and finance. Individual projects are often published in top-tier journals and conferences, and some have received widespread press coverage, including in Popular Science and San Francisco Times.
Related Background: A course in probability & statistics is preferred, and some background in AI or machine learning (whether applied or through coursework) is preferred as well.
Prerequisites: Introductory programming courses, including data structures
Faculty: Meisam Razaviyayn
Project Description: The goal of this project is to train neural networks using measures of performance other than accuracy, in the presence of content shifts. This is particularly important in applications such as the classification of hateful or misinformation posts on social media platforms. In this application, the number of positive samples (posts containing misinformation) is small compared to the total number of samples. Hence, non-decomposable measures of performance, such as AUROC or accuracy at the top, are needed for auto-enforcement of integrity-related policies on social media platforms. However, these measures of performance are vulnerable to content shifts. Our goal is to develop scalable algorithms for training neural networks based on measures of performance related to accuracy at the top. Furthermore, the resulting model needs to be robust against content shifts, because the topics of misinformation on social media platforms change over time. A toy sketch of optimizing a non-decomposable metric follows this listing.
Related Background: Basic knowledge of machine learning and neural networks. Being familiar with PyTorch and TensorFlow.
Prerequisites: Machine Learning, PyTorch, TensorFlow
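As a toy illustration of optimizing a non-decomposable metric, the sketch below trains a small network with a pairwise logistic surrogate for AUROC (scores of positives should exceed scores of negatives) on synthetic, imbalanced data. The model, data, and hyperparameters are placeholders; the project's scalable, shift-robust algorithms for accuracy at the top go well beyond this.

```python
# Toy example: pairwise logistic surrogate for AUROC on imbalanced synthetic data.
import torch
import torch.nn as nn

def auc_surrogate_loss(scores, labels):
    """Pairwise logistic loss over all (positive, negative) pairs in the batch."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    if pos.numel() == 0 or neg.numel() == 0:
        return scores.sum() * 0.0                       # no pairs in this batch
    diffs = pos.unsqueeze(1) - neg.unsqueeze(0)         # every positive vs. every negative
    return torch.nn.functional.softplus(-diffs).mean()  # log(1 + exp(-(s_pos - s_neg)))

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(256, 20)                                # toy features
y = (torch.rand(256) < 0.05).long()                     # ~5% positives, mimicking class imbalance
for _ in range(100):
    opt.zero_grad()
    loss = auc_surrogate_loss(model(x).squeeze(-1), y)
    loss.backward()
    opt.step()
print(float(loss))
```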
Faculty Name: Peter Beerel
Faculty Department: Ming Hsieh Department of Electrical and Computer Engineering
Website: https://sites.usc.edu/eessc/research-areas/interdisciplinary-research/
Project Description: California wildfires in 2020 alone burned over 4 million acres, damaged or destroyed more than 10,000 buildings, and caused more than 30 fatalities. The destructive impact of these fires is projected to only worsen unless innovative solutions are researched, demonstrated, commercialized, and adopted. Our group’s vision is to build a collaborative team of researchers that leverages the massive advances in machine learning and drone technologies to build a network of drones for wildfire detection and fighting. The objective of the system is to automatically detect a wildfire within the first ~5 minutes of its ignition and extinguish it before it grows beyond ~0.5 acres in size, and to perform structure protection by detecting, tracking, and following embers.
Related Background: Background in ML
Prerequisite: Object detection and tracking experience desired.
Faculty Name: Andrea Armani
Faculty Department: The Mork Family Department of Chemical Engineering and Materials Science
Website: https://armani.usc.edu/
Project Description: The over-arching mission of the research group is to develop novel nonlinear materials and integrated optical devices that can be used in understanding disease progression and in quantum optics. As part of these efforts, we have numerous collaborations in tool and technology development to enable research and discovery across a wide range of fields. This work combines many topics, including organic and inorganic materials synthesis, nonlinear optics and integrated photonics, and cell/tissue biology. As a result of the multi-disciplinary nature of the research being pursued in the Armani group, undergraduate research projects are tailored to the undergraduate researcher’s interests within the general scope of the lab's research activities. Example projects being pursued in the Armani Lab by undergraduate researchers include the synthesis of nanoparticles and polymers and the fabrication of integrated optical devices. In developing a project, the student’s academic background, prior research experience, and areas of interest are balanced.
Related Background: Degree in STEM field
Prerequisite: none
Faculty Name: Vatsal Sharan
Faculty Department: Computer Science
Website: https://vatsalsharan.github.io/
Project Description: The student will explore foundational questions in machine learning, using a combination of systematic experiments and theoretical analysis. Both students interested in performing systematic experiments to tease out phenomena in practice, and those interested in using theoretical tools to prove novel guarantees, are welcome. The exact questions and their scope are broad, but the following are some potential options: 1. Understanding deep learning: One particular question of interest here is to understand why neural networks generalize despite having the capacity to overfit. We've been exploring what role the data itself has to play in this mystery, exploring connections to the amazing self-supervised learning capability of neural networks in the process. 2. Data augmentation and amplification: In some recent work ("Sample Amplification"), we showed that it is often possible to generate new samples from a distribution without even learning it. We will explore how this ties into various data augmentation techniques, and develop new frameworks to increase dataset size. 3. Fairness and robustness: We will explore how to train models that are robust in many ways, such as to changes in the data distribution, or that do well on minority sub-populations (and not just on average over the entire data). 4. Computational-statistical tradeoffs: This is a more theoretical direction, to understand when computational efficiency might be at odds with statistical requirements (the data needed to learn). Recent work has opened up much uncharted territory, particularly with respect to the role of memory in learning, which we will explore.
Related Background: Some basic understanding of machine learning would be very helpful for certain projects.
Prerequisite: Familiarity with probability, linear algebra, calculus, and analysis of algorithms.
Faculty Name: Feifei Qian
Faculty Department: Ming Hsieh Department of Electrical and Computer Engineering
Website: https://minghsiehece.usc.edu/directory/faculty/profile/?lname=Qian&fname=Feifei
Project Description: The selected candidate will work closely with our research group to support our mission in (1) understanding the mechanism of robot interaction with obstacles, and (2) creating innovative strategies for robots to take advantage of obstacle interactions to navigate in complex environments with minimal control effort. In this role, the candidate will perform the following tasks: program simple robot gaits; perform systematic experiments to test the performance of different gaits during obstacle negotiation; use MATLAB to perform simple analysis and create plots to communicate results; (optional) develop simple algorithms to adapt gaits across different environments.
Related Background: Sophomore or above, with a major in Electrical Engineering, Mechanical Engineering, Physics, or related areas; experience with SolidWorks, MATLAB, and C++ is desired; experience with robotics is a plus
Prerequisite: intro physics, basic programming, mechanical design
Faculty Name: Feifei Qian
Faculty Department: Ming Hsieh Department of Electrical and Computer Engineering
Website: https://sites.google.com/usc.edu/roboland
Project Description: The selected candidate will work closely with Dr. Feifei Qian's group to support our mission in (1) developing high-mobility legged robots with embodied sensing capabilities to help human scientists explore complex natural environments such as deserts, forests, and muddy terrains; and (2) enabling the robot to infer human exploration objectives and aid human experts in adapting sampling strategies in response to incoming information. In this role, the candidate will perform the following tasks: design and control multi-legged robots for sand and mud traversal; perform systematic experiments to characterize robot leg force-sensing capabilities and measure terrain reaction forces; use MATLAB to perform simple analysis and create plots to communicate results; develop simple algorithms for the robot to suggest sampling strategies to humans and receive feedback.
Related Background: mechanical engineering, electrical engineering, computer science, physics, or other related majors
Prerequisite: SolidWorks, programming, microcontroller-related experiences, basic physics