USC Viterbi Summer Undergraduate Research Experience (SURE)
Electrical & Computer Engineering
Project Titles
Click on each project title to find a description of the lab and any preferred background. Many of these labs are interdisciplinary, so please look through projects that are housed in other departments.
We will be adding more labs over the coming weeks, so please check back for any additional labs that have been posted.
Faculty: Quan Nguyen
Project Description: Design and Control of a Highly Dynamic Wheel-legged Robot. A video of the current result can be seen at: https://youtu.be/9vkS0IoGp0s
Related Background: Computer Science, Mechanical Engineering, Electrical Engineering
Prerequisites: Background in Design, Control, Robotics
Faculty: David Barnhart
Project Description: LEAPFROG is a standalone lunar lander prototype vehicle that uses an RC turbine engine and paintball tanks to fly repeated test flights under simulated lunar landing conditions.
Related Background: EE, MechE, CS, AE
Prerequisites: Some hands-on EE experience, GNC knowledge
Faculty: David Barnhart
Project Description: CLING-ERS is a genderless docking system that has been approved for testing aboard the ISS in the summer of 2022.
Related Background: EE, ME, AE, CS
Prerequisites: Some hands-on EE experience, RTOS programming (C or Python)
https://www.isi.edu/centers/serc/rendezvous_and_proximity_operations_rpo_research
Faculty: David Barnhart
Project Description: STARFISH is a unique robotic system designed to operate in space; it uses a completely soft, flexible material and distributed processing to operate much as a biological starfish does.
Related Background: EE, CS, AE with Controls background
Prerequisites: Some hands-on EE experience, RTOS programming (C or Python), 3D printing experience
https://www.isi.edu/centers/serc/rendezvous_and_proximity_operations_rpo_research
Faculty: Ellis Meng
Project Description: Implantable medical microdevices enable exciting possibilities such as communicating with neurons and monitoring disease conditions inside the body. Many of these devices rely on specialized thin-film polymers to electrically insulate them from fluids and tissues. However, the harsh environment inside the body can affect this insulation and cause the devices to fail. This project will involve making samples of specialized polymers in a microfabrication cleanroom, subjecting the samples to a simulated in vivo environment, and measuring their insulating qualities over time. The findings will tell researchers and engineers how they can best make implantable medical microdevices so that performance inside the body is reliable and long-lasting.
Prerequisites: Enrolled in an engineering, applied physics, physics, or related undergraduate program
Faculty: Heather Culbertson
Project Description: This project focuses on the design, building, and control of haptic devices for virtual reality. Current VR systems lack any touch feedback, providing only visual and auditory information to the user. However, touch is a critical component for our interactions with the physical world and with other people. This research will investigate how we use our sense of touch to communicate with the physical world and use this knowledge to design haptic devices and rendering systems that allow users to interact with and communicate through the virtual world. To accomplish this, the project will integrate electronics, mechanical design, programming, and human perception to build and program a device to display artificial touch sensations to a user with the goal of creating a natural and realistic interaction.
Related Background: Background in computer science, electrical engineering, mechanical engineering, or related majors. Experience with circuits and mechanical design a plus, but not required.
Prerequisites: Programming experience (C++ preferred)
Faculty: Chia Hsu
Project Description: We are developing computational imaging methods that can reconstruct volumetric 3D images even inside an opaque scattering medium that typically cannot be seen through. The student will take part in exploring different reconstruction algorithms, using experimental data measured in our lab and data computed from numerical simulations.
Related Background: Able to code and debug in MATLAB. Knowledge of wave equations.
Prerequisites: Familiar with MATLAB
Faculty: Peter Beerel
Project Description: The goal of this project is to develop energy-efficient multi-object detection and tracking models for autonomous driving applications. In particular, we are exploring downsampling/compression techniques that can aggressively reduce the size of the activation maps early in the object detection/tracking network (e.g., Faster R-CNN, YOLOvx), such as dynamic neural networks, early exits, etc.
Prerequisites: The student should be familiar with ML frameworks such as PyTorch/TensorFlow/MXNet; some background in object detection is preferred.
Faculty: Andreas Molisch
Project Description: Deep neural networks have achieved outstanding success on many tasks in a supervised learning setting with enough labeled data. Yet, current AI systems are limited in understanding the world around us, as shown by their limited ability to transfer and generalize between tasks. The goal of the project is to investigate machine learning-based wireless data augmentation and its possible application to challenging city-level localization, or to investigate how one can learn the optimal representation of wireless data from a reasonable set of assumptions, as well as the experimental design of performing interventions (i.e., data interpolations) and acquiring labeled data efficiently. Work will entail designing, implementing, and evaluating machine learning-based, fingerprinting-based outdoor localization in a wide-band wireless communication system under challenging conditions, i.e., non-line-of-sight (NLOS) radio propagation. Particular emphasis will be placed on time-series data augmentation and related data-efficient machine learning algorithms: one/few-shot learning, semi-supervised domain adaptation, etc.
Related Background: EITHER a good wireless communication background OR experience with common deep learning frameworks/software engineering skills, i.e., AT LEAST one of the following: 1. Wireless communication basics: wireless signal propagation mechanisms; statistical descri…
Prerequisites: No formal prerequisites, but see "Related Background" for required skills.
Faculty: Chia Hsu
Project Description: Maxwell's equations describe phenomena over the full electromagnetic spectrum from visible light to radio waves. Numerous problems, such as optical computing, metasurface design, inverse-scattering imaging, and stealth aircraft design, require computing the scattered wave given a very large number of distinct incident waves. However, existing Maxwell solvers scale poorly, either in computing time or in memory, with the number of input states of interest. The student will take part in our development of a new class of Maxwell solvers that can readily handle millions of distinct input states with orders-of-magnitude speed-up versus existing solvers.
Related Background: Programming experience. Familiarity with the differential form of Maxwell's equations (or wave equations in general).
Prerequisites: Programming; Maxwell's equations
Faculty: Chia Hsu
Project Description: We are developing imaging methods that can reconstruct volumetric 3D images even inside an opaque scattering medium that typically cannot be seen through. The student will take part in building the experimental setup, data acquisition, instrument automation and calibration, and sample preparation.
Related Background: Optics experiment experience
Prerequisites: Optics experiment experience
Faculty: Somil Bansal
Project Description: Machine learning-driven vision and perception components form a core part of the navigation and autonomy stacks for modern robotic systems. On the one hand, they enable robots to make intelligent decisions in cluttered and a priori unknown environments based on what they see. On the other hand, the lack of reliable tools to analyze the failures of learning-based vision models makes it challenging to integrate them into safety-critical robotic systems, such as autonomous cars and aerial vehicles. We propose a robust control-based safety monitor for visual navigation and mobility in unknown environments. Our key insight is that rather than directly reasoning about the accuracy of the individual vision components and their effect on robot safety, we can design a safety monitor for the overall system. This monitor detects safety-critical failures in the overall navigation stack (e.g., due to a vision component itself or its interaction with the downstream components) and provides safe corrective action if necessary. The latter is more tractable because the safety analysis of the overall system can be performed in the state-space of the system, which is generally much lower dimensional than the high-dimensional raw sensory observations. A key characteristic of our framework is that since the robot is operating in an unknown environment, the safety monitor itself is updated online as new observations are obtained. Preliminary results on simulated and real robots demonstrate that our framework can ensure robot safety in various environments despite vision component errors. Beyond ensuring robot safety, we also propose using our framework to mine critical failures at scale and improve robot perception over time.
Related Background: Experience and background (if any) in control, and/or robotics. Experience (if any) with MATLAB/Python, ROS, and/or working with real hardware. Note that experience is not necessary, but knowing your background can help with finding a fit.
Faculty: Somil Bansal
Project Description: Autonomous robot navigation is a fundamental and well-studied problem in robotics. However, developing a fully autonomous robot that can navigate in a priori unknown environments is difficult due to challenges that span dynamics modeling, on-board perception, localization and mapping, trajectory generation, and optimal control. Classical approaches, such as the generation of a real-time globally consistent geometric map of the environment, are computationally expensive and confounded by texture-less, transparent, or shiny objects, or strong ambient lighting. End-to-end learning can avoid map building, but is sample-inefficient. Furthermore, end-to-end models tend to be system-specific. In this project, we will explore modular architectures for operating autonomous systems in completely novel environments using the onboard perception sensors. These architectures use machine learning for high-level planning based on the perceptual information; this high-level plan is then used for low-level planning and control by leveraging classical control-theoretic approaches. This modular approach combines the best of both worlds: autonomous systems learn navigation cues without extensive geometric information, making the model relatively lightweight, and the inclusion of the physical system structure in learning reduces sample complexity relative to pure learning approaches. Our preliminary results indicate a 10x improvement in sample complexity for wheeled ground robots. Our hypothesis is that this gap will only increase further as the system dynamics become more complex, such as for an aerial or a legged robot, opening up new avenues for learning navigation policies in robotics.
Related Background: Experience and background (if any) in ML, control, and/or robotics. Experience (if any) with MATLAB/Python, training deep networks, ROS, and/or working with real hardware. Note that experience is not strictly necessary.