Duke University I^3T Lab has multiple openings for PhD students and postdocs.
We work in the area of pervasive mobile and sensing systems broadly, and pervasive mobile Augmented Reality (AR) and next-generation intelligence for the Internet of Things in particular. Our work is generously supported by the National Science Foundation (NSF), the Lord Foundation of North Carolina, IBM, Facebook, and DARPA. We are also a part of the recently established NSF AI Institute for Edge Computing Leveraging Next Generation Networks, where our work is focused on building next-generation AI-powered mobile augmented reality.
If you are interested in joining the lab as a PhD student, please e-mail professor Gorlatova (maria.gorlatova /at/ duke.edu) your CV, transcripts, and a brief note about your research interests. We have openings for both CS and ECE PhD students. Strong candidates for PhD studies generally have an undergraduate GPA above 3.6/4 and have experience either conducting research or developing advanced technical solutions outside of classroom settings (e.g., in independent studies, internships, employment, or extracurricular projects). We have openings for PhD students with start dates in August 2022.
If you are interested in joining the lab as a postdoc, please e-mail professor Gorlatova (maria.gorlatova /at/ duke.edu) your CV, a brief note about your research interests, and 1-3 papers that you believe represent your best work to date. Candidates for postdoctoral positions need to have a track record of publishing their work in top venues of the field, and need to be self-driven and independent. The postdoctoral positions’ start dates are flexible, and can be as early as January 2022 and as late as August 2022. The lab’s previous postdoctoral affiliate, Dr. Guohao Lan, has successfully secured an independent Assistant Professor position at a top university.
This summer we were fortunate to be able to virtually host 3 Research Experience for Undergraduates (REU) students in the I^3T Lab, through the Duke University REU Site for Meeting Grand Challenges in Engineering. The research the students were engaged in is supported in part by NSF grants CSR-1903136, CNS-1908051, and CAREER-2046072, and by an IBM Faculty Award.
Delightful news today: Apple is planning to build a brand new campus here in Durham, spending $1 billion over 10 years and creating 3,000 highly skilled jobs, particularly in AI, ML, and software engineering.
Exciting news for the region as a whole, and for the tech scene in particular, especially following Google’s announcement, only 6 weeks ago, of creating a cloud computing hub with over 1,000 jobs here in Durham as well. Very exciting news for me as a computer systems faculty member. So many opportunities for discussions and collaborations. So many new invited speakers and seminar attendees. So many local opportunities for student internships and full-time positions. Good news all around.
It was a true pleasure to attend the 2021 National Academy of Engineering Frontiers of Engineering Symposium (NAE FOE).
The NAE FOE is unique in bringing together engineers from all engineering disciplines. It was a blast. A rare opportunity to step back and learn about challenges in a wide variety of engineering areas, and to think about how one’s own work fits with the broader view of engineering as a profession. I kept thinking about the Iron Ring on my finger, and back to the ceremony, the Ritual of the Calling of an Engineer, where my graduating University of Ottawa class received these rings. I am thrilled that my professional journey has taken me from that ceremony to a Symposium dedicated to the frontiers of the profession.
The Iron Ring is a reminder of the professional commitment of the engineer.
7 ECE and CS independent undergraduate research projects have been completed in the I^3T lab over the Fall of 2020. The projects are summarized below. This work is supported in part by NSF grants CSR-1903136 and CNS-1908051, an IBM Faculty Award, and by the Lord Foundation of North Carolina.
Evaluating Object Detection Models through Photo-Realistic Synthetic Scenes in Game Engines
Achintya Kumar and Brianna Butler
We build an automatic pipeline to evaluate object recognition algorithms within generated photo-realistic 3D scenes in game engines, i.e., Unity (with the High Definition Render Pipeline) and Unreal. Specifically, we test the detection accuracy and intersection over union (IoU) under conditions of different lighting, reflection and transparency levels, camera or object rotations, blurring, occlusions, and object textures. In the automatic pipeline, we collect a large-scale dataset under these conditions without manual capturing, by controlling multiple parameters in the game engines. For example, we control illumination conditions by changing the lux values and types of the light sources; we control reflection and transparency levels by using custom render pipelines; and we control texture by collecting a texture library and then randomly choosing the object texture from it. Another important component of the automatic pipeline is generating per-pixel ground truth, where the RGB value of each pixel in the ground truth image indicates the corresponding object ID. With this ground truth generation, the detection accuracy and IoU are obtained without manual labeling.
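The per-pixel ground truth described above makes IoU computation a simple pixel-set comparison. As a minimal sketch of that step (the function names `gt_mask_from_ids` and `mask_iou` are illustrative, not part of the actual pipeline), one might compute the IoU between a detector's output mask and the engine-rendered ID image like this:

```python
import numpy as np

def gt_mask_from_ids(id_image, object_id):
    """Per-pixel ground truth: each pixel of the rendered ID image
    encodes the ID of the object it belongs to, so the ground-truth
    mask for one object is a simple equality test."""
    return id_image == object_id

def mask_iou(pred_mask, gt_mask):
    """Intersection over union between two binary masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union > 0 else 0.0

# Toy example: a 2x2 ID image where object 7 occupies the top row,
# and a detector that predicted only the top-left pixel.
id_image = np.array([[7, 7],
                     [0, 0]])
gt = gt_mask_from_ids(id_image, object_id=7)
pred = np.array([[True, False],
                 [False, False]])
iou = mask_iou(pred, gt)  # intersection = 1 pixel, union = 2 pixels
```

The same comparison, repeated over every rendered frame and object ID, yields accuracy and IoU statistics without any manual labeling.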
In our recent work that appeared in ACM SenSys 2020, we take a close look at the detection of cognitive context, the state of the person’s mind, through users’ eye tracking.
Eye tracking is a fascinating human sensing modality. Eye movements are correlated with deep desires and personality traits; careful observers of one’s eyes can discern focus, expertise, and emotions. Moreover, many elements of eye movements are involuntary, more readily observed by an eye tracking algorithm than the user herself.
The high-level motivation for this work is the wide availability of eye trackers in modern augmented and virtual reality (AR and VR) devices. For instance, both Magic Leap and HoloLens AR headsets are integrated with eye trackers, which have many potential uses including gaze-based user interfaces and gaze-adapted rendering. Traditional wearable human activity monitoring recognizes what the user does while she is moving around – running, jumping, walking up and down stairs. However, humans spend significant portions of their days engaging in different cognitive tasks that are not associated with discernible large-scale body movements: reading, watching videos, browsing the Internet. The differences in these activities, indistinguishable to motion-based sensing, are readily picked up by eye trackers. Our work focuses on improving the recognition accuracy of eye movement-based cognitive context sensing, and on enabling its operation with few training instances — the so-called few-shot learning, which is especially important due to the sensitive and personal nature of eye tracking data.
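To illustrate the few-shot learning idea in its simplest form (this is a generic nearest-centroid sketch, not the graph-based method GazeGraph actually uses), a classifier can be built from only a handful of labeled gaze-feature examples per cognitive activity by comparing each new sample to per-class centroids:

```python
import numpy as np

def nearest_centroid_few_shot(support_x, support_y, query_x):
    """Few-shot classification: compute one centroid per class from a
    small labeled 'support' set, then assign each query sample to the
    class whose centroid is nearest in feature space."""
    classes = np.unique(support_y)
    centroids = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes])
    # Pairwise distances: (num_queries, num_classes)
    dists = np.linalg.norm(
        query_x[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# Toy example with 2-D gaze features and two activities (0 = reading,
# 1 = video watching), two labeled examples each.
support_x = np.array([[0.0, 0.0], [0.0, 1.0],
                      [10.0, 10.0], [10.0, 11.0]])
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[0.0, 0.5], [10.0, 10.5]])
preds = nearest_centroid_few_shot(support_x, support_y, query_x)
```

In practice the features would be learned representations of gaze behavior rather than raw 2-D points, but the structure — a few labeled examples per class and a distance-based decision — is the same.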
Our paper and additional information:
- G. Lan, B. Heit, T. Scargill, M. Gorlatova, GazeGraph: Graph-based Few-Shot Cognitive Context Sensing from Human Visual Behavior, in Proc. ACM SenSys’20, Nov. 2020 (20.6% acceptance rate). [Paper PDF] [ Dataset and codebase ] [ Video of the talk ]
Jointly with the group of Prof. Neil Gong, we were finalists for the Facebook Research Exploration of Trust in AR, VR, and Smart Devices Awards.
PI Gorlatova previously contributed to the 2019 University of Washington Industry-Academia Summit on Mixed Reality Security, Privacy, and Safety, and the associated Summit Report. Our group continues to explore several topics directly related to AR security, privacy, and safety, such as examining the uncertainties in the spatial stability of holograms in a given environment and training eye tracking-based cognitive context classifiers with privacy-preserving techniques that rely on limited user data.
Our paper on image recognition for mobile augmented reality in the presence of image distortions appeared in IEEE/ACM IPSN’20 and received the conference’s Best Research Artifact Award [ Paper PDF ] [ Presentation slides ] [ Video of the presentation ]
CollabAR: System Architecture
Our work on using edge computing to transmit holograms to augmented reality (AR) users has recently appeared in the IEEE SmartEdge Workshop, co-located with IEEE PerCom, as an invited paper. [ Paper PDF ] [ Presentation slides ] [ Video of the presentation ]
High-level architecture: edge computing supporting different users’ augmented reality (AR) experiences.
5 independent undergraduate research projects have been completed in the I^3T lab this semester. In these projects students investigated different elements of mobile augmented reality (AR), including edge-based integration of AR with low-end IoT devices, user perception of different types of shadows, and mechanisms for multi-user coordination for mobile AR. 4 projects are highlighted below.
This work is supported in part by NSF grants CSR-1903136 and CNS-1908051, and by the Lord Foundation of North Carolina.