Postdoc in Context-aware Augmented Reality

The Duke University I3T Lab is looking for a postdoc in the area of edge computing-supported context-aware augmented reality. The postdoc will work closely with PI Gorlatova and two very capable and driven ECE and CS PhD students, and will have the opportunity to participate in the activities of the Athena NSF AI Institute for Edge Computing Leveraging Next-Generation Networks.

The position is best suited to a candidate with experience applying ML in mobile systems or pervasive sensing contexts, such as real-time human activity recognition, real-time edge video analytics, or ML-based QoS or QoE prediction. Exposure to augmented and virtual reality (e.g., Unity or Unreal, mobile AR SDKs, AR or VR QoS and QoE, V- and VI-SLAM) is advantageous but not required.

If you are interested in this position, please e-mail maria.gorlatova@duke.edu with your up-to-date CV, transcripts, and two papers that you believe represent your best work.

This position can start in September 2023, January 2024, or May 2024.

The lab’s previous postdoctoral associate, Dr. Guohao Lan, has secured an independent Assistant Professor position at a top university.

Duke University is an exceptionally selective private Ivy Plus school, with a beautiful Gothic campus located in the lively, progressive Raleigh-Durham area of North Carolina. It is a wonderful place to work and a wonderful place to live.

Duke VISION Magazine Profile

Sarah Eom’s research on augmented reality for retinal laser therapy is profiled in the Duke Eye Center’s VISION Magazine. This project, done in collaboration with Miroslav Pajic (Duke ECE) and Majda Hadziahmetovic (Duke Ophthalmology), not only demonstrates a new application of augmented reality, but also advances AR support for detail-oriented tasks more broadly.

Spring 2023 Student Successes

We have had an exciting end to the Spring 2023 semester, with many recognitions of the outstanding, cross-disciplinary achievements of the group’s students. Neha Vutakuri‘s year-long engagement with the lab culminated in a successful undergraduate dissertation defense and graduation with distinction in Neuroscience. Seijung Kim, who has been with us for almost two years and has made important and unique contributions to two different projects, graduated with distinction in Biomedical Engineering and received the Howard G. Clark Award for outstanding undergraduate research. Ritvik Janamsetty, a Pratt Fellow in the ECE Department, received the 3rd-place Independent Study Best Poster Award at the ECE undergraduate research showcase. Lin Duan received the 2nd-place Best Poster Award at the Athena NSF AI Institute annual showcase. Heartfelt congratulations!

Appearing at IEEE INFOCOM’23: Edge-Assisted Adaptive SLAM with Resource Constraints

AdaptSLAM, our recent work led by Ying Chen, explores new approaches for adapting edge computing-supported Visual and Visual-Inertial Simultaneous Localization and Mapping (V- and VI-SLAM) to computation and communication resource constraints.

AdaptSLAM’s system architecture. Our design centers on the two highlighted modules. We optimize our algorithms to run in real time on mobile devices.

SenSys 2022 Demo: Through an AR Lens

At ACM SenSys’22 in Boston, MA, Sarah Eom demonstrated our ongoing work on AR-based magnification for hand-held loupes. This work is a collaboration with Miroslav Pajic (Duke ECE) and Majda Hadziahmetovic (Duke Ophthalmology). [Demo description PDF] [Accompanying poster PDF]

ACM UbiComp 2022 Best Poster Award

A poster we presented at ACM UbiComp 2022 in Cambridge, UK, titled “IoT-Enabled Environment Illuminance Optimization for Augmented Reality”, received the ACM UbiComp’22 Best Poster Award. [Poster submission] [Poster presented at the conference]

This poster outlines a system that uses a smart lightbulb, an edge server, and environmental sensors (a camera and a light sensor) to adjust the ambient light level so as to maximize the performance of two key components of augmented reality: pose tracking and eye tracking. To our knowledge, this is the first automatic environment optimization system for augmented reality that adapts to both environment lighting and textures.
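
For readers who want a concrete picture, here is a minimal sketch of the closed-loop idea: adjust the bulb, measure how well tracking performs, and keep the best setting. This is not the system’s released code; the function names, the brightness sweep, and the toy quality model are illustrative assumptions only.

```python
# A minimal, hypothetical sketch of closed-loop illuminance optimization for AR:
# adjust a smart bulb, measure tracking performance, keep the best setting.
# All names and the toy quality model below are placeholders, not the actual system.

def set_bulb_brightness(level: int) -> None:
    """Placeholder: command the IoT smart lightbulb (e.g., over the local network)."""
    print(f"bulb brightness -> {level}%")

def tracking_quality(level: int) -> float:
    """Placeholder standing in for measured pose- and eye-tracking performance
    at the current illuminance; here a toy curve that peaks at 60% brightness."""
    return -abs(level - 60)

def optimize_illuminance(levels=range(0, 101, 10)) -> int:
    """Sweep the bulb's brightness levels and keep the one with the best tracking
    quality. The real system also accounts for camera-observed scene textures."""
    best_level, best_score = None, float("-inf")
    for level in levels:
        set_bulb_brightness(level)
        score = tracking_quality(level)
        if score > best_score:
            best_level, best_score = level, score
    set_bulb_brightness(best_level)
    return best_level

if __name__ == "__main__":
    optimize_illuminance()
```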

Two Papers Appearing at IEEE ISMAR’22

We are delighted to have two of the lab’s papers appear at the top AR/MR conference, the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2022 (acceptance rate: 21%).

In a paper led by PhD student Sarah Eom, titled “NeuroLens: Augmented Reality (AR)-based Contextual Guidance through Surgical Tool Tracking in Neurosurgery”, we developed a system that provides neurosurgeons with real-time guidance on how to approach a target located inside a patient’s skull. The system, developed in collaboration with Dr. Shervin Rahimpour, was evaluated in a study with 33 medical students, who conducted both AR-guided and unassisted (freehand) trials of catheter insertion into a phantom model of a human head. The study demonstrated that our system significantly improves the students’ targeting accuracy, and it also revealed important differences in the behavior of participants who achieved different levels of results in the AR-assisted setting. More than 93% of the participants agreed or strongly agreed that the system is useful for learning to conduct neurosurgical procedures. [ Paper PDF ]

AI Institute I3T Lab Showcase

At the Athena AI Institute‘s 1st Annual Summit, held at Duke University in August 2022, four members of the lab presented posters and showcased the lab’s work during the I3T Lab Tour.

The projects showcased during the summit covered different elements of next-generation edge computing-supported augmented reality (AR): edge-supported resource-efficient SLAM for AR, edge-coordinated IoT for improving the performance of AR, context-aware AR for neurosurgery, and robust object detection for AR.

Summer REU Presentation: Real-time Object Detection for AR-aided Language Learning

Jeremy Suh, a Grand Challenges in Engineering NSF REU Fellow at the Pratt School of Engineering who spent his summer in the I3T Lab, presented a poster and a demo of his work at the Duke University summer REU showcase.

Jeremy with his mentors, Tim Scargill and Lin Duan.

Jeremy’s research centered on showing how edge computing-supported object detection can be used within augmented reality (AR) applications for foreign language learning. The framework Jeremy created will serve as a foundation for a range of semantically aware AR applications under development in the lab.
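
As a rough illustration of how such a pipeline fits together (not Jeremy’s actual framework), the sketch below shows a client offloading a camera frame to an edge object detector and pairing each detected label with a translation for the AR overlay; the endpoint URL, response schema, and tiny lexicon are assumptions.

```python
# A hypothetical client-side sketch of edge-assisted object detection for AR-aided
# language learning. The endpoint, response schema, and toy Spanish lexicon are
# illustrative assumptions, not the interface of the framework described above.
import requests

EDGE_DETECTOR_URL = "http://edge-server.local:8000/detect"  # hypothetical endpoint
TRANSLATIONS = {"cup": "la taza", "book": "el libro", "chair": "la silla"}  # toy lexicon

def annotate_frame(jpeg_bytes: bytes) -> list:
    """Offload one camera frame to the edge detector and build AR label records."""
    resp = requests.post(EDGE_DETECTOR_URL, files={"frame": jpeg_bytes}, timeout=1.0)
    resp.raise_for_status()
    labels = []
    for det in resp.json()["detections"]:       # assumed schema: {"label": ..., "box": ...}
        word = det["label"]
        labels.append({
            "box": det["box"],                  # where the AR app anchors the label
            "text": f"{word} / {TRANSLATIONS.get(word, '?')}",
        })
    return labels
```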

IEEE INFOCOM 2022 Paper and Demo

This week Ying Chen is presenting our work on evaluating and exploiting characteristics of user pose in VR at IEEE INFOCOM’22, in a paper and an accompanying demonstration:

VR Viewport Pose Model for Quantifying and Exploiting Frame Correlations presents the first statistical model of viewport pose in VR and develops the first analytically grounded algorithm for determining which contents should be reused across frames. [PDF] [Code and data]

Demo: Pixel Similarity-Based Content Reuse in Edge-Assisted Virtual Reality showcases how adaptive cross-frame content reuse reduces bandwidth requirements in edge computing-supported VR. [Demo abstract PDF] [Video of the demo]
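
To give a flavor of the reuse decision the demo illustrates, here is a minimal sketch: compare the frame the client would otherwise request against the previously received one, and skip the edge fetch when they are sufficiently similar. The similarity metric and the fixed threshold are simplifications, not the paper’s analytically grounded policy.

```python
# A simplified sketch of pixel-similarity-based content reuse. The metric and the
# fixed threshold are illustrative stand-ins for the paper's analytical policy.
import numpy as np

def pixel_similarity(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Fraction of pixels whose intensity difference is small (a toy metric)."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return float(np.mean(diff < 10))

def should_reuse(cached_frame: np.ndarray, new_frame: np.ndarray,
                 threshold: float = 0.95) -> bool:
    """Reuse cached content, saving bandwidth, when the frames are highly similar."""
    return pixel_similarity(cached_frame, new_frame) >= threshold
```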
