Two Papers Appearing at Workshops Co-Located with IEEE VR 2022

Two papers from the group are set to be presented at workshops co-located with IEEE VR 2022. Both papers are centered on end applications of next-generation augmented reality (AR) techniques we are currently developing in our lab. Continue reading

Posted in Achievement, AR-assisted surgery, Augmented reality, Augmented Reality For Good, Edge computing, Internet of Things, Mobile computing, Technology, Virtual reality, Wearable computing | Leave a comment

Fall 2021: 9 BS Independent Studies on Next-Generation AR and VR

Nine ECE and CS independent undergraduate research projects on next-generation AR and VR were completed in the lab this semester. This work is supported in part by NSF grants CSR-1903136, CNS-1908051, and CAREER-2046072, and by an IBM Faculty Award.

AR-assisted Neurosurgery
Seijung Kim


Experimental setup (left), and the view of holograms overlaid in AR (right)
Seijung worked on improving the AR-assisted surgery platform we have been developing. The current platform operates with two separate world coordinate systems, one from OptiTrack and one from the HoloLens 2, and thus requires calibration to convert between them. She designed and built a 3D-printed cube with an ARToolKit marker to successfully calibrate the two world coordinate systems. She then worked on enhancing the robustness of optical marker tracking for both stationary and mobile objects. Unlike a stationary phantom model, a surgical tool moves during surgery, and its tracking suffers from false positives and optical marker noise. Seijung explored various designs of 3D-printed mounts for a surgical tool to determine the optimal configuration in terms of the number of markers, the distance between markers, and the direction of marker attachment. She will continue working on integrating more contextual guidance into the system and on achieving the robustness and accuracy required for our user studies in clinical settings.
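Calibrating between two tracking systems of this kind amounts to estimating a rigid transform from matched 3D points, such as marker-cube corners observed by both systems. The sketch below illustrates that step with the standard Kabsch/SVD method; the point sets and ground-truth pose are hypothetical stand-ins for real OptiTrack and HoloLens 2 measurements, not the lab's actual calibration code.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t with dst ~ R @ src + t
    (least-squares, Kabsch/SVD). src, dst: matched (N, 3) point sets."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Hypothetical matched points: cube corners as seen by each tracker.
optitrack_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
theta = np.pi / 6                               # made-up ground-truth pose
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
t_true = np.array([0.2, -0.1, 0.5])
hololens_pts = optitrack_pts @ R_true.T + t_true

R, t = rigid_transform(optitrack_pts, hololens_pts)
```

With noise-free correspondences the estimated transform recovers the simulated pose exactly; with real marker detections it gives the least-squares fit, so residuals can serve as a calibration-quality check.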

Continue reading

Posted in AR-assisted surgery, Augmented reality, Duke University, Edge computing, Indoor environments, Internet of Things, Research, Undergraduate research, Virtual reality | Leave a comment

I3T Lab Hiring PhD Students and Postdocs

The Duke University I^3T Lab has multiple openings for PhD students and postdocs.

We work in the area of pervasive mobile and sensing systems broadly, and pervasive mobile Augmented Reality (AR) and next-generation intelligence for the Internet of Things in particular. Our work is generously supported by the National Science Foundation (NSF), the Lord Foundation of North Carolina, IBM, Facebook, and DARPA. We are also a part of the recently established NSF AI Institute for Edge Computing Leveraging Next Generation Networks, where our work is focused on building next-generation AI-powered mobile augmented reality.

If you are interested in joining the lab as a PhD student, please e-mail professor Gorlatova (maria.gorlatova /at/ duke.edu) your CV, transcripts, and a brief note about your research interests. We have openings for both CS and ECE PhD students. Strong candidates for PhD studies generally have an undergraduate GPA above 3.6/4 and have experience either conducting research or developing advanced technical solutions outside of classroom settings (e.g., in independent studies, internships, employment, or extracurricular projects). We have openings for PhD students with start dates in August 2022.

If you are interested in joining the lab as a postdoc, please e-mail professor Gorlatova (maria.gorlatova /at/ duke.edu) your CV, a brief note about your research interests, and 1-3 papers that you believe represent your best work to date. Candidates for postdoctoral positions need to have a track record of publishing their work in top venues of the field, and need to be self-driven and independent. The postdoctoral positions’ start dates are flexible, and can be as early as January 2022 and as late as August 2022. The lab’s previous postdoctoral affiliate Dr. Guohao Lan has successfully secured an independent Assistant Professor position at a top university.

Additional information:

Posted in Augmented reality, Duke University, Edge computing, Graduate school, Hiring, Internet of Things, Mobile computing, Research, Students | Leave a comment

Summer 2021: 3 REU Projects on Next-generation Mobile AR

This summer we were fortunate to be able to virtually host 3 Research Experiences for Undergraduates (REU) students in the I^3T Lab, through the Duke University REU Site for Meeting Grand Challenges in Engineering. The research the students were engaged in is supported in part by NSF grants CSR-1903136, CNS-1908051, and CAREER-2046072, and by an IBM Faculty Award. Continue reading

Posted in AR-assisted surgery, Augmented reality, Duke University, Gaming engines, Indoor environments, Internet of Things, Mobile computing, Technology, Undergraduate research | Leave a comment

Apple is coming to Durham

Delightful news today: Apple is planning to build a brand-new campus here in Durham, spending $1 billion over 10 years and creating 3,000 highly skilled jobs, in particular in AI, ML, and software engineering.

Exciting news for the region as a whole, and for the tech scene in particular, especially following Google’s announcement, only 6 weeks ago, of creating a cloud computing hub with over 1,000 jobs here in Durham as well. Very exciting news for me as a computer systems faculty member. So many opportunities for discussions and collaborations. So many new invited speakers and seminar attendees. So many local opportunities for student internships and full-time positions. Good news all around.

Posted in Duke University, Durham NC, Exciting! News and updates, In local news, Technology | Comments Off on Apple is coming to Durham

2021 US NAE Frontiers of Engineering Symposium

It was a true pleasure to attend the 2021 National Academy of Engineering Frontiers of Engineering Symposium (NAE FOE).

The NAE FOE is unique in bringing together engineers from all engineering disciplines. It was a blast. A rare opportunity to step back and learn about challenges in a wide variety of engineering areas, and to think about how one’s own work fits with the broader view of engineering as a profession. I kept thinking about the Iron Ring on my finger, and back to the ceremony, the Ritual of the Calling of an Engineer, where my graduating University of Ottawa class received these rings. I am thrilled that my professional journey has taken me from that ceremony to a Symposium dedicated to the frontiers of the profession.

The Iron Ring is a reminder of the professional commitment of the engineer.

Posted in Achievement, Career, Events, Technology | Comments Off on 2021 US NAE Frontiers of Engineering Symposium

Fall 2020: 7 BS projects on mobile AR, VR, cognitive context sensing, and gaming engine-based algorithm training

Seven ECE and CS independent undergraduate research projects were completed in the I^3T lab over the Fall of 2020. The projects are summarized below. This work is supported in part by NSF grants CSR-1903136 and CNS-1908051, an IBM Faculty Award, and the Lord Foundation of North Carolina.

Evaluating Object Detection Models through Photo-Realistic Synthetic Scenes in Game Engines
Achintya Kumar and Brianna Butler

We build an automatic pipeline to evaluate object recognition algorithms within photo-realistic 3D scenes generated in game engines, i.e., Unity (with the High Definition Render Pipeline) and Unreal. Specifically, we test the detection accuracy and intersection over union (IoU) under different lighting conditions, reflection and transparency levels, camera or object rotations, blurring, occlusions, and object textures. In the automatic pipeline, we collect a large-scale dataset under these conditions without manual capturing, by controlling multiple parameters in the game engines. For example, we control illumination by changing the lux values and types of the light sources; we control reflection and transparency levels using custom render pipelines; and we control texture by collecting a texture library and then randomly choosing each object’s texture from it. Another important component of the automatic pipeline is per-pixel ground truth generation, where the RGB value of each pixel in the ground truth image indicates the corresponding object ID. With this ground truth generation, the detection accuracy and IoU are obtained without manual labeling.
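As a minimal illustration of the evaluation step, the sketch below derives a ground-truth bounding box from a per-pixel label image and scores a detected box by IoU. The function names, the single-channel label image (standing in for RGB-encoded object IDs), and the toy detection are hypothetical, not the pipeline's actual code.

```python
import numpy as np

def box_from_mask(label_img, object_id):
    """Ground-truth box (x0, y0, x1, y1) enclosing the pixels that carry
    object_id in a per-pixel label image."""
    ys, xs = np.nonzero(label_img == object_id)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def iou(a, b):
    """Intersection over union of two inclusive (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0 + 1) * max(0, iy1 - iy0 + 1)
    area = lambda r: (r[2] - r[0] + 1) * (r[3] - r[1] + 1)
    return inter / (area(a) + area(b) - inter)

# Toy example: a 10x10 label image where object ID 7 occupies rows 2-5, cols 3-6.
label_img = np.zeros((10, 10), int)
label_img[2:6, 3:7] = 7
gt_box = box_from_mask(label_img, 7)      # (3, 2, 6, 5)
score = iou(gt_box, (4, 2, 7, 5))         # vs. a hypothetical detection: 0.6
```

Because the ground truth is generated per pixel, the same label image also supports segmentation-level metrics without any manual annotation.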

Continue reading

Posted in Augmented reality, Duke University, Gaming engines, Research, Technology, Undergraduate research, Virtual reality | Comments Off on Fall 2020: 7 BS projects on mobile AR, VR, cognitive context sensing, and gaming engine-based algorithm training

ACM SenSys’20 paper: Gaze-based Cognitive Context Sensing

In our recent work that appeared in ACM SenSys 2020, we take a close look at the detection of cognitive context, the state of a person’s mind, through eye tracking.

Eye tracking is a fascinating human sensing modality. Eye movements are correlated with deep desires and personality traits; careful observers of one’s eyes can discern focus, expertise, and emotions. Moreover, many elements of eye movements are involuntary, more readily observed by an eye tracking algorithm than by the user herself.

The high-level motivation for this work is the wide availability of eye trackers in modern augmented and virtual reality (AR and VR) devices. For instance, both Magic Leap and HoloLens AR headsets are integrated with eye trackers, which have many potential uses including gaze-based user interfaces and gaze-adapted rendering. Traditional wearable human activity monitoring recognizes what the user does while she is moving around – running, jumping, walking up and down stairs. However, humans spend significant portions of their days engaging in cognitive tasks that are not associated with discernible large-scale body movements: reading, watching videos, browsing the Internet. The differences between these activities, indistinguishable for motion-based sensing, are readily picked up by eye trackers. Our work focuses on improving the recognition accuracy of eye movement-based cognitive context sensing, and on enabling its operation with few training instances — the so-called few-shot learning, which is particularly important due to the sensitive and personal nature of eye tracking data.
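GazeGraph itself builds graph representations of visual behavior; as a generic illustration of the few-shot classification idea only, the sketch below labels queries by their nearest class prototype computed from a handful of labeled examples. The 2-D feature vectors, class labels, and function names are hypothetical, not the paper's method.

```python
import numpy as np

def class_prototypes(support_x, support_y):
    """Mean feature vector ('prototype') per class from a few labeled shots."""
    return {c: support_x[support_y == c].mean(axis=0)
            for c in np.unique(support_y)}

def classify(protos, queries):
    """Label each query with the class of its nearest prototype (Euclidean)."""
    classes = sorted(protos)
    dists = np.stack([np.linalg.norm(queries - protos[c], axis=1)
                      for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Hypothetical gaze features: two shots each for 'reading' (0) and 'video' (1).
support_x = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.0, 4.8]])
support_y = np.array([0, 0, 1, 1])
protos = class_prototypes(support_x, support_y)
preds = classify(protos, np.array([[0.1, 0.1], [4.9, 5.0]]))
```

Averaging a few shots into a prototype, rather than training a full classifier, is what keeps the amount of required per-user training data small.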

Our paper and additional information:

  • G. Lan, B. Heit, T. Scargill, M. Gorlatova, GazeGraph: Graph-based Few-Shot Cognitive Context Sensing from Human Visual Behavior, in Proc. ACM SenSys’20, Nov. 2020 (20.6% acceptance rate). [Paper PDF] [Dataset and codebase] [Video of the talk]
Posted in Augmented reality, Being human, Conferences, Exciting! News and updates, Publications, Research, Undergraduate research, Wearable computing | Comments Off on ACM SenSys’20 paper: Gaze-based Cognitive Context Sensing

Finalist Team, Facebook Research Awards: Exploration of Trust in AR, VR, and Smart Devices

Jointly with the group of Prof. Neil Gong, we were finalists for the Facebook Research Exploration of Trust in AR, VR, and Smart Devices Awards.

PI Gorlatova has previously contributed to the 2019 University of Washington Industry-Academia Summit on Mixed Reality Security, Privacy, and Safety, and the associated Summit Report. Our group continues to explore several topics directly related to AR security, privacy, and safety, such as examining the uncertainties in spatial stability of holograms in a given environment and training of eye tracking-based cognitive context classifiers with privacy-preserving techniques relying on limited user data.

Posted in Augmented reality, Industry impact, Research, Safety, Security | Comments Off on Finalist Team, Facebook Research Awards: Exploration of Trust in AR, VR, and Smart Devices

IEEE/ACM IPSN Paper: Edge-assisted Collaborative Image Recognition for Mobile Augmented Reality

Our paper on image recognition for mobile augmented reality in the presence of image distortions appeared in IEEE/ACM IPSN’20 and received the conference’s Best Research Artifact Award. [Paper PDF] [Presentation slides] [Video of the presentation]

CollabAR: System Architecture
Continue reading

Posted in Achievement, Augmented reality, Awards, Edge computing, Mobile computing, Publications | Comments Off on IEEE/ACM IPSN Paper: Edge-assisted Collaborative Image Recognition for Mobile Augmented Reality