Apple is coming to Durham

Delightful news today: Apple is planning to build a brand new campus here in Durham, spending $1 billion over 10 years and creating 3,000 highly skilled jobs, particularly in AI, ML, and software engineering.

Exciting news for the region as a whole and for the tech scene in particular, especially following Google’s announcement, only six weeks ago, that it will create a cloud computing hub with over 1,000 jobs here in Durham as well. Very exciting news for me as a computer systems faculty member. So many opportunities for discussions and collaborations. So many new invited speakers and seminar attendees. So many local opportunities for student internships and full-time positions. Good news all around.

2021 US NAE Frontiers of Engineering Symposium

It was a true pleasure to attend the 2021 National Academy of Engineering Frontiers of Engineering Symposium (NAE FOE).

The NAE FOE is unique in bringing together engineers from all engineering disciplines. It was a blast: a rare opportunity to step back, learn about challenges in a wide variety of engineering areas, and think about how one’s own work fits into the broader profession of engineering. I kept thinking about the Iron Ring on my finger, and back to the ceremony, the Ritual of the Calling of an Engineer, at which my graduating University of Ottawa class received these rings. I am thrilled that my professional journey has taken me from that ceremony to a symposium dedicated to the frontiers of the profession.

The Iron Ring is a reminder of the professional commitment of the engineer.

Fall 2020: 7 BS projects on mobile AR, VR, cognitive context sensing, and gaming engine-based algorithm training

Seven ECE and CS independent undergraduate research projects were completed in the I^3T lab in Fall 2020. The projects are summarized below. This work is supported in part by NSF grants CSR-1903136 and CNS-1908051, and by the Lord Foundation of North Carolina.

Evaluating Object Detection Models through Photo-Realistic Synthetic Scenes in Game Engines
Achintya Kumar and Brianna Butler

We build an automatic pipeline to evaluate object recognition algorithms in photo-realistic 3D scenes generated in game engines, namely Unity (with the High Definition Render Pipeline) and Unreal. Specifically, we test detection accuracy and intersection over union (IoU) under different lighting conditions, reflection and transparency levels, camera and object rotations, blurring, occlusions, and object textures. The pipeline collects a large-scale dataset under these conditions without manual capturing, by controlling the corresponding parameters in the game engines. For example, we control illumination by changing the lux values and types of the light sources; we control reflection and transparency levels through custom render pipelines; and we control textures by assembling a texture library and randomly assigning each object a texture from it. Another important component of the pipeline is per-pixel ground truth generation, where the RGB value of each ground truth pixel encodes the object ID of the corresponding pixel in the rendered image. With this ground truth, detection accuracy and IoU are obtained without manual labeling.
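
As a concrete illustration of the last step, here is a minimal sketch of how per-pixel ground truth can replace manual labeling, assuming the RGB-encoded mask has already been decoded into a single-channel object-ID array; the names and conventions here are hypothetical, not our actual pipeline:

```python
# Minimal sketch: score a detector against per-pixel ground truth exported
# from a game engine. Assumes (hypothetically) a single-channel mask where
# each pixel stores an object ID and 0 means background.
import numpy as np

def boxes_from_id_mask(gt_mask):
    """Map each object ID in the mask to its (x_min, y_min, x_max, y_max) box."""
    boxes = {}
    for obj_id in np.unique(gt_mask):
        if obj_id == 0:  # background, by convention in this sketch
            continue
        ys, xs = np.nonzero(gt_mask == obj_id)
        boxes[int(obj_id)] = (xs.min(), ys.min(), xs.max(), ys.max())
    return boxes

def iou(a, b):
    """Intersection over union of two (x_min, y_min, x_max, y_max) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1 + 1) * max(0, iy2 - iy1 + 1)
    area = lambda r: (r[2] - r[0] + 1) * (r[3] - r[1] + 1)
    return inter / float(area(a) + area(b) - inter)

# Toy usage: a 6x6 frame containing one object (ID 7) and one detection box.
gt_mask = np.zeros((6, 6), dtype=int)
gt_mask[1:4, 2:5] = 7                       # object occupies rows 1-3, cols 2-4
detection = (1, 1, 3, 3)                    # detector's box for the same object
print(iou(boxes_from_id_mask(gt_mask)[7], detection))  # 0.5
```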

ACM SenSys’20 paper: Gaze-based Cognitive Context Sensing

In our recent work, which appeared in ACM SenSys 2020, we take a close look at detecting cognitive context, the state of a person’s mind, through eye tracking.

Eye tracking is a fascinating human sensing modality. Eye movements are correlated with deep desires and personality traits; careful observers of one’s eyes can discern focus, expertise, and emotions. Moreover, many elements of eye movements are involuntary, more readily observed by an eye tracking algorithm than by the user herself.

The high-level motivation for this work is the wide availability of eye trackers in modern augmented and virtual reality (AR and VR) devices. For instance, both Magic Leap and HoloLens AR headsets are integrated with eye trackers, which have many potential uses including gaze-based user interfaces and gaze-adapted rendering. Traditional wearable human activity monitoring recognizes what the user does while she is moving around: running, jumping, walking up and down stairs. However, humans spend significant portions of their days engaging in cognitive tasks that are not associated with discernible large-scale body movements: reading, watching videos, browsing the Internet. The differences among these activities, indistinguishable to motion-based sensing, are readily picked up by eye trackers. Our work focuses on improving the recognition accuracy of eye movement-based cognitive context sensing, and on enabling its operation with few training instances (so-called few-shot learning), which is particularly important given the sensitive and personal nature of eye tracking data.
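
For a flavor of the few-shot setting, here is an illustrative nearest-centroid sketch; the feature choices and names are hypothetical, and this is not the GazeGraph model from the paper, which builds graph representations of visual behavior:

```python
# Illustrative few-shot sketch, not the paper's method: classify a gaze
# trace by nearest class centroid in a small hand-crafted feature space.
import numpy as np

def gaze_features(xy):
    """xy: (T, 2) array of normalized gaze positions over time."""
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # per-sample gaze movement
    return np.array([steps.mean(), steps.std(), steps.max(),
                     (steps < 0.01).mean()])             # crude 'fixation' ratio

def few_shot_classify(support, query):
    """support: {label: [trace, ...]}, a few traces per class (the 'shots')."""
    centroids = {label: np.mean([gaze_features(t) for t in traces], axis=0)
                 for label, traces in support.items()}
    q = gaze_features(query)
    return min(centroids, key=lambda label: np.linalg.norm(q - centroids[label]))

# Toy usage with synthetic traces: 'reading' moves the eyes around far more
# than 'watching'; three labeled examples per class suffice here.
rng = np.random.default_rng(0)
reading = [np.cumsum(rng.normal(0, 0.05, (100, 2)), axis=0) for _ in range(3)]
watching = [np.cumsum(rng.normal(0, 0.005, (100, 2)), axis=0) for _ in range(3)]
query = np.cumsum(rng.normal(0, 0.05, (100, 2)), axis=0)
print(few_shot_classify({'reading': reading, 'watching': watching}, query))  # 'reading'
```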

Our paper and additional information:

  • G. Lan, B. Heit, T. Scargill, M. Gorlatova, GazeGraph: Graph-based Few-Shot Cognitive Context Sensing from Human Visual Behavior, in Proc. ACM SenSys’20, Nov. 2020 (20.6% acceptance rate). [Paper PDF] [Dataset and codebase] [Video of the talk]

Finalist Team, Facebook Research Awards: Exploration of Trust in AR, VR, and Smart Devices

Jointly with the group of Prof. Neil Gong, we were finalists for the Facebook Research Exploration of Trust in AR, VR, and Smart Devices Awards.

PI Gorlatova previously contributed to the 2019 University of Washington Industry-Academia Summit on Mixed Reality Security, Privacy, and Safety, and to the associated Summit Report. Our group continues to explore several topics directly related to AR security, privacy, and safety, such as examining uncertainties in the spatial stability of holograms in a given environment, and training eye tracking-based cognitive context classifiers with privacy-preserving techniques that rely on limited user data.

IEEE/ACM IPSN Paper: Edge-assisted Collaborative Image Recognition for Mobile Augmented Reality

Our paper on image recognition for mobile augmented reality in the presence of image distortions appeared in IEEE/ACM IPSN’20 and received the conference’s Best Research Artifact Award. [Paper PDF] [Presentation slides] [Video of the presentation]

CollabAR: System Architecture

Edge-based Provisioning of Holographic Content for Augmented Reality

Our work on using edge computing to transmit holograms to augmented reality (AR) users has recently appeared in the IEEE SmartEdge Workshop, co-located with IEEE PerCom, as an invited paper. [Paper PDF] [Presentation slides] [Video of the presentation]

High-level architecture: edge computing supporting different users’ augmented reality (AR) experiences.


5 Undergraduate Independent Study Projects On Mobile Augmented Reality Completed in Fall 2019

Five independent undergraduate research projects were completed in the I^3T lab this semester. In these projects, students investigated different elements of mobile augmented reality (AR), including edge-based integration of AR with low-end IoT devices, user perception of different types of shadows, and mechanisms for multi-user coordination in mobile AR. Four projects are highlighted below.

This work is supported in part by NSF grants CSR-1903136 and CNS-1908051, and by the Lord Foundation of North Carolina.

Two Demos Showcased at ACM SenSys’19

Two demos developed in the lab were presented at ACM SenSys’19 in New York City, NY, in November 2019.

A demo led by Joseph DeChicchis, titled Adaptive AR Visual Output Security Using Reinforcement Learning Trained Policies, demonstrated how reinforcement learning-based policies for hologram positioning perform on Magic Leap One augmented reality headsets. The demo builds on the work on learning for hologram positioning in AR led by Surin Ahn, which had previously been evaluated via simulations alone. Joseph’s trip to present the demo at ACM SenSys was partially supported by an ACM SIGMOBILE Travel Grant and by a Duke University Undergraduate Research Office Travel Grant. [Video of the demo]

Related work:

  • S. Ahn, M. Gorlatova, P. Naghizadeh, M. Chiang, Personalized Augmented Reality Via Fog-based Imitation Learning, in Proc. IEEE Workshop on Fog Computing and the IoT, Apr. 2019 (co-located with IEEE CPS-IoT Week). [Paper PDF] [Imitation learning demo] [Extended version of the paper]
  • S. Ahn, M. Gorlatova, P. Naghizadeh, M. Chiang, P. Mittal, Adaptive Fog-based Output Security for Augmented Reality, in Proc. ACM SIGCOMM VR/AR Network Workshop, Budapest, Hungary, Aug. 2018. [Paper PDF]

Joseph demonstrating how a reinforcement learning-trained policy can move holograms out of the way of a stop sign.
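
For intuition about what the demo accomplishes, here is a toy sketch that hand-codes the placement objective; the actual demo executes a policy trained with reinforcement learning rather than this hard-coded search, and all names and coordinates here are illustrative:

```python
# Toy illustration of the demo's objective, not the trained RL policy:
# move a hologram's 2D screen-space box off a protected region (e.g., a
# detected stop sign) with the smallest possible displacement.
def overlaps(a, b):
    """Axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def reposition(hologram, protected, step=10, max_steps=20):
    """Try offsets of increasing magnitude; return the closest safe placement."""
    x1, y1, x2, y2 = hologram
    for k in range(max_steps + 1):
        candidates = []
        for dx, dy in [(k*step, 0), (-k*step, 0), (0, k*step), (0, -k*step)]:
            moved = (x1 + dx, y1 + dy, x2 + dx, y2 + dy)
            if not overlaps(moved, protected):
                candidates.append((abs(dx) + abs(dy), moved))
        if candidates:
            return min(candidates)[1]   # smallest displacement at this radius
    return hologram                      # give up; keep the original placement

stop_sign = (100, 100, 200, 200)         # protected region from an object detector
hologram = (150, 120, 250, 220)          # current hologram box: occludes the sign
print(reposition(hologram, stop_sign))   # shifted right: (200, 120, 300, 220)
```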

Additionally, a demo led by Jovan Stojkovic, Zida Liu, and Guohao Lan, titled Edge Assisted Collaborative Image Recognition for Augmented Reality, presented a dynamic approach to handling heterogeneous multi-user visual inputs for image recognition in augmented reality, demonstrated on an edge computing-assisted Google ARCore platform. [Video of the demo]
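
The core fusion idea can be sketched in a few lines; this is a deliberate over-simplification with hypothetical names, and the actual system handles image distortions and correlations among users' views far more carefully. The edge server combines per-user recognition results for the same scene by confidence-weighted voting, so a user with a blurry frame benefits from other users' clearer views:

```python
# Hypothetical simplification of the collaborative idea: an edge server
# fuses per-user recognition results by confidence-weighted voting.
from collections import defaultdict

def fuse_predictions(per_user_predictions):
    """per_user_predictions: list of (label, confidence) pairs, one per user."""
    scores = defaultdict(float)
    for label, confidence in per_user_predictions:
        scores[label] += confidence
    return max(scores, key=scores.get)

# Three users view the same object; one blurry frame yields a wrong label,
# but the fused result is still correct.
print(fuse_predictions([('mug', 0.9), ('mug', 0.7), ('bowl', 0.6)]))  # 'mug'
```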

Demo references:

[SenSys19a] J. DeChicchis, S. Ahn, M. Gorlatova, Demo: Adaptive Augmented Reality Visual Output Security Using Reinforcement Learning Trained Policies, in Proc. ACM Conference on Embedded Networked Sensor Systems (ACM SenSys’19), New York City, NY, Nov. 2019. [Demo abstract PDF] [Video of the demo]

[SenSys19b] J. Stojkovic, Z. Liu, G. Lan, C. Joe-Wong, M. Gorlatova, Demo: Edge-assisted Collaborative Image Recognition for Augmented Reality, in Proc. ACM Conference on Embedded Networked Sensor Systems (ACM SenSys’19), New York City, NY, Nov. 2019. [Demo abstract PDF] [Video of the demo]

Summer Research Experiences for Undergraduates Students Present Their Work on Edge Computing and Augmented Reality

Two summer Research Experiences for Undergraduates (REU) students presented their summer projects at the Duke University REU showcase.

Courtney Johnson presenting his summer research on precision eye tracking for Magic Leap performance and user experience optimization.

Jovan Stojkovic, a Duke ECE REU Fellow and rising senior at the University of Belgrade, presented his poster, titled Edge Computing Platform for Collaborative Augmented Reality. Over the summer, Jovan built a platform that allows related images from multiple users, captured with Android phones running Google ARCore, to be processed jointly on an edge server, improving the quality of object recognition for each user.

Courtney Johnson, a Grand Challenges in Engineering NSF REU Fellow at the Pratt School of Engineering and rising junior at North Carolina A&T State University, presented his poster, titled Intelligent Augmented Reality Adaptation to Users’ Eyes. Over the summer, Courtney explored a range of uses of precision eye tracking, available in modern augmented reality headsets, for optimizing the performance of augmented reality systems.

Both Jovan’s and Courtney’s research results will be integrated into paper submissions later this year.
