ACM SenSys’20 paper: Gaze-based Cognitive Context Sensing

In our recent work, which appeared in ACM SenSys 2020, we take a close look at detecting cognitive context, the state of a person's mind, by tracking users' eye movements.

Eye tracking is a fascinating human sensing modality. Eye movements are correlated with deep desires and personality traits; careful observers of one's eyes can discern focus, expertise, and emotions. Moreover, many elements of eye movements are involuntary, more readily observed by an eye tracking algorithm than by the user herself.

The high-level motivation for this work is the wide availability of eye trackers in modern augmented and virtual reality (AR and VR) devices. For instance, both the Magic Leap and HoloLens AR headsets integrate eye trackers, which have many potential uses, including gaze-based user interfaces and gaze-adaptive rendering.

Traditional wearable human activity monitoring recognizes what the user does while she is moving around: running, jumping, walking up and down stairs. However, humans spend significant portions of their days engaged in cognitive tasks that involve no discernible large-scale body movements: reading, watching videos, browsing the Internet. The differences between these activities, indistinguishable to motion-based sensing, are readily picked up by eye trackers. Our work focuses on improving the recognition accuracy of eye movement-based cognitive context sensing, and on enabling its operation with only a few training instances (so-called few-shot learning), which is especially important given the sensitive and personal nature of eye tracking data.
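
As a rough illustration of the kind of representation the paper's title alludes to (a minimal sketch with hypothetical names, not the paper's actual implementation), a gaze trajectory can be cast as a graph whose nodes are gaze samples and whose edges connect spatially nearby samples, preserving the scan-path structure that distinguishes, say, reading from video watching:

import numpy as np

def gaze_to_graph(gaze, k=4):
    """Build a k-nearest-neighbor graph over 2D gaze samples.

    gaze: (N, 2) array of normalized (x, y) gaze coordinates, in
    temporal order. Returns an (N, N) adjacency matrix whose edge
    weights are inverse distances, so the graph captures both the
    spatial layout and the scan path of the trajectory.
    """
    n = len(gaze)
    # Pairwise Euclidean distances between all gaze samples.
    dists = np.linalg.norm(gaze[:, None, :] - gaze[None, :, :], axis=-1)
    adj = np.zeros((n, n))
    for i in range(n):
        # Connect each sample to its k nearest neighbors (skipping itself).
        neighbors = np.argsort(dists[i])[1:k + 1]
        adj[i, neighbors] = 1.0 / (dists[i, neighbors] + 1e-6)
    return adj

# Example: a graph over 100 gaze samples from a hypothetical recording.
graph = gaze_to_graph(np.random.rand(100, 2))

Graphs of this kind can then be fed to a graph-based classifier; in the few-shot regime, that classifier must generalize to a new user or activity from just a handful of labeled examples.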

Our paper and additional information:

  • G. Lan, B. Heit, T. Scargill, M. Gorlatova, GazeGraph: Graph-based Few-Shot Cognitive Context Sensing from Human Visual Behavior, in Proc. ACM SenSys’20, Nov. 2020 (20.6% acceptance rate). [Paper PDF] [Dataset and codebase] [Video of the talk]

Finalist Team, Facebook Research Awards: Exploration of Trust in AR, VR, and Smart Devices

Jointly with the group of Prof. Neil Gong, we were finalists for the Facebook Research Exploration of Trust in AR, VR, and Smart Devices research awards.

PI Gorlatova previously contributed to the 2019 University of Washington Industry-Academia Summit on Mixed Reality Security, Privacy, and Safety, and to the associated Summit Report. Our group continues to explore several topics directly related to AR security, privacy, and safety, such as examining the uncertainties in the spatial stability of holograms in a given environment, and training eye tracking-based cognitive context classifiers with privacy-preserving techniques that rely on limited user data.


IEEE/ACM IPSN Paper: Edge-assisted Collaborative Image Recognition for Mobile Augmented Reality

Our paper on image recognition for mobile augmented reality in the presence of image distortions appeared in IEEE/ACM IPSN’20 and received the conference’s Best Research Artifact Award. [Paper PDF] [Presentation slides] [Video of the presentation]

CollabAR: System Architecture


Edge-based Provisioning of Holographic Content for Augmented Reality

Our work on using edge computing to transmit holograms to augmented reality (AR) users has recently appeared in the IEEE SmartEdge Workshop, co-located with IEEE PerCom, as an invited paper. [Paper PDF] [Presentation slides] [Video of the presentation]

High-level architecture: edge computing supporting different users’ augmented reality (AR) experiences.



Five Undergraduate Independent Study Projects on Mobile Augmented Reality Completed in Fall 2019

Five independent undergraduate research projects were completed in the I^3T lab this semester. In these projects, students investigated different elements of mobile augmented reality (AR), including edge-based integration of AR with low-end IoT devices, user perception of different types of shadows, and mechanisms for multi-user coordination in mobile AR. Four of the projects are highlighted below.

This work is supported in part by NSF grants CSR-1903136 and CNS-1908051, and by the Lord Foundation of North Carolina.


Two Demos Showcased at ACM SenSys’19

Two demos developed in the lab were presented at ACM SenSys’19 in New York City, NY, in November 2019.

A demo led by Joseph DeChicchis, titled Adaptive AR Visual Output Security Using Reinforcement Learning Trained Policies, demonstrates how reinforcement learning-based policies for hologram positioning perform on Magic Leap One augmented reality headsets; a simplified sketch of the interface such a policy exposes is given after the figure below. This demo builds on the work on learning for hologram positioning in AR led by Surin Ahn, which had previously been evaluated in simulation alone. Joseph’s trip to present this demo at ACM SenSys was partially supported by an ACM SIGMOBILE Travel Grant and by a Duke University Undergraduate Research Office Travel Grant. [Video of the demo] Related work:

  • S. Ahn, M. Gorlatova, P. Naghizadeh, M. Chiang, Personalized Augmented Reality Via Fog-based Imitation Learning, in Proc. IEEE Workshop on Fog Computing and the IoT, Apr. 2019 (co-located with IEEE CPS-IoT Week). [Paper PDF] [Imitation learning demo] [Extended version of the paper]
  • S. Ahn, M. Gorlatova, P. Naghizadeh, M. Chiang, P. Mittal, Adaptive Fog-based Output Security for Augmented Reality, in Proc. ACM SIGCOMM VR/AR Network Workshop, Budapest, Hungary, Aug. 2018. [Paper PDF]

Joseph demonstrating how a reinforcement learning-trained policy can move holograms out of the way of a stop sign.
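
For intuition, here is a toy sketch of the interface such a positioning policy exposes (all names are hypothetical, and the simple geometric rule stands in for the learned policy; this is not the demo's code): given the hologram's position and a detected safety-critical object, the policy returns a position that keeps the object visible.

import numpy as np

def reposition_hologram(hologram_xy, object_xy, min_separation=0.5):
    """Stand-in for a learned policy: move the hologram out of the way.

    hologram_xy, object_xy: 2D positions in the scene (e.g., meters).
    If the hologram is within min_separation of the safety-critical
    object, push it radially outward just past that separation.
    """
    offset = hologram_xy - object_xy
    dist = np.linalg.norm(offset)
    if dist >= min_separation:
        return hologram_xy  # object not occluded; leave the hologram alone
    # Degenerate case: hologram exactly on the object; pick a fixed direction.
    direction = offset / dist if dist > 1e-6 else np.array([1.0, 0.0])
    return object_xy + direction * min_separation

# A hologram 0.1 m from a detected stop sign is pushed 0.5 m away.
new_xy = reposition_hologram(np.array([0.1, 0.0]), np.array([0.0, 0.0]))

In the demo itself, this mapping from scene state to hologram placement is produced by a reinforcement learning-trained policy rather than a fixed rule.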

Additionally, a demo led by Jovan Stojkovic, Zida Liu, and Guohao Lan, titled Edge-assisted Collaborative Image Recognition for Augmented Reality, presents a dynamic approach for handling heterogeneous multi-user visual inputs for image recognition in augmented reality, demonstrated on an edge computing-assisted Google ARCore platform; a minimal sketch of the underlying fusion idea follows below. [Video of the demo]
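
As a rough sketch of the underlying idea (hypothetical names, not the demo's implementation): the edge server can fuse per-user classification scores for a jointly observed object, weighting each user's contribution by an estimate of that user's image quality, so that a sharp view from one user compensates for a distorted view from another.

import numpy as np

def fuse_predictions(per_user_scores, per_user_weights):
    """Weighted fusion of class-score vectors from multiple AR users.

    per_user_scores: list of (C,) softmax score arrays, one per user.
    per_user_weights: per-user scalars (e.g., image-quality estimates),
    so heavily distorted inputs contribute less to the joint decision.
    Returns the fused label and the fused score vector.
    """
    fused = np.average(per_user_scores, axis=0, weights=per_user_weights)
    return int(np.argmax(fused)), fused

# Two users observe the same object; user 2's image is distorted,
# so it receives a lower weight in the fused decision.
label, scores = fuse_predictions(
    [np.array([0.2, 0.7, 0.1]), np.array([0.5, 0.3, 0.2])],
    [0.9, 0.3],
)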

Demo references:

[SenSys19a] J. DeChicchis, S. Ahn, M. Gorlatova, Demo: Adaptive Augmented Reality Visual Output Security Using Reinforcement Learning Trained Policies, in Proc. ACM Conference on Embedded Networked Sensor Systems (ACM SenSys’19), New York City, NY, Nov. 2019. [Demo abstract PDF] [Video of the demo]

[SenSys19b] J. Stojkovic, Z. Liu, G. Lan, C. Joe-Wong, M. Gorlatova, Demo: Edge-assisted Collaborative Image Recognition for Augmented Reality, in Proc. ACM Conference on Embedded Networked Sensor Systems (ACM SenSys’19), New York City, NY, Nov. 2019. [Demo abstract PDF] [Video of the demo]


Summer Research Experiences for Undergraduates (REU) Students Presenting Their Work on Edge Computing and Augmented Reality

Two summer Research Experiences for Undergraduates (REU) students presented their summer projects at the Duke University REU showcase.

Courtney Johnson presenting his summer research on precision eye tracking for Magic Leap performance and user experience optimization.

Jovan Stojkovic, Duke ECE REU Fellow and a rising senior at the University of Belgrade, presented his poster titled Edge Computing Platform for Collaborative Augmented Reality. Over the summer, Jovan built a platform that allows multiple users’ related images, captured with Android phones running Google ARCore, to be processed jointly on an edge server, improving the quality of object recognition for each user.

Courtney Johnson, Grand Challenges in Engineering NSF REU Fellow at the Pratt School of Engineering and a rising junior at North Carolina A&T State University, presented his poster titled Intelligent Augmented Reality Adaptation to Users’ Eyes. Over the summer, Courtney explored a range of uses of the precision eye tracking available in modern augmented reality headsets for optimizing the performance of augmented reality systems.

Both Jovan’s and Courtney’s research results will be integrated into paper submissions later this year.


2019 NCWIT Seed Fund Award

Prof. Gorlatova is part of a Duke University team that received a 2019 National Center for Women & Information Technology (NCWIT) Seed Fund award to support further engagement of undergraduate women in systems and networking research. The award was presented at the 2019 NCWIT Summit in Nashville, TN. [More information about the award]


Undergraduate Students Presenting Their Work on Next-generation Augmented Reality at Duke University Undergraduate Research Showcases

Three undergraduate students presented the results of their independent study projects at CS and ECE poster sessions and demonstrations at Duke University.

Michael Glushakov presented a poster and a demo of his work on edge computing-supported augmented reality with Google ARCore, demonstrating how edge computing can be used to enable persistent and personalized augmented reality experiences, and showcasing a portal that allows people without a coding background to create personalized AR experiences. Joseph DeChicchis presented a poster and a demo of his work on using reinforcement learning to teach holograms to move out of the way of real-world objects; his demonstration showcased this capability on Magic Leap One devices. Madeline Wilkinson presented a poster on her work on using eye tracking to personalize user experiences in augmented reality on Magic Leap One devices.

Michael Glushakov presenting his work on edge-enhanced ARCore experiences.


NSF Computer Systems Research Grant: Multi-tier Service Architecture in IoT-Edge-Cloud Paradigms

Yale University Prof. Wenjun Hu and Duke University Prof. Maria Gorlatova received an NSF Computer Systems Research (CSR) Small Collaborative grant to examine the joint optimization of multiple concurrent applications in multi-tier edge/fog computing architectures. [Award Information]
