About

I am the Lisa Wissner-Slivka and Benjamin Slivka Assistant Professor in Computer Science at Northwestern University, where I direct the Sensing, Perception, Interactive Computing & Experiences (SPICE) Lab. I received my Ph.D. in Human-Computer Interaction from Carnegie Mellon University in 2023 and my B.Tech. in Computer Science in 2017. I am a recipient of the Forbes 30 Under 30, the MIT 35 Innovators Under 35 Asia Pacific, and the ACM SIGCHI Outstanding Dissertation Award. I was recently a Visiting Research Scientist at Google, leading efforts on Augmented Reality and Experiences, and have previously worked at Apple, Microsoft Research, Meta Reality Labs, and IBM Research.

My research group creates cutting-edge computing technologies that sense, track, and understand humans to augment their interactions and assist them in daily life. We tackle challenging research problems in high-impact application areas such as mobile health sensing, extended reality, embodied perception, and natural user interfaces. We leverage expertise in novel sensors and sensing techniques, embedded systems, multimodal learning, computer vision, and on-device machine learning to deploy and evaluate these technologies in real-world settings. Many of our projects have been open-sourced, deployed in the wild, licensed, and shipped as product features (including features used by over 10 million users), and they have influenced flagship products at leading companies such as Google and Apple.

Prospective students: I am hiring at all levels (post-docs, PhD students, masters students, undergrads, and visitors); please fill out this form if you are interested in joining the lab.

Selected Research

I publish my research at premier venues in computer science. Click the Details button to open a project-specific webpage with supporting materials. For a full list of my patents and publications, please download my CV.

EITPose

EITPose: Wearable and Practical Electrical Impedance Tomography for Continuous Hand Pose Estimation

Alexander Kyu, Hongyu Mao, Junyi Zhu, Mayank Goel, and Karan Ahuja. CHI 2024

LemurDx

LemurDx: Using Unconstrained Passive Sensing for an Objective Measurement of Hyperactivity in Children with no Parent Input

Riku Arakawa, Karan Ahuja, Kristie Mak, Gwendolyn Thompson, Sam Shaaban, Oliver Lindhiem, and Mayank Goel. UbiComp 2023

IMUPoser

IMUPoser: Full-Body Pose Estimation using IMUs in Phones, Watches, and Earbuds

Vimal Mollyn, Riku Arakawa, Mayank Goel, Chris Harrison, and Karan Ahuja. CHI 2023

SAMoSA

SAMoSA: Sensing Activities with Motion and Subsampled Audio

Vimal Mollyn, Karan Ahuja, Dhruv Verma, Chris Harrison, and Mayank Goel. UbiComp 2022

RGBDGaze

RGBDGaze: Gaze Tracking on Smartphones with RGB and Depth Data

Riku Arakawa, Mayank Goel, Chris Harrison, and Karan Ahuja. ICMI 2022

ControllerPose

ControllerPose: Inside-Out Body Capture with VR Controller Cameras

Karan Ahuja, Vivian Shen, Cathy Fang, Nathan Riopelle, Andy Kong, and Chris Harrison. CHI 2022

TriboTouch

TriboTouch: Micro-Patterned Surfaces for Low Latency Touchscreens

Craig Shultz, Daewha Kim, Karan Ahuja, and Chris Harrison. CHI 2022

TouchPose

TouchPose: Hand Pose Prediction, Depth Estimation, and Touch Classification from Capacitive Images

Karan Ahuja, Paul Streli, and Christian Holz. UIST 2021

EyeMU

EyeMU Interactions: Gaze + IMU Gestures on Mobile Devices

Andy Kong, Karan Ahuja, Mayank Goel, and Chris Harrison. ICMI 2021

CoolMoves

CoolMoves: User Motion Accentuation in Virtual Reality

Karan Ahuja, Eyal Ofek, Mar Gonzalez-Franco, Christian Holz, and Andrew Wilson. UbiComp 2021

Classroom Digital Twin

Classroom Digital Twins with Instrumentation-Free Gaze Tracking

Karan Ahuja, Deval Shah, Sujeath Pareddy, Franceska Xhakaj, Amy Ogan, Yuvraj Agarwal, and Chris Harrison. CHI 2021

Pose-on-the-Go

Pose-on-the-Go: Approximating Partial User Pose with Smartphone Sensor Fusion and Inverse Kinematics

Karan Ahuja, Sven Mayer, Mayank Goel, and Chris Harrison. CHI 2021

Vid2Doppler

Vid2Doppler: Synthesizing Doppler Radar Data from Videos for Training Privacy-Preserving Activity Recognition

Karan Ahuja, Yue Jiang, Mayank Goel, and Chris Harrison. CHI 2021

Direction-of-Voice

Direction-of-Voice (DoV) Estimation for Intuitive Speech Interaction with Smart Devices Ecosystems

Karan Ahuja, Andy Kong, Mayank Goel, and Chris Harrison. UIST 2020

BodySLAM: Opportunistic User Digitization in Multi-User AR/VR Experiences

Karan Ahuja, Mayank Goel, and Chris Harrison. SUI 2020

MeCap: Whole-Body Digitization for Low-Cost VR/AR Headsets

Karan Ahuja, Robert Xiao, Mayank Goel, and Chris Harrison. UIST 2019

LightAnchors: Appropriating Point Lights for Spatially-Anchored Augmented Reality Interfaces

Karan Ahuja, Sujeath Pareddy, Robert Xiao, Mayank Goel, and Chris Harrison. UIST 2019

EduSense: Practical Classroom Sensing at Scale

Karan Ahuja, Dohyun Kim, Franceska Xhakaj, Virag Varga, Anne Xie, Stanley Zhang, Jay Eric Townsend, Chris Harrison, Amy Ogan, and Yuvraj Agarwal. UbiComp 2019

ScratchThat: Supporting Command-Agnostic Speech Repair in Voice-Driven Assistants

Jason Wu, Karan Ahuja, Richard Li, Victor Chen, and Jeffrey Bigham. UbiComp 2019

GymCam: Detecting, Recognizing, and Tracking Simultaneous Exercises in Unconstrained Scenes

Rushil Khurana, Karan Ahuja, Zac Yu, Jennifer Mankoff, Chris Harrison, and Mayank Goel. UbiComp 2019

Ubicoustics: Plug-and-Play Acoustic Activity Recognition

Gierad Laput, Karan Ahuja, Mayank Goel, and Chris Harrison. UIST 2018

EyeSpyVR: Interactive Eye Sensing Using Off-the-Shelf, Smartphone-Based VR Headsets

Karan Ahuja, Rahul Islam, Varun Parashar, Kuntal Dey, Chris Harrison, and Mayank Goel. UbiComp 2018

OptiDwell: Intelligent Adjustment of Dwell Click Time

Anand Nayyar, Utkarsh Dwivedi, Karan Ahuja, Nitendra Rajput, Seema Nagar, and Kuntal Dey. IUI 2017

Convolutional Neural Networks for Ocular Smartphone-Based Biometrics

Karan Ahuja, Rahul Islam, Ferdous Barbhuiya, and Kuntal Dey. PRL 2017

Eye Center Localization and Detection Using Radial Mapping

Karan Ahuja, Ruchika Banarjee, Seema Nagar, Kuntal Dey, and Ferdous Barbhuiya. ICIP 2016