Aniket Konkar

My research interests span computer vision, robotics, and neuromorphic computing. Currently, I am building a dataset of event-camera recordings of objects responding to sound stimuli, in which audio is deliberately not captured, and exploring methods to reconstruct the underlying acoustic signals from the events alone. I am fortunate to be supervised by Prof. Robert Pless.

I graduated with an MS in Computer Science from The George Washington University. I also spent two years as a research assistant, working on data analysis and statistical modeling under the supervision of Prof. Rob Olsen. In addition, I have two years of professional experience as a software engineer, focused on building reliable, well-engineered systems.

Publications

Simple Transformer with Single Leaky Neuron for Event Vision
Himanshu Kumar, Aniket Konkar
WACV Workshop, 2025

This paper introduces a lightweight Transformer for event-based vision that combines a ResNet feature extractor, a spiking PLIF neuron, and multi-head attention, achieving 98.3% accuracy on DVS Gesture, 99.3% on N-MNIST, and 75.9% on CIFAR10-DVS. Our model outperforms several event-driven and spiking architectures, including transformer-based ones, while matching the accuracy of far more complex models at a fraction of the computational cost: 4× fewer parameters (15.3M vs. 60–66M) and over an order of magnitude fewer synaptic operations (1.82G vs. 9.74–65.28G SOPs).
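
The pipeline described above (event frames → ResNet features → PLIF spiking neuron → multi-head attention → classifier) can be sketched as follows. This is a minimal illustration assuming PyTorch and torchvision; the hand-rolled PLIF-style neuron, layer sizes, backbone choice, and class count are placeholders rather than the paper's exact configuration, and the surrogate gradient needed to train through the spike threshold is omitted for brevity.

```python
# Minimal sketch (PyTorch/torchvision assumed): event frames -> ResNet features ->
# PLIF-style spiking neuron -> multi-head self-attention -> classifier.
# Names and hyperparameters are illustrative, not the paper's configuration.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class PLIFNeuron(nn.Module):
    """Parametric leaky integrate-and-fire neuron with a learnable leak (sketch only;
    no surrogate gradient, so the hard threshold is not trainable as written)."""

    def __init__(self, v_threshold: float = 1.0):
        super().__init__()
        self.w = nn.Parameter(torch.tensor(0.0))  # learnable leak parameter
        self.v_threshold = v_threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (T, B, D) membrane inputs over T time bins
        tau = torch.sigmoid(self.w)              # leak factor in (0, 1)
        v = torch.zeros_like(x[0])
        spikes = []
        for t in range(x.shape[0]):
            v = v + tau * (x[t] - v)             # leaky integration toward the input
            s = (v >= self.v_threshold).float()  # fire when the threshold is crossed
            spikes.append(s)
            v = v * (1.0 - s)                    # hard reset after a spike
        return torch.stack(spikes)               # (T, B, D) binary spike trains


class EventTransformerSketch(nn.Module):
    def __init__(self, num_classes: int = 11, embed_dim: int = 512, num_heads: int = 8):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.conv1 = nn.Conv2d(2, 64, 7, 2, 3, bias=False)  # 2 event polarities
        backbone.fc = nn.Identity()                              # keep 512-d features
        self.backbone = backbone
        self.neuron = PLIFNeuron()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        # events: (T, B, 2, H, W) event frames binned over T time steps
        T, B = events.shape[:2]
        feats = self.backbone(events.flatten(0, 1)).view(T, B, -1)  # (T, B, 512)
        spikes = self.neuron(feats)                                  # (T, B, 512)
        attended, _ = self.attn(spikes, spikes, spikes)              # attention over time
        return self.head(attended.mean(dim=0))                       # (B, num_classes)


if __name__ == "__main__":
    model = EventTransformerSketch()
    logits = model(torch.randn(8, 2, 2, 64, 64))  # 8 time bins, batch of 2
    print(logits.shape)                            # torch.Size([2, 11])
```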

A Review of Transformer-Based and Hybrid Deep Learning Approaches for EEG Analysis
Aniket Konkar, Xiaodong Qu
HCII, 2025

A review paper highlighting how transformer-based and hybrid deep learning models advance EEG decoding across tasks while identifying key trends, gaps, and future research directions.

Miscellaneous

Seeing Motion from Sound with Event Cameras - Exploratory observation using a Prophesee EVK4 event camera