Taimoor Tariq

Scientist and engineer who can usually be found dwelling at the intersection of human vision science and computer graphics. I currently work full-time on Camera Algorithms at Apple. I did my PhD under the mentorship of Piotr Didyk, working towards realizing the dream of real-time AR/VR that is visually indistinguishable from the real world. I have also worked with the Applied Perception Science and Display Systems Research teams at Meta on real-time, perceptually optimized computational display algorithms for AR/VR. Before all this, as a young-ish Master's student, I was a research fellow at KAIST, investigating how image/video enhancement neural networks understand visual quality, and how we can teach them to perceive it the same way humans do.

Email  /  CV  /  Google Scholar  /  Photography 📷


Research

My primary interests lie at the intersection of vision science and real-time computer graphics. More specifically, I work on understanding, quantifying, and maximizing perceptual realism and quality (with constituents such as spatial quality, dynamic range, depth, motion, and color) for real-time image/video capture (computational photography), synthesis (rendering/graphics), and display (computational display). The long-term goals I aim to push towards are:

  • A comprehensive understanding of how the human visual system understands visual realism and aesthetic quality.
  • Real-time immersive displays (VR/AR) that are perceptually indistinguishable from the real-world.
  • Real-time cameras that can not only capture the world exactly as our eyes see it, but also subjectively understand and optimize for the perceived aesthetic attributes of the captured scenes.


Recent News
  • [Oct-2024] I have joined the Camera Algorithms Team at Apple full-time.
  • [Mar-2024] Our work on preserving motion perception in AR/VR to be presented at SIGGRAPH 2024.
  • [Feb-2024] Gave an invited talk at UCL on my work on Perceptual Optimization of Realism for real-time AR/VR. Thank you Kaan Aksit for the invitation.
  • [Aug-2023] Our work on ultra-fast perceptually adaptive tone mapping on VR headsets to be presented at SIGGRAPH Asia 2023.
  • [Oct-2022] I have joined the Applied Perception Science team at Facebook Reality Labs (Sunnyvale, CA) as a Research Scientist Intern.
  • [Apr-2022] Our work on perceptual enhancement for real-time AR/VR to be presented at SIGGRAPH 2022.

Publications

Representative projects are highlighted.

Towards Motion Metamers for Foveated Rendering
Taimoor Tariq, Piotr Didyk

We demonstrate that foveated rendering may inhibit motion perception, making AR/VR content appear slower than it physically is. We propose the theory of motion metamers of human vision: videos that are structurally different from one another but indistinguishable to human peripheral vision in both spatial and motion perception. We present the first technique to synthesize motion metamers for AR/VR, all in real time and completely unsupervised (no high-quality reference required).

SIGGRAPH 2024 [journal]

Perceptually Adaptive Real-Time Tone Mapping
Taimoor Tariq, Nathan Matsuda, Eric Penner, Jerry Jia, Douglas Lanman, Ajit Ninan, Alexandre Chapiro

An ultra-fast (under 1 ms per frame on standalone VR) framework that adaptively maintains the perceptual appearance of HDR content after tone mapping. The framework relates human contrast perception across very different luminance scales, and then optimizes any tone-mapping curve to minimize perceptual difference.

SIGGRAPH Asia 2023

Noise-based Enhancement for Foveated Rendering
Taimoor Tariq, Cara Tursun, Piotr Didyk

The fastest (200 FPS at 4K) and, to our knowledge, the first no-reference technique for synthesizing spatial metamers of human peripheral vision, specifically tailored for direct integration into the real-time VR foveated rendering pipeline. It saves up to 40% rendering time over traditional foveated rendering, without visible loss in quality.

SIGGRAPH 2022 [journal]

Why Are Deep Representations Good Perceptual Quality Features?
Taimoor Tariq, Okan Tarhan Tursun, Munchurl Kim, Piotr Didyk

An investigation into why the representations learned by image recognition CNNs work remarkably well as features of perceptual quality (e.g., perceptual loss). We theorize that image classification representations learn to be spectrally sensitive to the same spatial frequencies to which the human visual system is most sensitive, so they can effectively encode perceptually visible distortions.

ECCV 2020

A HVS-Inspired Attention to Improve Loss Metrics for CNN-Based Perception-Oriented Super-Resolution
Taimoor Tariq, Juan Luis Gonzalez Bello, Munchurl Kim

A human contrast perception inspired spatial attention mask that makes the deep learning pipeline aware of perceptually important visual information in images.

ICCV Workshops 2019

Computationally efficient fully-automatic online neural spike detection and sorting in presence of multi-unit activity for implantable circuits
Taimoor Tariq, Muhammad Hashim Satti, Hamid Mehmood Kamboh, Maryam Saeed, Awais Mehmood Kamboh

A signal processing pipeline for unsupervised sorting of brain signals on implantable neural chips, primarily for neuro-prosthetics.

Computer Methods and Programs in Biomedicine, 2019

Low SNR neural spike detection using scaled energy operators for implantable brain circuits
Taimoor Tariq, Muhammad Hashim Satti, Maryam Saeed, Awais Mehmood Kamboh

A new non-linear signal processing filter for detecting noisy brain action potentials.

IEEE Engineering in Medicine and Biology Conference (EMBC), 2017