Deep Learning Super-Resolution Network Facilitating Fiducial Tangibles on Capacitive Touchscreens

Abstract

In recent years, we have seen many approaches that use tangibles to address the limited expressiveness of touchscreens. Mainstream tangible detection relies on fiducial markers embedded in the tangibles. However, the coarse sensor resolution of capacitive touchscreens makes such tangibles bulky, limiting their usefulness. We propose a novel deep-learning super-resolution network to better facilitate fiducial tangibles on capacitive touchscreens. In detail, our network super-resolves the markers, enabling off-the-shelf detection algorithms to track tangibles reliably. Our network generalizes to unseen marker sets, such as AprilTag, ArUco, and ARToolKit. Therefore, we are not limited to a fixed number of distinguishable objects and do not require data collection and network training for new fiducial markers. With an extensive evaluation including real-world users and five showcases, we demonstrate the applicability of our open-source approach on commodity mobile devices and further highlight the potential of tangibles on capacitive touchscreens.
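The pipeline described in the abstract has two stages: super-resolve the coarse capacitive image, then hand the result to an unmodified fiducial detector. The following is a minimal illustrative sketch, not the authors' architecture: an ESPCN-style super-resolution model in PyTorch, where the layer widths, the 8x upscale factor, and the 32x18 capacitive grid size are all assumptions made for the example.

# Minimal sketch (illustrative only, not the paper's network): upscale a
# coarse capacitive frame so a standard fiducial detector can be applied.
import torch
import torch.nn as nn

class CapacitiveSR(nn.Module):
    def __init__(self, upscale: int = 8):  # 8x upscale is an assumed factor
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, upscale * upscale, 3, padding=1),
            nn.PixelShuffle(upscale),  # rearranges channels into spatial detail
        )

    def forward(self, x):  # x: (N, 1, H, W) capacitive frame
        return torch.sigmoid(self.body(x))

# Example: an assumed 32x18 capacitive grid becomes a 256x144 image.
model = CapacitiveSR(upscale=8)
frame = torch.rand(1, 1, 18, 32)  # placeholder capacitive measurement
highres = model(frame)
print(highres.shape)              # torch.Size([1, 1, 144, 256])

The super-resolved frame could then be binarized and passed to an off-the-shelf detector such as OpenCV's ArUco module or an AprilTag library, which is the role the abstract assigns to existing detection algorithms.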

Authors
Marius Rusu, LMU Munich, Munich, Germany
Sven Mayer, LMU Munich, Munich, Germany
Paper URL

https://doi.org/10.1145/3544548.3580987

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Discovery Track Monday
Hall A
5 presentations
2023-04-24, 20:10 – 21:35