
Success at CHI 2021

On top of being heavily involved in the publication process as Data & PCS Co-Chair, I am happy to announce that I also got three papers accepted.

Full paper: Super-Resolution Capacitive Touchscreens

Capacitive touchscreens are near-ubiquitous in today’s touch-driven devices, such as smartphones and tablets. By using rows and columns of electrodes, specialized touch controllers are able to capture a 2D image of capacitance at the surface of a screen. For over a decade, capacitive “pixels” have been around 4 millimeters in size – a surprisingly low resolution that precludes a wide range of interesting applications. In this paper, we show how super-resolution techniques, long used in fields such as biology and astronomy, can be applied to capacitive touchscreen data. By integrating data from many frames, our software-only process is able to resolve geometric details finer than the original sensor resolution. This opens the door to passive tangibles with higher-density fiducials and also recognition of everyday metal objects, such as keys and coins. We built several applications to illustrate the potential of our approach and report the findings of a multipart evaluation.

Sven Mayer, Xiangyu Xu, Chris Harrison: Super-Resolution Capacitive Touchscreens. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, New York, USA, 2021.
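For those curious how “integrating data from many frames” can beat the sensor’s native resolution, below is a minimal shift-and-add super-resolution sketch in NumPy. This is not our actual pipeline from the paper: it assumes the per-frame sub-pixel offsets are already known (e.g., recovered by image registration as an object slides across the screen) and uses an illustrative 4x upscale.

import numpy as np

def shift_and_add(frames, offsets, scale=4):
    # frames:  list of (H, W) low-resolution capacitance images
    # offsets: per-frame (dy, dx) sub-pixel shifts, in low-res pixels
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros((h * scale, w * scale))
    for frame, (dy, dx) in zip(frames, offsets):
        # Map every low-res sample to its position on the fine grid.
        yi = np.clip(np.round((np.arange(h)[:, None] + dy) * scale).astype(int), 0, h * scale - 1)
        xi = np.clip(np.round((np.arange(w)[None, :] + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (yi, xi), frame)  # accumulate capacitance samples
        np.add.at(hits, (yi, xi), 1.0)   # count how many landed per cell
    return acc / np.maximum(hits, 1)     # average; unsampled cells stay 0

Given enough frames at diverse offsets, the fine grid fills in and geometric detail below the native ~4 mm electrode pitch becomes resolvable.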

Full paper: Pose-on-the-Go: Approximating User Pose with Smartphone Sensor Fusion and Inverse Kinematics

We present Pose-on-the-Go, a full-body pose estimation system that uses sensors already found in today’s smartphones. This stands in contrast to prior systems, which require worn or external sensors. We achieve this result via extensive sensor fusion, leveraging a phone’s front and rear cameras, the user-facing depth camera, touchscreen, and IMU. Even still, we are missing data about a user’s body (e.g., angle of the elbow joint), and so we use inverse kinematics to estimate and animate probable body poses. We provide a detailed evaluation of our system, benchmarking it against a professional-grade Vicon tracking system. We conclude with a series of demonstration applications that underscore the unique potential of our approach, which could be enabled on many modern smartphones with a simple software update.

Karan Ahuja, Sven Mayer, Mayank Goel, Chris Harrison: Pose-on-the-Go: Approximating User Pose with Smartphone Sensor Fusion and Inverse Kinematics. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, New York, USA, 2021.
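As a taste of the inverse-kinematics step, consider a toy two-link arm: a phone can plausibly observe where the shoulder and the hand are, but not the elbow. The law of cosines then yields a consistent elbow angle. This is only a sketch of the idea; the segment lengths here are assumptions, and the actual system solves a full-body rig from fused sensor data.

import math

def elbow_angle(shoulder, hand, upper_arm=0.30, forearm=0.27):
    # Interior elbow angle (radians) that places the hand at the
    # observed distance from the shoulder; lengths (meters) are
    # illustrative, not values from the paper.
    d = math.dist(shoulder, hand)
    # Clamp to the reachable range so acos stays defined.
    d = max(abs(upper_arm - forearm), min(d, upper_arm + forearm))
    # Law of cosines: d^2 = a^2 + b^2 - 2ab cos(elbow)
    cos_e = (upper_arm**2 + forearm**2 - d**2) / (2 * upper_arm * forearm)
    return math.acos(max(-1.0, min(1.0, cos_e)))

print(math.degrees(elbow_angle((0.0, 0.0), (0.45, 0.10))))  # ~108 degrees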

Full paper: Vibrosight++: City-Scale Sensing Using Existing Retroreflective Signs and Markers 

Today’s smart cities use thousands of physical sensors distributed across the urban landscape to support decision making in areas such as infrastructure monitoring, public health, and resource management. These weather-hardened devices require power and connectivity, and often cost thousands of dollars just to install, let alone maintain. In this paper, we show how long-range laser vibrometry can be used for low-cost, city-scale sensing. Although laser vibrometry is typically limited to just a few meters of sensing range, retroreflective markers can boost this to 1 km or more. Fortuitously, cities already make extensive use of retroreflective materials for street signs, construction barriers, road studs, license plates, and many other markings. We describe how our prototype system can co-opt these existing markers at very long ranges and use them as unpowered accelerometers for use in a wide variety of sensing applications.

Yang Zhang, Sven Mayer, Jesse T. Gonzalez, Chris Harrison: Vibrosight++: City-Scale Sensing Using Existing Retroreflective Signs and Markers. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, New York, USA, 2021.
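To give intuition for the sensing principle: the laser returns from a retroreflector with its intensity modulated by the surface’s vibration, and the interesting structure lives in the frequency domain. The sketch below shows only that analysis step; the sample rate and the 60 Hz example signal are assumptions for illustration, not our deployed parameters.

import numpy as np

def vibration_spectrum(signal, sample_rate=4000):
    # Return (freqs, magnitudes) for one window of the reflected signal.
    window = signal - np.mean(signal)          # remove the DC offset
    window = window * np.hanning(len(window))  # taper to reduce leakage
    mags = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate)
    return freqs, mags

# E.g., a 60 Hz hum from machinery shaking a nearby street sign:
t = np.arange(0, 1, 1 / 4000)
sig = 0.1 * np.sin(2 * np.pi * 60 * t) + 0.01 * np.random.randn(len(t))
freqs, mags = vibration_spectrum(sig)
print(freqs[np.argmax(mags)])  # ~60.0

Different vibration sources leave distinct spectral signatures, which is what a classifier can pick up across many co-opted markers.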

Course: Introduction to Intelligent User Interfaces

Together with Albrecht Schmidt and Daniel Buschek, I am also running a 5-hour course as an introduction to IUI. All course materials are available here under the CC-SA 4.0 license.

Recent advancements in artificial intelligence (AI) create new opportunities for implementing a wide range of intelligent user interfaces. Speech-based interfaces, chatbots, visual recognition of users and objects, recommender systems, and adaptive user interfaces are examples that have matured over the last 10 years thanks to new approaches in machine learning (ML). Modern ML techniques outperform previous approaches in many domains and enable new applications. Today, it is possible to run models efficiently on various devices, including PCs, smartphones, and embedded systems. Leveraging the potential of AI and combining it with human-computer interaction approaches allows us to develop intelligent user interfaces that support users better than ever before. This course introduces participants to terms and concepts relevant to AI and ML. Using examples and application scenarios, we show in practical terms how intelligent user interfaces can be designed and implemented. In particular, we look at how to create optimized keyboards, how to use natural language processing for text- and speech-based interaction, and how to implement a recommender system for movies. The course thus aims to introduce participants to a set of machine learning tools that will enable them to build their own intelligent user interfaces. It includes video-based lectures introducing concepts and algorithms, supported by practical and interactive exercises using Python notebooks.

Albrecht Schmidt, Sven Mayer, Daniel Buschek: Introduction to Intelligent User Interfaces. In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, New York, USA, 2021.
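To give a flavor of the recommender exercise, here is roughly what item-based collaborative filtering boils down to. The tiny rating matrix and movie titles below are made up for illustration; the course notebooks use their own datasets and develop this step by step.

import numpy as np

movies = ["Alien", "Up", "Heat", "Coco"]
# Rows are users, columns are movies; 0 means "not yet rated".
R = np.array([[5, 0, 4, 0],
              [4, 1, 5, 1],
              [1, 5, 0, 4],
              [0, 4, 1, 5]], dtype=float)

# Cosine similarity between the movies' rating columns.
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)

def recommend(user):
    # Score each movie by a similarity-weighted average of the
    # user's known ratings, then suggest the unrated ones.
    rated = R[user] > 0
    scores = sim[:, rated] @ R[user, rated] / sim[:, rated].sum(axis=1)
    return [movies[i] for i in np.argsort(-scores) if not rated[i]]

print(recommend(0))  # suggests the two movies user 0 has not rated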

