This study session has ended. Thank you for participating.
To support human decision making with machine learning models, we often need to elucidate patterns embedded in the models that are unsalient, unknown, or counterintuitive to humans. While existing approaches focus on explaining machine predictions with real-time assistance, we explore model-driven tutorials to help humans understand these patterns in a training phase. We consider both tutorials with guidelines from scientific papers, analogous to current practices of science communication, and automatically selected examples from training data with explanations. We use deceptive review detection as a testbed and conduct large-scale, randomized human-subject experiments to examine the effectiveness of such tutorials. We find that tutorials indeed improve human performance, with and without real-time assistance. In particular, although deep learning provides predictive performance superior to simple models, tutorials and explanations from simple models are more useful to humans. Our work suggests future directions for human-centered tutorials and explanations towards a synergy between humans and AI.
Textual comments from peers offering informational and emotional support are beneficial to members of online mental health communities (OMHCs). In practice, however, many comments are not of high quality. Writing support technologies that assess (AS) the text or recommend (RE) writing examples on the fly could potentially help support providers improve the quality of their comments. However, how providers perceive and work with such technologies is under-investigated. In this paper, we present a technological prototype, MepsBot, which offers providers in-situ writing assistance in either AS or RE mode. Results of a mixed-design study with 30 participants show that both modes of MepsBot improve users' confidence in and satisfaction with their comments. The AS-mode MepsBot encourages users to refine expressions and is deemed easier to use, while the RE-mode one stimulates more revisions of support-related content. We report users' concerns about MepsBot and propose design considerations for writing support technologies in OMHCs.
The modern workplace is more demanding than ever before. Yet, since the industrial age, productivity measures have remained narrowly focused on work output, and have not accounted for the major shift in the cognitive demands placed on workers or the interleaving of work and life that is so common today. We posit that a more holistic conceptualization of Time Well Spent (TWS) at work could mitigate this issue. In our one-week study, 40 knowledge workers used the experience sampling method (ESM) to rate their TWS and then defined TWS at the end of the week. Our work contributes a preliminary characterization of TWS and empirical evidence that this term can capture a more holistic notion of work that also includes the worker's feelings and well-being.
Software developers, like other information workers, continuously switch tasks and applications to complete their work on the computer. Given the high fragmentation and complexity of their work, staying focused on the relevant pieces of information can become quite challenging in today's window-based environments, especially with ever-increasing monitor screen sizes. To support developers in staying focused, we conducted a formative study with 18 professionals in which we examined their computer-based and eye-gaze interaction with the window environment and devised a relevance model of open windows. Based on the results, we developed a prototype to dim irrelevant windows and reduce distractions, and evaluated it in a user study. Our results indicate that our model predicted relevant open windows with high accuracy, and participants felt that integrating visual prominence into the desktop environment reduces clutter and distraction, resulting in less window switching and increased focus.
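To make the idea of a window-relevance model concrete, here is a hypothetical sketch of one way to score open windows from recency and interaction frequency alone; the study's actual model also incorporated eye-gaze data, which is omitted here, and the scoring function and half-life are assumptions for illustration:

```python
# Hypothetical relevance score for an open window: exponential recency decay
# (5-minute half-life) scaled by how often the user has switched to it.
import time

def relevance_score(last_focus_ts, switch_count, now=None, half_life=300.0):
    now = time.time() if now is None else now
    recency = 0.5 ** ((now - last_focus_ts) / half_life)
    return recency * (1 + switch_count)

now = 1_000_000.0
windows = {
    "editor":  relevance_score(now - 30,   12, now=now),  # just used, used often
    "browser": relevance_score(now - 600,   5, now=now),
    "email":   relevance_score(now - 3600,  2, now=now),  # stale: dim candidate
}
ranked = sorted(windows, key=windows.get, reverse=True)
print(ranked)  # → ['editor', 'browser', 'email']
```

A prototype like the one described could then dim every window below some score threshold, leaving only the top-ranked windows visually prominent.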
Information workers perform jobs that demand constant multitasking, leading to context switches, productivity loss, stress, and unhappiness. Systems that can mediate task transitions and breaks have the potential to keep people both productive and happy. We explore a crucial initial step toward this goal: finding opportune moments to recommend transitions and breaks without disrupting people during focused states. Using affect, workstation activity, and task data from a three-week field study (N=25), we build models to predict whether a person should continue their task, transition to a new task, or take a break. The R-squared values of our models are as high as 0.7, with errors in only 15% of cases. We ask users to evaluate the timing of recommendations provided by a recommender that relies on these models. Our study shows that users find our transition and break recommendations to be well-timed, rating them as 86% and 77% accurate, respectively. We conclude with a discussion of the implications for intelligent systems that seek to guide task transitions and manage interruptions at work.
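As a minimal sketch of the kind of predictor such a recommender could rest on (the feature set, data, and model choice here are illustrative assumptions, not the study's actual variables or method), one could map simple affect and activity features to a continue / transition / break label:

```python
# Hypothetical sketch: classify the recommended action from toy features
# [valence (-1..1), minutes on current task, keystrokes per minute].
from sklearn.tree import DecisionTreeClassifier

X = [
    [0.8,  10, 60],   # engaged and fresh on a task
    [0.6,  25, 55],
    [-0.2, 90, 10],   # fatigued after a long stretch
    [-0.5, 120, 5],
    [0.1,  45, 20],   # task winding down
    [0.0,  50, 15],
]
y = ["continue", "continue", "break", "break", "transition", "transition"]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Query the model at a candidate interruption moment; a "continue" prediction
# would mean: don't interrupt the person right now.
print(model.predict([[0.7, 15, 50]]))
```

A real system would replace the toy features with sensed affect, workstation activity, and task data, and would only surface a recommendation when the model's answer is not "continue".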