This study session has ended. Thank you for your participation.
Conversational search systems powered by large language models (LLMs) are already used by hundreds of millions of people and are believed to offer many benefits over conventional search. However, while decades of research and public discourse have interrogated the risk that search systems increase selective exposure and create echo chambers (limiting exposure to diverse opinions and leading to opinion polarization), little is known about whether LLM-powered conversational search carries the same risk. We conduct two experiments to investigate: 1) whether and how LLM-powered conversational search increases selective exposure compared to conventional search; 2) whether and how LLMs with opinion biases that either reinforce or challenge the user's view change this effect. Overall, we found that participants engaged in more biased information querying with LLM-powered conversational search, and that an opinionated LLM reinforcing their views exacerbated this bias. These results have critical implications for the development of LLMs and conversational search systems, and for the policies governing these technologies.
Videos accompanied by documents---\textit{document-based videos}---enable presenters to share content beyond the video itself and allow the audience to use the documents for detailed content comprehension.
However, concurrently exploring multiple channels of information can be taxing.
We propose SwapVid, a novel interface for viewing and exploring document-based videos.
SwapVid seamlessly integrates a video and a document into a single view and lets the content behave as both a video and a document; it adaptively switches a document-based video to act as a video or a document upon direct manipulation (\textit{e.g.,} scrolling the document, manipulating the video timeline).
We conducted a user study with twenty participants, comparing SwapVid to side-by-side video/document views.
Results showed that our interface reduces time and physical workload when exploring slide-based documents with video referencing.
Based on the study findings, we extended SwapVid with additional functionalities and demonstrated how they further extend its practical capabilities.
Statistical models should accurately reflect analysts' domain knowledge about variables and their relationships. While recent tools let analysts express these assumptions and use them to derive a statistical model, it remains unclear what analysts want to express and how externalization impacts statistical model quality. This paper addresses these gaps. We first conduct an exploratory study of analysts using a domain-specific language (DSL) to express conceptual models. We observe a preference for detailing how variables relate and a desire to allow, and then later resolve, ambiguity in their conceptual models. We leverage these findings to develop rTisane, a DSL for expressing conceptual models augmented with an interactive disambiguation process. In a controlled evaluation, we find that analysts reconsidered their assumptions, self-reported externalizing their assumptions accurately, and maintained analysis intent with rTisane. Additionally, rTisane enabled some analysts to author statistical models they were unable to specify manually. For others, rTisane produced models that better fit the data or enabled iterative improvement.
Recent studies have shown that users of visual analytics tools can have difficulty distinguishing robust findings in the data from statistical noise, but the true extent of this problem likely depends on both the incentive structure motivating their decisions and the ways that uncertainty and variability are (or are not) represented in visualisations. In this work, we perform a crowd-sourced study measuring decision-making quality in visual analytics, testing both an explicit structure of incentives designed to reward cautious decision-making and a variety of designs for communicating uncertainty. We find that, while participants are unable to control for false discoveries as well as idealised statistical procedures such as Benjamini-Hochberg, certain forms of uncertainty visualisation can improve the quality of participants' decisions and lead to fewer false discoveries than not correcting for multiple comparisons. We conclude with a call for researchers to further explore visual analytics decision quality under different decision-making contexts, and for designers to directly present uncertainty and reliability information to users of visual analytics tools. The supplementary materials are available at: https://osf.io/xtsfz/.
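For readers unfamiliar with the Benjamini-Hochberg baseline mentioned in the abstract above, the procedure can be sketched in a few lines. This is an illustrative Python sketch of the standard algorithm, not code from the paper or its supplementary materials; the function name and interface are our own.

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg procedure: return a boolean mask of rejected
    hypotheses while controlling the false discovery rate at level q."""
    m = len(p_values)
    # Sort p-values in ascending order, remembering original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k such that p_(k) <= (k / m) * q.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= (rank / m) * q:
            k_max = rank
    # Reject the k_max smallest p-values.
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            rejected[i] = True
    return rejected


# Example: three small p-values are rejected; the large one is not.
print(benjamini_hochberg([0.01, 0.5, 0.02, 0.03], q=0.05))
# → [True, False, True, True]
```

Note the step-up character of the procedure: every p-value at or below the crossing rank is rejected, even if an individual p-value exceeds its own per-rank threshold.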
Numerical analogies (or "perspectives") that translate unfamiliar measurements into comparisons with familiar reference objects (e.g., "275,000 square miles is roughly as large as Texas") have been shown to aid readers' recall, estimation, and error detection for numbers. However, because familiar reference objects are culture-specific, analogies do not always generalize across audiences. Crowdsourcing perspectives has proven effective but is limited by scalability issues and a lack of crowdworking markets in many regions. In this research, we develop an automated technique for generating localized perspectives. We utilize several open data sources for relevance signals and develop a surprisingly simple model capable of localizing analogies to new audiences without any retraining from human judges. We validate the model by testing it in both a new domain and with a different linguistic audience residing in another country. We release the compiled dataset of 400,000 reference objects to the research community.