This study session has ended. Thank you for your participation.
The rise of the Internet of Things (IoT) has given birth to transformative and massively deployed computing applications that raise the significant issue of energy sources. It is impractical and irresponsible to rely on wires and batteries to power trillions of devices. One promising prediction is that energy harvesting technologies will serve as alternative power sources for IoT devices. However, this prophecy may go unfulfilled for lack of understanding of how novice developers reason about energy when developing IoT systems. In response, we conducted a mentored physical prototyping study in a two-day workshop involving eight novice developers. The study included qualitative and quantitative analyses of the resulting artifacts, interviews with both the novice developers and an expert, and design implications for future tools. The findings reveal informational gaps that call for educational efforts and assistive features to support novice developers. We present the major findings from the study and their implications for the design of future tools.
Tutorial videos are a popular help source for learning feature-rich software. However, getting quick answers to questions about tutorial videos is difficult. We present an automated approach for responding to tutorial questions. By analyzing 633 questions found in 5,944 video comments, we identified different question types and observed that users frequently described parts of the video in questions. We then asked participants (N=24) to watch tutorial videos and ask questions while annotating the video with relevant visual anchors. Most visual anchors referred to UI elements and the application workspace. Based on these insights, we built AQuA, a pipeline that generates useful answers to questions with visual anchors. We demonstrate this for Fusion 360, showing that we can recognize UI elements in visual anchors and generate answers using GPT-4 augmented with that visual information and software documentation. An evaluation study (N=16) demonstrates that our approach provides better answers than baseline methods.
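To make the pipeline shape concrete, the sketch below shows one way such an answering step could be wired together. It is not the authors' implementation: `detectUiElements` and `lookupDocumentation` are hypothetical stand-ins for the UI-element recognition and documentation retrieval the abstract describes, and only the OpenAI chat-completions call reflects a real API.

```typescript
// Hypothetical sketch of an AQuA-style answering step (not the authors' code):
// recognize UI elements referenced by a visual anchor, retrieve related
// documentation, and ask GPT-4 with that context.
import OpenAI from "openai";

const client = new OpenAI();

// Placeholder for the UI-element recognition the abstract describes.
async function detectUiElements(anchorFrame: string): Promise<string[]> {
  return ["Extrude", "Sketch Palette"]; // assumed output for illustration
}

// Placeholder for retrieval over the software documentation.
async function lookupDocumentation(elements: string[]): Promise<string[]> {
  return elements.map((e) => `Documentation excerpt about the ${e} command ...`);
}

export async function answerQuestion(question: string, anchorFrame: string): Promise<string> {
  const elements = await detectUiElements(anchorFrame);
  const docs = await lookupDocumentation(elements);

  const response = await client.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content:
          "You answer questions about a Fusion 360 tutorial video. " +
          "Ground your answer in the recognized UI elements and documentation excerpts.",
      },
      {
        role: "user",
        content: [
          `Question: ${question}`,
          `UI elements in the visual anchor: ${elements.join(", ")}`,
          `Documentation excerpts:\n${docs.join("\n---\n")}`,
        ].join("\n"),
      },
    ],
  });

  return response.choices[0].message.content ?? "";
}
```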
Modern software engineering is in a state of flux. With more development utilizing AI code-generation tools and a continued reliance on online programming resources, understanding code and the original intent behind it is becoming more important than ever. To this end, we have developed the "Meta-Manager", a Visual Studio Code extension with a supplementary browser extension that automatically collects and organizes changes made to code while keeping track of the provenance of each part of the code, including code that has been AI-generated or copy-pasted from popular online programming resources. These sources and subsequent changes are represented in the editor and can be explored using searching and filtering mechanisms, helping developers answer historically hard-to-answer questions about code, its provenance, and its design rationale. In our evaluation of the Meta-Manager, we found that developers were able to use it successfully to answer otherwise unanswerable questions about an unfamiliar code base.
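As a rough illustration of the kind of provenance tracking described above (not the Meta-Manager's actual implementation), the sketch below uses the standard VS Code extension API to watch document edits and flag insertions that match the system clipboard as likely paste events; a real tool would track far more, including AI-generated code.

```typescript
// Hypothetical sketch of provenance tracking in a VS Code extension (not the
// Meta-Manager's code): record each insertion and mark it as a probable paste
// when the inserted text matches the current clipboard contents.
import * as vscode from "vscode";

interface ProvenanceRecord {
  file: string;
  insertedText: string;
  source: "typed" | "pasted"; // a real tool would also distinguish AI-generated code
  timestamp: number;
}

const records: ProvenanceRecord[] = [];

export function activate(context: vscode.ExtensionContext) {
  const listener = vscode.workspace.onDidChangeTextDocument(async (event) => {
    const clipboard = await vscode.env.clipboard.readText();

    for (const change of event.contentChanges) {
      if (change.text.length === 0) {
        continue; // pure deletion, nothing inserted
      }
      records.push({
        file: event.document.uri.toString(),
        insertedText: change.text,
        source: change.text === clipboard ? "pasted" : "typed",
        timestamp: Date.now(),
      });
    }
  });

  context.subscriptions.push(listener);
}
```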
This paper investigates using micro Parsons problems as a novel practice approach for learning Structured Query Language (SQL), a standard language for working with relational databases. In micro Parsons problems, learners arrange predefined code fragments to form a SQL statement instead of typing the code.
Targeting beginner-level SQL statements, we evaluated the efficacy of micro Parsons problems with block-based feedback and execution-based feedback compared to traditional text-entry problems. To delve into learners' experiences and preferences across the three problem types, we conducted a within-subjects think-aloud study with 12 participants. We found that learners' preferences varied widely, shaped by factors including perceived learning, task authenticity, and prior knowledge. Next, we conducted two between-subjects classroom studies to evaluate the effectiveness of micro Parsons problems with different feedback types versus text-entry problems for SQL practice. We found that learners who practiced by solving Parsons problems with block-based feedback had a significantly higher learning gain than those who practiced with traditional text-entry problems.
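To make the format concrete, the sketch below shows what a micro Parsons problem for a beginner-level SQL statement might look like, together with a naive checker; the fragments, table, and expected statement are invented for illustration and are not items from the study.

```typescript
// Illustrative micro Parsons problem (not an item from the study): the learner
// arranges predefined fragments into a SQL statement instead of typing it, and a
// naive checker compares the assembled statement to the expected one.
const fragments = ["SELECT name", "FROM students", "WHERE grade > 90", "ORDER BY name;"];

const expected = "SELECT name FROM students WHERE grade > 90 ORDER BY name;";

function assemble(order: number[]): string {
  return order.map((i) => fragments[i]).join(" ");
}

// Block-based feedback might report which fragments are out of place;
// execution-based feedback would instead run the statement against a database.
function isCorrect(order: number[]): boolean {
  return assemble(order) === expected;
}

console.log(isCorrect([0, 1, 2, 3])); // true
console.log(isCorrect([1, 0, 2, 3])); // false
```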
Documentation in codebases facilitates knowledge transfer. But tools for programming are largely text-based, and so developers resort to creating ASCII diagrams---graphical artifacts approximated with text---to show visual ideas within their code. Despite real-world use, little is known about these diagrams. We interviewed nine authors of ASCII diagrams, learning why they use ASCII and what roles the diagrams play. We also compile and analyze a corpus of 507 ASCII diagrams from four open source projects, deriving a design space with seven dimensions that classify what these diagrams show, how they show it, and ways they connect to code. These investigations reveal that ASCII diagrams are professional artifacts used across many steps in the development lifecycle, diverse in role and content, and used because they visualize ideas within the variety of programming tools in use. Our findings highlight the importance of visualization within code and lay a foundation for future programming tools that tightly couple text and graphics.
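For readers unfamiliar with the artifact, the snippet below sketches the kind of ASCII diagram the paper studies: a small data-flow picture kept directly in a source comment. The diagram and the accompanying module are invented for illustration and are not drawn from the paper's corpus.

```typescript
// Invented example of an ASCII diagram embedded in code, in the spirit of the
// diagrams studied in the paper (not taken from their corpus):
//
//   +-------+      +--------+      +---------+
//   | lexer | ---> | parser | ---> | emitter |
//   +-------+      +--------+      +---------+
//
// The diagram documents how text flows through the (hypothetical) pipeline below.
export function compilePipeline(source: string): string {
  const tokens = source.split(/\s+/);      // lexer: split into tokens
  const nonEmpty = tokens.filter(Boolean); // parser: stand-in filtering step
  return nonEmpty.join(" ");               // emitter: stand-in output step
}
```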