The Family Educational Rights and Privacy Act (FERPA) is intended to protect student privacy, but it has not adapted well to current technology. We consider a special class of student data: directory information. Unlike other FERPA-controlled data, directory information (e.g., student names, contact information, university affiliation) can be shared publicly online or by request without explicit permission. To understand this policy's impact, we investigated the directory information sharing practices of 100 top-ranked US universities, finding that they publish student contact information online and provide PII offline by request to many parties, including data brokers. Universities provide limited opt-out choices and focus on negative effects when advising students about opting out. Lastly, we evaluate student preferences regarding the identified directory practices through a survey of 991 US university students. Based on these results, we provide recommendations to align directory practices with student privacy preferences.
https://doi.org/10.1145/3613904.3642066
This paper proposes the Out-of-Device Privacy Scale (ODPS), a reliable, validated psychometric privacy scale that measures the importance users attach to out-of-device privacy. In contrast to existing scales, ODPS is designed to capture the importance individuals attribute to protecting personal information from out-of-device threats in the physical world, which is essential when designing privacy protection mechanisms. We iteratively developed and refined ODPS in three high-level steps: item development, scale development, and scale validation, with a total of N=1378 participants. Our methodology included ensuring content validity by following various approaches to generate items. We collected insights from experts and target audiences to understand response variability. Next, we explored the underlying factor structure using multiple methods and performed dimensionality, reliability, and validity tests to finalise the scale. We discuss how ODPS can support future work predicting user behaviours and designing protection methods to mitigate privacy risks.
https://doi.org/10.1145/3613904.3642623
The advent of telehealth revolutionizes healthcare by enabling remote consultations, yet it poses complex security and privacy challenges. These are often acutely felt by lower-resourced, allied healthcare practices. To address this, our study focuses on audiologists and speech-language pathologists (SLPs) in private practice settings, which are often characterized by limited information technology resources. Over the course of six months, we conducted semi-structured interviews with ten audiologists and ten SLPs to understand their telehealth experiences and concerns. Key findings reveal a diversity of opinions on technology trustworthiness, data security concerns, implemented security protocols, and patient behaviors. Given the nature of their primary work, participants expressed varied concerns about data breaches and platform vulnerabilities, yet trusted third-party services like Zoom because they lacked the expertise and time to evaluate security protocols. This work underscores the imperative of bridging the technology-healthcare gap to foster secure, patient/provider-centered telehealth as the prevailing practice. It also emphasizes the need to synergize security, privacy, and usability to securely deliver care through telehealth.
https://doi.org/10.1145/3613904.3642208
Predicting users’ privacy concerns is challenging due to privacy’s subjective and complex nature. Previous research demonstrated that generic attitudes, such as those captured by Westin’s Privacy Segmentation Index, are inadequate predictors of context-specific attitudes. We introduce ContextLabel, a method enabling practitioners to capture users’ privacy profiles across domains and predict their privacy concerns towards unseen data practices. ContextLabel’s key innovations are (1) using non-mutually exclusive labels to capture more nuances of data practices, and (2) capturing users’ privacy profiles by asking them to express privacy concerns towards a small number of data practices. To explore the feasibility of ContextLabel, we asked 38 participants to express their thoughts in free text towards 13 distinct data practices across five days. Our mixed-methods analysis shows that a preliminary version of ContextLabel can predict users’ privacy concerns towards unseen data practices with an accuracy (73%) surpassing the Privacy Segmentation Index (56%) and methods using categorical factors (59%).
https://doi.org/10.1145/3613904.3642500