1. Can’t Keep Them Away: The Failures of Anti-Stalking Protocols in Personal Item Tracking Devices
    Kieron Ivy Turk, Alice Hutchings, Alastair R. Beresford
    University of Cambridge
    Abstract Personal item tracking devices are popular for locating lost items such as keys, wallets, and suitcases. These devices are now being abused by stalkers and domestic abusers to track their victims' locations over time, and some device manufacturers have created 'anti-stalking features' in response. We analyse the effectiveness of these anti-stalking features across five brands of tracking devices through a gamified naturalistic quasi-experiment run in collaboration with the Assassins' Guild student society. Despite participants knowing they might be tracked, and being incentivised to detect and remove the tracker, the anti-stalking features were rarely used. We then explore common implementations of these anti-stalking features and analyse their limitations directly. We identify several failures of each feature that prevent it from performing its intended purpose even when it is in use. Together, these failures point to a need to greatly improve the availability and effectiveness of anti-stalking features to prevent trackers from being abused.


2. Financial Abuse in Intimate Partnerships: Enabling Platform Resilience and Prevention
    Arkaprabha Bhattacharya, Rosanna Bellini, Kevin Lee, Vineeth Ravi, Jessica Staddon
    JPMorgan Chase, Cornell University
    Abstract Research has highlighted the malicious role that digital technologies can play in exacerbating existing forms of harm such as non-consensual surveillance, doxxing, and harassment in intimate partner violence (IPV) contexts. Although the growth of online and mobile consumer banking has improved convenience, protection against financial harms such as fraud and abuse remains challenging. The computer security community has been developing new techniques to detect and mitigate harms, yet further understanding of intimate partner financial abuse (IPFA), an area with unique security challenges, is required. In this work, we explore the challenges of automating support for identifying IPFA in complaints received by many financial institutions, and we demonstrate that complaints are rich sources of threat intelligence for understanding financial abuse. Our contributions include a concrete definition of IPFA, a corpus of examples of possible IPFA, and the identification of financial products and risk factors associated with IPFA.


3. Anti-Privacy and Anti-Security Advice on TikTok: Case Studies of Technology-Enabled Surveillance and Control in Intimate Partner and Parent-Child Relationships
    Miranda Wei, Eric Zeng, Tadayoshi Kohno, Franziska Roesner
    University of Washington, Carnegie Mellon University
    Abstract Modern technologies including smartphones, AirTags, and tracking apps enable surveillance and control in interpersonal relationships. In this work, we study videos posted on TikTok that give advice on how to surveil or control others through technology, focusing on two interpersonal contexts: intimate partner relationships and parent-child relationships. We collected 98 videos across both contexts and investigated (a) what types of surveillance or control techniques the videos describe, (b) what assets are being targeted, (c) the reasons that TikTok creators give for using these techniques, and (d) the defensive techniques discussed. Additionally, we make observations about how social factors, including social acceptability, gender, and TikTok culture, provide critical context for the existence of this anti-privacy and anti-security advice. We discuss the use of TikTok as a rich source of qualitative data for future studies and make recommendations for technology designers around interpersonal surveillance and control.


4. Hate Raids on Twitch: Echoes of the Past, New Modalities, and Implications for Platform Governance
    Catherine Han, Joseph Seering, Deepak Kumar, Jeffrey T. Hancock, Zakir Durumeric
    Stanford University
    Abstract In summer 2021, users on Twitch were targeted by “hate raids,” a form of attack that overwhelms streamers' chatrooms with hateful messages, often using bots and automation. Using a mixed-methods approach, we combine a quantitative measurement of attacks across the platform with interviews of streamers and third-party bot developers. We present evidence that some hate raids were highly targeted, hate-driven attacks, but we also observe another mode of attack that resembles networked harassment and subcultural trolling. We show that streamers who self-identify as LGBTQ+ and/or Black were disproportionately targeted and that hate raid messages were most commonly rooted in anti-Black racism and antisemitism. We also document how these attacks elicited rapid community responses, both in bolstering reactive moderation and in developing proactive mitigations for future attacks. We conclude by discussing how platforms can better prepare for attacks and protect at-risk communities while considering the division of labor between community moderators, tool-builders, and platforms.


5. SnuggleSense: Empowering Online Harm Survivors Through a Sensemaking Process
    Sijia Xiao, Amy Mathews, Jingshu Rui, Coye Cheshire, Niloufar Salehi
    UC Berkeley
    Abstract Interpersonal harm is a prevalent problem on social media platforms. Survivors are often left out of the traditional content moderation process and face uncertainty and the risk of secondary harm when seeking outside help. Our research aims to empower survivors in a critical early stage of addressing harm: the sensemaking process. We developed SnuggleSense, a tool that empowers survivors by guiding them through a reflective sensemaking process inspired by restorative justice practices. Our evaluation found that SnuggleSense empowers participants by expanding their options for addressing harm beyond traditional content moderation methods, helping them understand their needs for restoration and healing, and increasing their engagement and emotional support in addressing harm for their friends. We discuss the implications of these findings, including the importance of providing guidance, agency, and information in survivors' sensemaking of harm, as well as the educational effect of the reflection process within online communities.


6. Account Security Interfaces: Important, Unintuitive, and Untrustworthy
    Alaa Daffalla, Marina Bohuk, Nicola Dell, Rosanna Bellini, Thomas Ristenpart
    Cornell University, Cornell Tech
    Abstract Online services increasingly rely on user-facing interfaces to communicate important security-related account information: for example, which devices are logged into a user’s account and when recent logins occurred. However, there has been no investigation into whether these interfaces work well. We begin to fill this gap by partnering with a clinic that supports survivors of intimate partner violence (IPV). We perform qualitative analysis of transcripts of interviews in which clinic consultants and survivors seek to infer the security status of survivors' accounts. Our findings show that these interfaces suffer from a number of limitations that cause confusion and reduce their utility. We then experimentally investigate the lack of integrity of the information contained in device lists and session activity logs for four major services. For all the services investigated, we show how an attacker can either hide accesses entirely or spoof access details to hide illicit logins from victims.


7. Designing Safer Systems for Digital Intimacy
    Vaughn Hamilton, Gabriel Kaptchuk, Allison McDonald, Elissa M. Redmiles
    MPI for Software Systems, Boston University
    Abstract Sexual intimacy and expression are a prevalent yet understudied component of digital behavior: an estimated 1 in 200 people globally will work in the commercial sex industry during their lifetime, and more than 80% of adults have sexted. Despite being common activities, many risks plague those who engage in sexual intimacy online. For example, the theft and resharing of intimate media is a significant and growing form of violence, and online platforms are hostile to sexual expression and to sex workers, regardless of the legality or nature of the content. In this talk, we will discuss the unique safety challenges of digital intimacy by synthesizing a broad literature on the digital needs and experiences of sex workers. We will further draw on emerging work to describe opportunities for technical innovations that can meaningfully help sex workers, as well as others who face similar threats, navigate digital intimacy with increased safety.