Briana Vecchione is a technical researcher with the AI on the Ground program, where she studies sociotechnical auditing and accountability in algorithmic systems, with a recent focus on large language models (LLMs) for social and emotional support. Her work examines how people live with and make sense of AI-mediated support, tracing how chatbot interactions shape expectations of care, forms of meaning-making, and practical help-seeking in everyday life. Her current research explores how LLM-based systems behave in emotionally vulnerable contexts: how users interpret and act on what these systems provide, how safety boundaries are communicated or blurred, and how product incentives and design choices shape the experience of support. Her broader agenda advances sociotechnical auditing methods for algorithmic systems, spanning measurement, evaluation design, and audit tooling in real-world governance settings, while connecting empirical findings to accountability frameworks for policy, organizational oversight, and public-interest evaluation.
Briana holds a PhD in information science from Cornell University, and her work has been published in venues including Communications of the ACM, ACM CHI, and ACM FAccT, where her team received a Best Paper Award. Her research has been supported by Google, Meta, Microsoft, the National Science Foundation, the MacArthur Foundation, the Mozilla Foundation, and the Notre Dame–IBM Tech Ethics Lab.