HATS Workshop
August 23, 2026
Workshop on Designing Human-AI Teaming for Security @ SOUPS 2026
Exploring how to balance human agency and AI autonomy in high-stakes cybersecurity decision-making.
Position Paper Submission (Coming Soon)
Workshop Topic & Goals
The Workshop on Designing Human-AI Teaming for Security (HATS) addresses a central challenge for modern cybersecurity: as AI systems become more agentic, guaranteed human oversight is eroding.
HATS brings together researchers and practitioners to examine how human-AI teaming can strengthen security operations while preserving meaningful human agency in decision-making.
Objectives
- Co-create design principles for human-AI teaming in cybersecurity decision-making.
- Formulate actionable interaction design recommendations for research and practice.
- Build interdisciplinary networks that continue beyond the workshop.
Call for Participation
We invite short position papers and contributions related to human-AI teaming in cybersecurity.
- Submission deadline: June 1, 2026
- Notification to authors: June 15, 2026
- Camera-ready deadline: June 25, 2026
Submission portal link will be added soon.
To participate, please submit a two- to four-page position paper on decision-making and human-AI teaming in cybersecurity. Submissions must follow the SOUPS formatting template. At least one author of each accepted paper must attend in person. Accepted papers will be published as workshop proceedings.
Program
The workshop takes place on Sunday afternoon, August 23, 2026, and follows a half-day format.
- Part 1: Shared context and lightning rounds — We begin with welcome and introductions, followed by short lightning talks and group formation.
- Break
- Part 2: Collaborative design — Groups run a guided co-design session (double-diamond inspired), then share outputs in a vernissage-style gallery walk, followed by open discussion and synthesis.
Topics of Interest
Example themes include (but are not limited to):
- Designing appropriate automation levels and agentic AI in cybersecurity
- Automation bias, trust calibration, and employee perceptions of AI decisions
- Human-centric frameworks and workflows for shared security decision-making
- AI-augmented threat hunting, signal processing, and SOC triage
- Technical, psychological, interpersonal, and societal risks in human-AI security workflows
- Error handling in AI-driven security tools and organizational implications
- Deployment barriers and facilitators for human-AI collaboration in enterprise security
- Human-AI teaming opportunities to address cybersecurity workforce shortages
Organizers
Lorin Schoni
ETH Zurich
Alexandra von Preuschen
CISPA
Neele Roch
ETH Zurich
Tarini Saka
Max Planck Institute for Security and Privacy (MPI-SP)
Luisa Jansen
University of Bern
Adrienn Toth
ETH Zurich
Marlene Wagner
ETH Zurich
Steffen Holter
ETH Zurich
Verena Zimmermann
ETH Zurich
Noe Zufferey
ETH Zurich