Human-targeted cyber threats exploit human vulnerabilities to bypass security defenses, rather than attacking technology directly. For example, attackers use social engineering to trick individuals into revealing sensitive information, making technical defenses alone insufficient. Advances in AI have further amplified these risks by enabling attackers to generate highly convincing content at scale, making such threats increasingly difficult for both human users and advanced security systems to detect. Human-Targeted Cyber Threats and Defenses (HumSec) focuses on threats designed to exploit human vulnerabilities, their countermeasures, and the evaluation of both risks and defenses from the human perspective. We bring together researchers, practitioners, and the public to explore these threats and countermeasures, raising awareness of their growing impact in today's rapidly evolving digital landscape.
We welcome submissions that explore the role of human vulnerabilities in cybersecurity, including security risks and defense strategies, system design, organizational practices, and governance and policy, as well as work that evaluates or mitigates security threats from a human perspective.
Important Dates
Submission deadline: June 25, 2026 11:59PM AOE
Notification: July 21, 2026 11:59PM AOE
Camera-ready deadline: July 29, 2026 11:59PM AOE
Topics of Interest (but not limited to)
Understanding, Measuring, and Characterizing Human-Targeted Cyber Threats
Human-subjects studies (e.g., surveys) on online fraud, scams, phishing, misinformation/disinformation, harassment and online abuse
Measurement studies that yield new insights into Human-Targeted Cyber Threats (e.g., bottlenecks)
Analysis of attack infrastructure (e.g., phishing kit ecosystems)
AI-driven generation of human-targeted attacks
Emerging threats that exploit human vulnerabilities (e.g., urgency, fear, curiosity)
Studies identifying gaps between existing defenses and real-world threats
Governance, policy, and ethical challenges in human-centered security
Countermeasures to Mitigate Human-Targeted Threats
AI-powered defense mechanisms against human-targeted attacks
Machine Learning or other advanced techniques for detecting and mitigating human-targeted threats (e.g., phishing detectors)
Human factors in the design, usability and effectiveness of defense mechanisms
Security and privacy in human-centric systems
Adversarial robustness of defense mechanisms
Security education and training
Systematization of Knowledge (SoK) papers
Other empirical research related to the above topics
Submission Guidelines
Submissions must not substantially overlap with previously published papers or with works that are simultaneously submitted to a journal or a conference/workshop with proceedings.
Submission. Please submit your papers via EasyChair.
Format. Papers must be written in English, submitted as a single PDF file, anonymized for double-blind review, and must follow the official LNCS template.
Length. Long papers are limited to 16 pages and short papers to 8 pages, excluding references and appendices. Note that reviewers are not required to read the appendices.
Publication & Presentation. Accepted papers will be published in joint proceedings by Springer. At least one author of each accepted paper must register for the workshop and present the work orally or as a poster.
Open Science Expectations
We encourage authors to release the code, data, and other materials needed to reproduce their work on a public platform (e.g., GitHub or Zenodo), under an open-source license. However, we acknowledge that open sharing is not always possible, for example when the work involves malware samples, human-subjects data that must be protected, or proprietary data obtained under an agreement that precludes publication. In such cases, authors can explain why the data cannot be released in the "Open Science" appendix (which is not subject to the page limit at submission time).
Use of AI
The use of AI-generated content, including but not limited to text, figures, images, and code, must be disclosed in the acknowledgements section, which does not count toward the page limit at the time of submission. The use of AI tools solely for language editing or grammar improvement is considered common practice and is not covered by this policy. In such cases, disclosure is not required.
General Chairs
Ying Yuan, Örebro University, Sweden
Eugenio Nemmi, Sapienza University of Rome, Italy
PC Chairs
Ying Yuan, Örebro University
Eugenio Nemmi, Sapienza University of Rome
Qingying Hao, ShanghaiTech University
Program Committee
Giovanni Apruzzese, Reykjavik University
Mauro Conti, University of Padua & Örebro University
Federico Cernera, Sapienza University of Rome
Zilong Lin, University of Missouri-Kansas City
Ruofan Liu, National University of Singapore
Luigi V. Mancini, Sapienza University of Rome
Alberto Maria Mongardini, Technical University of Denmark
Margie Ruffin, Spelman College
Angelo Spognardi, Sapienza University of Rome
Francesco Sassi, Sapienza University of Rome
For questions, please contact: ying.yuan@oru.se and eugenio.nemmi@uniroma1.it