IEEE Access, 2026 (SCI-Expanded, Scopus)
As cybersecurity threats grow more sophisticated, the integration of Large Language Models (LLMs) into defensive and analytical systems is transforming the field. This paper presents a PRISMA-guided bibliometric and thematic review of 149 studies published between 2015 and 2025, including 117 peer-reviewed journal and conference articles, examining publication trends and dominant research themes in LLM-enabled cybersecurity. The review is organized around five research questions: (i) secure incorporation of LLMs into cyber threat intelligence workflows; (ii) hybrid architectures for privacy-preserving, real-time threat detection; (iii) LLM-enabled secure code remediation; (iv) adversarial misuse and dual-use risks; and (v) multi-layer defense strategies addressing prompt injection, model inversion, and data poisoning. Drawing on over 100 primary studies, the analysis highlights key trends, methodological innovations, and recurring vulnerabilities. Notable developments include decentralized trust-enhanced frameworks, context-aware remediation systems, and simulation-based red teaming; however, gaps persist in adversarial robustness, standardized evaluation, and ethical governance. By mapping research across technical, operational, and policy dimensions, this review provides a structured basis for advancing trustworthy, resilient, and secure LLM deployments in high-stakes cybersecurity contexts.