A systematic review published in Frontiers of Engineering Management has mapped the dual role of large language models (LLMs), identifying them as powerful tools for innovation that also enable significant security threats and ethical risks. The study, which analyzed 73 key research papers, found that LLMs such as GPT, BERT, and T5, while transforming sectors from healthcare to digital governance, expose systems to cyber-attacks, model manipulation, misinformation, and biased outputs that can amplify social inequalities.
The research categorizes threats into misuse-based risks and malicious attacks. Misuse includes the generation of highly fluent phishing emails, automated malware scripting, identity spoofing, and large-scale false information production. Malicious attacks occur at both the data/model level—through techniques like model inversion, poisoning, and extraction—and the user interaction level via prompt injection and jailbreaking, which can access private training data or bypass safety filters to induce harmful content.
On defense, the study outlines three main technical approaches: parameter processing to reduce attack exposure, input preprocessing to detect adversarial triggers, and adversarial training using red-teaming frameworks to improve model robustness. Detection technologies like semantic watermarking and tools such as CheckGPT can identify model-generated text with up to 99% accuracy. However, the review notes that defenses often lag behind evolving attack techniques, pointing to a need for scalable, low-cost, and multilingual-adaptive solutions. The full findings are detailed in the publication available at https://doi.org/10.1007/s42524-025-4082-6.
The implications for business and technology leaders are substantial. Without systematic regulation and enhanced defense mechanisms, LLM misuse threatens data security, public trust, and social stability. The authors argue that technical safeguards must be paired with ethical governance, integrating transparency, verifiable content traceability, and cross-disciplinary oversight. Ethical review frameworks, dataset audits, and public awareness education are deemed essential to prevent misuse and protect vulnerable groups.
For industry, the development of robust defense systems could protect financial systems from sophisticated phishing, reduce medical misinformation, and maintain scientific integrity. Watermark-based traceability and red-teaming may become standard practices for model deployment. The researchers advocate for future work focused on AI responsible governance, unified regulation frameworks, safer training datasets, and model transparency reporting. If managed effectively, LLMs can evolve into reliable tools supporting education, digital healthcare, and innovation ecosystems while minimizing risks associated with cybercrime and social misinformation.


