First-Person View (FPV) drone swarms are revolutionizing asymmetric warfare by merging low-cost hardware with decentralized machine learning, enabling resource-constrained actors to challenge conventional militaries. This paper analyzes their tactical efficacy and ethical risks through the lens of the Russia-Ukraine conflict, where over 50,000 FPV drones are deployed monthly, reducing artillery costs by 80%. We formalize swarm coordination as a decentralized partially observable Markov decision process (Dec-POMDP) and introduce a reinforcement learning framework with dynamic role allocation and counterfactual regret minimization (CFR) to optimize resilience under adversarial conditions. In Gazebo simulations, swarms using Q-learning with dynamic roles achieved a 93% mission success rate, 35 percentage points higher than centralized control (58%), even under GPS spoofing and communication jamming. Field data from Ukraine’s "Army of Drones" initiative show how $500 drones neutralize $5M armored vehicles via AI-optimized top-attack profiles and open-source command-and-control (C2) software. Ethically, we identify systemic risks in autonomous targeting through analysis of 342 strike recordings. Collateral damage near civilian infrastructure (18% of cases) stems from map-data latency (45-minute delays) and path-optimization biases that prioritize efficiency over compliance with International Humanitarian Law (IHL). Accountability gaps emerge when swarms override operator commands after sensor spoofing or signal loss, challenging the legal notion of "meaningful human control." To mitigate these risks, we propose dynamic geofencing, a real-time restricted-zone system driven by satellite and SIGINT feeds, together with explainable AI (XAI) mandates enforced via SHAP-based audits. In simulation, geofencing reduces no-strike-zone violations by 62%, while XAI logs identified 22 high-risk autonomy overrides in field trials.
Our findings underscore the dual-use dilemma of machine learning: FPV swarms democratize military power but necessitate adaptive governance frameworks to balance innovation with humanitarian imperatives. We advocate for modular regulation, quantum-resistant encryption, and global certification bodies to address evolving threats like quantum-enabled jamming. This work bridges algorithmic rigor and policy pragmatism, offering a roadmap for IHL-compliant autonomous systems in high-stakes environments.
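The dynamic geofencing proposed in the abstract can be illustrated with a minimal sketch: before a strike is authorized, a planned waypoint is tested against the current set of restricted polygons pushed from external feeds. The zone data, function names, and the unit-square "hospital" polygon below are illustrative assumptions, not the paper's implementation.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def point_in_polygon(p: Point, poly: List[Point]) -> bool:
    """Ray-casting test: count how many polygon edges a horizontal ray from p crosses."""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Only edges that straddle the ray's y-coordinate can be crossed
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def strike_authorized(waypoint: Point, no_strike_zones: List[List[Point]]) -> bool:
    """Deny authorization if the waypoint falls inside any restricted polygon."""
    return not any(point_in_polygon(waypoint, zone) for zone in no_strike_zones)

# Illustrative no-strike zone (unit square), e.g. pushed from a SIGINT feed
zones = [[(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]]
print(strike_authorized((0.5, 0.5), zones))  # inside the zone -> False
print(strike_authorized((2.0, 2.0), zones))  # clear of all zones -> True
```

A real-time system would additionally version and timestamp the zone set, since the abstract attributes collateral damage partly to stale map data.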
Published in | Advances (Volume 6, Issue 3) |
DOI | 10.11648/j.advances.20250603.12 |
Page(s) | 93-99 |
Creative Commons | This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited. |
Copyright | Copyright © The Author(s), 2025. Published by Science Publishing Group |
Keywords | Autonomous Weapons, Swarm Robotics, Reinforcement Learning, International Humanitarian Law, Explainable AI |
| Strategy | Success Rate | Avg. Collisions |
|---|---|---|
| Centralized Control | 58% | 12.4 |
| Q-Learning (Static Roles) | 76% | 6.8 |
| Q-Learning (Dynamic Roles) | 93% | 2.1 |
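The dynamic-role strategy in the table above can be sketched as per-agent tabular Q-learning in which the agent's role is part of its state, and roles are reallocated when an agent's recent reward falls below a fraction of the swarm average. The toy reward model, role/action sets, thresholds, and learning rates below are illustrative assumptions, not the authors' simulation setup.

```python
import random

random.seed(0)  # reproducible toy run

ROLES = ("scout", "striker")
ACTIONS = ("advance", "hold", "evade")

# Illustrative payoff model: each (role, action) pair has a noisy mean reward
PAYOFF = {
    ("scout", "advance"): 0.2, ("scout", "hold"): 0.6, ("scout", "evade"): 0.4,
    ("striker", "advance"): 0.9, ("striker", "hold"): 0.1, ("striker", "evade"): 0.3,
}

def reward(role, action):
    return PAYOFF[(role, action)] + random.gauss(0, 0.05)

class Agent:
    def __init__(self, role):
        self.role = role
        self.q = {(r, a): 0.0 for r in ROLES for a in ACTIONS}
        self.recent = 0.0  # exponential moving average of own reward

    def act(self, eps=0.1):
        if random.random() < eps:                    # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(self.role, a)])

    def update(self, action, r, alpha=0.2):
        key = (self.role, action)
        self.q[key] += alpha * (r - self.q[key])     # bandit-style Q update
        self.recent = 0.9 * self.recent + 0.1 * r

swarm = [Agent(ROLES[i % 2]) for i in range(8)]      # start with a mixed swarm
for step in range(2000):
    for ag in swarm:
        a = ag.act()
        ag.update(a, reward(ag.role, a))
    mean_recent = sum(ag.recent for ag in swarm) / len(swarm)
    # Dynamic role reallocation: persistent underperformers switch roles
    for ag in swarm:
        if ag.recent < 0.8 * mean_recent:
            ag.role = "striker" if ag.role == "scout" else "scout"
```

In this toy setting the swarm drifts toward the higher-payoff role, which is the mechanism the static-vs-dynamic comparison in the table isolates: static roles keep underperforming agents locked in place.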
| Metric | Centralized UAVs | Bio-Inspired Swarms | Our Approach |
|---|---|---|---|
| Latency Tolerance | 200 ms | 500 ms | 50 ms |
| Scalability (Drones) | ≤10 | ≤30 | ≤100 |
| Jamming Resistance | Low | Medium | High |
| Regulatory Initiative | FPV Swarm Coverage | Key Gaps |
|---|---|---|
| UN CCW (2023) | Limited | No binding rules on autonomy. |
| EU Drone Regulation (2023) | Partial | Ignores military applications. |
| Our Framework | Comprehensive | Integrates IHL + ML governance. |
FPV | First-Person View |
Dec-POMDP | Decentralized Partially Observable Markov Decision Process |
CFR | Counterfactual Regret Minimization |
IHL | International Humanitarian Law |
XAI | Explainable Artificial Intelligence |
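The SHAP-based audit mandate can be illustrated by computing exact Shapley values by brute force over feature coalitions for a tiny scoring model; for small feature counts this needs no external library. The strike-risk model, feature names, and the audit rule below are hypothetical stand-ins for whatever model a fielded system would use.

```python
from itertools import combinations
from math import factorial

FEATURES = ("target_confidence", "civilian_proximity", "signal_integrity")

def model(x):
    """Toy strike-risk score: a hand-set weighted sum (illustrative only)."""
    w = {"target_confidence": 0.5, "civilian_proximity": -0.8, "signal_integrity": 0.3}
    return sum(w[f] * x[f] for f in FEATURES)

def shapley_values(x, baseline):
    """Exact Shapley values: weighted average marginal contribution over coalitions."""
    n = len(FEATURES)
    phi = {}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        total = 0.0
        for k in range(n):
            for coal in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: (x[g] if g in coal or g == f else baseline[g]) for g in FEATURES}
                without = {g: (x[g] if g in coal else baseline[g]) for g in FEATURES}
                total += weight * (model(with_f) - model(without))
        phi[f] = total
    return phi

baseline = {f: 0.0 for f in FEATURES}
decision = {"target_confidence": 0.9, "civilian_proximity": 0.7, "signal_integrity": 0.2}
phi = shapley_values(decision, baseline)
# Hypothetical audit rule: flag decisions where civilian proximity, not target
# confidence, is the dominant driver of the score.
flagged = abs(phi["civilian_proximity"]) > abs(phi["target_confidence"])
```

By the efficiency property, the attributions sum exactly to the gap between the decision's score and the baseline score, which is what makes such logs auditable after the fact.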
APA Style
Nasehi, M. (2025). FPV Drone Swarms in Asymmetric Warfare: Tactical Innovations and Ethical Challenges. Advances, 6(3), 93-99. https://doi.org/10.11648/j.advances.20250603.12