Adaptation for a New Geopolitical Era
We must improve our strategic and tactical adaptation, particularly in the wake of the events of last week. Algorithmic support to military learning and adaptation at all levels will help.
There is currently a lot of doom and pessimism about the global security environment and the post-WW2 alliance system in Europe. Increasing consideration is also being given to the likelihood of a similar geopolitical shift in American relationships with its partners in the Pacific.
But one thing is certain. All of us are going to have to learn and adapt to the new strategic circumstances. At the same time, the events of the past week have elevated - not reduced - the chances of military conflict in both Europe and the Pacific. Whether it is strategic adaptation or tactical adaptation, we all need to be better at it.
To that end, the following is the text of a speech that I gave at the Australian National University in Canberra last night that explores military and national security learning and adaptation, how it can be improved with AI, and how we might apply this meshed human-AI adaptive stance to degrade and corrupt enemy learning and adaptation.
My journey of learning about adaptation began over two decades ago. In fact, just over 22 years ago, I returned from two years at Quantico where I attended the Marine Corps Staff College and School of Advanced Warfighting. I was posted to military strategy branch in Russell Offices. As we all know, in Russell Offices, the most important functions share the fifth floor with the CDF and Secretary. In those days, military strategy branch was on the fifth floor.
Early in my time there, my boss the brilliant John Blackburn asked me to take a brief from some scientist. At the appointed time, the scientist turned up for a half hour briefing on her research on adaptation and the military. Well, being a scientist, there was no way a briefing was going to be restricted to just 30 minutes. I recall at the two-hour mark asking how much longer the briefing had to go.
Now you might wonder why I waited until the two-hour mark. The reason was that I was being exposed to something new and fascinating. And ever since then, I have sought to learn and apply the theories of adaptation, complex adaptive systems, and organisational change in my command and staff appointments.
I based the concept of operations for our 1st Reconstruction Task Force in Afghanistan on learning and adaptation. As the lead staff officer for the Army’s Adaptive Army initiative in 2008 and 2009, the combination of theoretical knowledge and practical application was crucial. Since then, I have continued to write about it, study it in theory and on the ground in places like Ukraine, Israel, Taiwan, Iraq, Afghanistan and in our own military institutions.
The interaction between military forces, be it training activities in peacetime or the most violent interaction during war, provides many opportunities for individuals and institutions to learn. Learning and adaptation is one of the ways that military commanders and institutions seek to reduce uncertainty[i] and the potential for tactical and strategic surprise.
But, as the multi-millennia history of military affairs demonstrates, not every military organisation has the learning culture necessary to recognise the need for change and then conduct disciplined, multi-level adaptation.[ii] It demands an array of different leadership, training, educational, technological and cultural elements that are put in place and practiced in peace, so that the institution may be reflexively adaptive when war eventually occurs.
This learning and adaptation culture is perhaps the most important element of a military organization. The ability to adapt has always been important, but the pace of 21st century technological and geopolitical change makes it more important than ever. The events of last week in Europe are just one example of this.
Rapid technological change is driving faster adaptation cycles, but it also provides part of the solution to the challenge of recognising change, developing solutions, sharing them and repeating that process constantly and consistently. In the past decade, as the impact of artificial intelligence has become clearer and more compelling, it occurred to me that the meshing of adaptation theory and AI might provide an even better way for individuals and institutions to learn, adapt, improve and succeed in modern strategic competition and war.
AI offers the chance to improve all aspects of adaptive cycles for individuals, institutions and nations. But it also offers the chance to understand, and interfere with, enemy adaptation cycles in an approach called Counter Adaptation. The aim of this is to degrade the impact of enemy coalition learning communities, such as the one we have seen emerge between Russia, Iran, North Korea and China.
So, the aim of my talk this evening is to propose an evolved concept for multi-level, military adaptation, through the fusion of new learning processes and Artificial Intelligence (AI), with the aim of speeding up, and enhancing the quality of, military adaptation and strategic decision-making in peace and war.
Adaptation is a Fundamental Institutional Imperative
The exploration of adaptation by military institutions has resulted in the development of a range of concepts that underpin the understanding of how adaptation occurs and how it can be applied. One of these is the concept of adaptive cycles.
In military literature, the best-known adaptive cycle is Colonel John Boyd’s OODA (observe-orient-decide-act) loop. Boyd’s fascination with gaining advantage through reacting and manoeuvring faster than an opponent was to constitute the basis for nearly everything he thought and did later. Boyd’s 1976 paper “Destruction and Creation” synthesized his ideas and theories to that date. His 1977 “Patterns of Conflict: Warp X” briefing contained the start of the OODA loop philosophy. Boyd continued to develop this thesis until the mature OODA loop concept appeared in a 1978 briefing entitled “Patterns of Conflict: Warp XII.” Boyd’s work has had significant influence in the military institutions of the United States and beyond.
By the late 1980s, Boyd’s concepts received advocacy from theorist William Lind and U.S. Marine Corps General Al Gray. The advocacy by Lind and Gray, widespread debate inside and outside the Corps, and the ideas of Boyd finally coalesced in the seminal publication, Warfighting, in 1989.
The 2009 publication by the Australian Army, called Adaptive Campaigning, examined the short- and medium-term applications of adaptation on the battlefield and in other military support functions. It proposed an Adaptation Cycle, based on the premise that “the complexities of the modern battlespace are such that it cannot be understood by remote analysis alone; rather, detailed situational understanding will only flow from physical interaction with the problem and success is achieved by learning from this interaction.”[iii]
But what does this look like in practice? Well, there are five key things that need to occur.
First, military institutions must build and sustain environmental awareness: This includes fields such as geopolitics, national policy, demography, technologies, and national and institutional relationships.
Second, they must develop a view of what is likely to succeed in that environment, from the tactical to the strategic level. The development of this view of what is likely to result in success and the extensive testing of such views by military organizations is crucial.
Third, military organisations need to make changes that get them closer to their view of fitness and learn from those changes. This includes new and evolved doctrine and organisations.
Fourth, they must retain and share knowledge, in the institution and in individuals, about the information that improves their chances of success. This includes the ability to collect and absorb lessons, disseminate the implications of these lessons (new tactics and strategies, evolved training and education) and continue to learn based on the interaction of the institution with its environment.
Finally, military organisations need to measure success and failure of engagement with the environment: This is the capacity of an institution to gauge its actions in moving toward this definition of fitness, which leads to further change in institutional and individual actions, objectives, and notions of suitability.
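The five steps above can be caricatured, very loosely, as a software loop. The sketch below is purely illustrative - every function name and number is my own assumption, not doctrine or a real decision-support tool: an agent repeatedly senses a noisy environment, forms a view of what "fitness" requires, changes toward it, records the lesson, and measures the result.

```python
import random

def sense(environment, noise=0.1):
    """Step 1: environmental awareness - a noisy estimate of the true state."""
    return environment + random.uniform(-noise, noise)

def plan(awareness):
    """Step 2: form a view of what is likely to succeed (here: match the estimate)."""
    return awareness

def change(posture, target, rate=0.5):
    """Step 3: make changes that move closer to the view of fitness."""
    return posture + rate * (target - posture)

def record(lessons, posture, score):
    """Step 4: retain and share knowledge for later iterations."""
    lessons.append((posture, score))

def measure(posture, environment):
    """Step 5: gauge success of engagement (smaller gap means fitter)."""
    return -abs(environment - posture)

def adaptive_cycle(environment=1.0, iterations=20):
    posture, lessons = 0.0, []
    for _ in range(iterations):
        awareness = sense(environment)
        target = plan(awareness)
        posture = change(posture, target)
        record(lessons, posture, measure(posture, environment))
    return posture, lessons

posture, lessons = adaptive_cycle()
```

Even this caricature makes the essential point: the institution’s posture converges on its environment only through repeated interaction, and the quality of step one - sensing - bounds how good the outcome can ever be.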
Modern technology and the new wars of the 21st century have sped this process up to historically unprecedented speeds.
Uncrewed systems, with growing AI capabilities, have been at the forefront of many examinations of adaptation in the wars in Ukraine and the Middle East. Multiple studies have examined not only the massive expansion in the number of drones used in war, but the extraordinarily fast learning and adaptation cycle in the improvement of drone capabilities and the expanding number of functions they are used for.[iv] As Oleksandra Molloy notes in her recent study of drones in the Ukraine War: “The rapidly evolving nature of modern warfare in Ukraine necessitates an accelerated cycle of innovation, which currently ranges from a week to approximately three months.”[v]
There are other examples of rapid innovation and adaptation in modern war. The Ukrainian capacity to mesh civil and military sensor networks and analytical capacity[vi] on the battlefield, in the air defence environment and in other national security endeavours is another important case study of adaptation. So too are the Russian developments to improve their electronic warfare capacity to degrade the performance of western precision munitions[vii] and minimise the impact of different kinds of Ukrainian drones on the frontline.[viii]
More recently we have seen the reports about Israel’s use of AI to speed up its targeting cycle and the Pentagon’s application of AI to enhance its kill chains. In the past month, we have also been surprised by the emergence of DeepSeek-R1, an AI model which, when copied by Western developers, may offer cheaper AI much closer to the coalface for many military institutions.
Despite these advances, adaptation does not always guarantee success. Systems have demonstrated adaptive capacity but have still suffered partial or catastrophic failure. For example, the U.S. Army, which demonstrated the capacity to adapt to the Cold War central European front and use the tactics developed there successfully against Iraq in 1991, then failed to quickly recognise changed circumstances after capturing Baghdad in 2003.
In the Russian system, the collection and sharing of lessons is hampered by a fear of reporting failure and a culture of centralised command. This was examined in a Royal United Services Institute report on preliminary lessons from the war in November 2022. The report described how the ‘reporting culture’ of the Russian Army was deficient because it “does not encourage honest reporting of failures.”[ix] Anyone who is perceived to have failed is normally replaced or punished. For more senior leaders in the Army, failure can result in important missions being stripped from the organization they command.
This provides useful targetable vulnerabilities which I will discuss shortly.
Wartime Adaptation is Built on Peace Time Developments
But learning and adaptation is not just a wartime concern. War is normally only a small proportion of the life of any military professional. Many military personnel can spend their entire careers in a military organisation and not participate in an operational deployment. More importantly, it is the processes, technologies, leadership philosophies and cultures put in place between wars that provide the foundation for military effectiveness and adaptation in war. In general, the military institution that a nation begins a war with is not the military institution that it wins with.
Monitoring the readiness[x] of a military institution between wars, a process that has traditionally been subject to ‘gaming’[xi], as well as process and data corruption,[xii] can also be vastly improved with new approaches to ensure governments and senior military leaders better understand their deterrence and defence capabilities. Decision-making on other peacetime functions, such as testing options for different force structures and equipment procurements, training and education, logistics and personnel management and the strategic management of alliance interactions might also be improved through better adaptive processes that employ AI.
This also applies to our adversaries. Potential enemy states are rapidly absorbing and deploying new technologies, and their associated new doctrines, which must be understood and countered in peace and war. Russia has demonstrated a capable, if uneven, ability to learn and adapt in Ukraine.[xiii] China has demonstrated a well-developed adaptive stance over the last couple of decades.[xiv]
The question is: how might we influence and degrade this?
We must degrade enemy and competitor adaptation through AI supported counter-adaptation
In their 1990 book Military Misfortunes, Eliot Cohen and John Gooch explored significant military failures over the past one hundred years, producing failure matrices that identify the critical pathways to misfortune and failure. In seeking to adopt a more systemic approach to their analysis of failure, Cohen and Gooch defined the three types of errors that can result in either simple or complex failure: failure to learn, failure to anticipate, and failure to adapt.
Counteradaptation seeks to induce this failure to adapt—or at least prevent effective change—in our adversaries.
Counteradaptation that is informed by AI would need to focus on attacking the five elements of adaptation discussed earlier. These operations should seek to deny an adversary the capacity to effectively adapt to a friendly force’s strategy, presence, and activities. This will reduce the enemy’s range of options against friendly forces, as well as degrade the enemy’s fitness for operations and their capacity to influence friendly activities. Further, counteradaptation operations should also decrease friendly predictability during the conduct of military operations.
Counteradaptation has five components:
First, it must degrade Adversary Environmental Awareness. The aim is to ensure that the adversary has a qualitatively poorer awareness of the environment than friendly forces do. Examples include deception, signature management, and information operations.
Next, it must Influence the Adversary’s Notions of Fitness. We must influence how an adversary might exploit the picture they have of the strategic and operational environment.
Third, we need to Shape, Influence, and Corrupt Change Mechanisms. We must induce (or reinforce) in our competitors and adversaries what Peter Senge has called an organizational learning disability. Given the current techno-authoritarian learning community of Iran, North Korea, Russia and China, this must focus on corrupting international as well as national learning systems. They already have institutional behaviours such as centralisation and fear of reporting failure; we must reinforce this behaviour.
Fourth, we need to Corrupt Sources of enemy Corporate Lessons. A competitor’s ability to adapt to friendly force operations is reliant on their capability to collect and then disseminate information about friendly activities. Interfering with the enemy’s ability to learn more about friendly operations and denying their capacity to share what they do learn are essential to counteradaptation.
Finally, we must Monitor, Degrade, and Influence their feedback Loops. We will require an ongoing assessment process with adaptive measures of success and failure to ensure we are moving toward our objectives while influencing our adversary’s ability to adapt.
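To make the intended effect concrete, the components above can be modelled in a few lines of code. The model below is entirely my own illustrative assumption, not analysis of any real force: an adversary’s learning is caricatured as a simple estimator chasing our force posture, and we then apply two of the components - degrading environmental awareness (deception inflates their sensing noise) and corrupting change mechanisms (fear-driven, centralised reporting slows how fast lessons change behaviour).

```python
import random

def adversary_learning(true_posture=1.0, noise=0.05, update_rate=0.5,
                       cycles=15, seed=0):
    """Residual error of an adversary's estimate of our posture after
    a fixed number of learning cycles. Lower error = better adapted."""
    rng = random.Random(seed)
    estimate = 0.0
    for _ in range(cycles):
        observed = true_posture + rng.uniform(-noise, noise)  # their awareness
        estimate += update_rate * (observed - estimate)       # their change mechanism
    return abs(true_posture - estimate)

# An unmolested adversary adapts well...
baseline_error = adversary_learning()
# ...while one under counteradaptation pressure (noisier sensing, slower
# change mechanism) ends each engagement with a poorer fit to reality.
countered_error = adversary_learning(noise=0.6, update_rate=0.05)
```

The numbers are toys, but the dynamic is the point: counteradaptation does not need to stop enemy learning outright - degrading its inputs and slowing its change mechanisms is enough to leave the adversary persistently behind events.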
But…There are Many Risks in Algorithmic Adaptation Support
In a recent investigation of Israeli use of AI in its Gaza operations in 2023 and 2024, it was found that the headquarters of the Israeli Defence Force, in seeking to cope with the war’s rapid tempo, “turned to an elaborate artificial intelligence tool called Habsora — or “the Gospel” — which could quickly generate hundreds of additional targets.”
However, critics have proposed that the focus on AI in intelligence analysis and targeting was one of the reasons Israel was surprised by Hamas on 7 October 2023. Human analysts who had warned of the Hamas attacks were ignored because the algorithmic assessments had not provided the same findings. And, in generating so many targets so quickly, there are unlikely to be sufficient humans to validate them all, thereby increasing the risk of civilian casualties.[xv]
There are an array of other risks with algorithmic support to adaptation.
One risk is that user adoption of AI for supporting analysis and decision-making about adaptation is too low for it to make a viable contribution. This might be the result of a lack of trust by users, or of user interfaces that are too complex. It might be addressed by building enhanced technological literacy in an institution, as well as by better design of the kinds of command and control and decision-support tools used by personnel in a military organisation.
A second risk is corruption of data or analytical AI used in adaptation processes. This might be caused by environmental factors such as damage to servers by weather or adversary action. But it might also be the result of deliberate intrusion and attack by a state or non-state actor. While security protocols will be critical to addressing this, so too will basic cyber hygiene training for personnel. Ultimately this will be a risk for any organisation that actively employs AI in its decision-support processes. The ultimate manifestation of this risk would be an adversary corrupting AI-supported adaptation processes without our knowledge, resulting in increasingly maladaptive behaviour at different levels of an institution.
Another risk is that in the multitude of different programs to acquire different AI for institutional functions, algorithms that support learning and adaptation are given a low priority and essentially slip to the back of the line for funding. This has been a constant challenge in many training and education programs in military organisations and there are precedents for such behaviour.
In addressing these risks, it is also necessary to define where AI should not be used in adaptation support. There are a range of authorities and responsibilities currently assigned to humans, such as the authorities for lethal force or life-saving medical decisions or even crucial strategic and political decisions, that may be ‘carved out’ and not assigned to AI as the final arbiter.
Adaptation and AI: Building a Meshed Human-AI Adaptive Capacity
The foundational hypothesis of my talk this evening is that all of these aspects of military and national adaptation processes might be significantly improved by meshed human-AI adaptation. To achieve this, I propose that there are five elements in which military and national security institutions might wish to invest their time, people and technology.
The First Element: Embrace adaptation in the institutional culture. Adaptation, if it is to have a strategic rather than a local influence, normally doesn’t just happen – it must be led. While there are individual imperatives in some circumstances for rapid learning and adaptation, which is where the term ‘adapt or die’ comes from, even these most immediate of learnings can and should be shared to enhance the overall survivability of teams and larger formations. Senior commanders and other leaders must nurture people and formations that are actively learning and capable of changing where it is safe and effective to do so.[xvi] This culture must begin with clear statements about the leadership environment, and its tolerance for risk and new ideas. What leaders can and should do at every level to observe, collect, record and share lessons about combat and non-combat aspects of military affairs must be well defined and disseminated widely.
This must also be accompanied by definitions for acceptable failure, because failure is an integral part of learning. The tolerance levels for failure are likely to be different at various levels of military endeavour. For example, at the tactical level, there may be a greater tolerance for failure because the opportunities for learning are greater and the consequences for failure are less serious than they are at higher levels.
However, there will also be areas where failure cannot be tolerated, such as in unnecessary death of non-combatants or destruction to strategic assets, or in strategic decision-making where the stakes are much higher than on the battlefield. These ‘intolerable failures’ should also be clarified.
Adapted incentive frameworks will be required to encourage the risk-taking needed to improve military effectiveness. In essence, military organisations must mature their institutional cultures so that they approximate what Martin Dempsey has described in No Time for Spectators as “responsible rebellion”.[xvii]
In changing their systems to embrace learning through failure, military organisations must also achieve a balance between rapid learning on one hand and not rushing to failure on the other. Sometimes, initial lessons from the tactical level may not be indicative of wider changes in the character of war. There is a need for analytical processes that get the right lessons to the right people at the right time, without leaping on every new observation as some profound shift in warfare.
The Second Element: Scale AI support from individual to institution. There is unlikely to be a one-size-fits-all algorithm or process that can enhance learning and adaptation at every level of military endeavour. For example, the processes and context for the interaction of politicians and senior military leaders in policy and strategy discussions is very different from that of a tactical leader in the land, sea or air domains. Therefore, a virtual arms room of adaptation support algorithms will be necessary in any institution-wide approach to adaptation. At the tactical level, these algorithms will need to be very simple to train people on (otherwise they won’t use them) and easy to use by people who are tired, hungry and under constant time pressure.
Strategic level adaptation tools must support strategic level decision making in peace and war about force size and structure, readiness and posture, options for different capabilities to achieve strategic effects and the foundational strategic wargaming that underpins decision-making. These tools are likely to be very different to those used at lower levels but must be linked to them. AI-enabled adaptation systems will also be needed that better aggregate the multitude of lower-level observations and quickly communicate them to the right assessment agencies.
We must use AI to build better learning communities inside our international alliances. As such, I propose that algorithmically supported adaptation should comprise Pillar 3 of AUKUS.
Element Three: Know where adaptation-relevant data is found, stored and shared. In his 2018 book, The Fifth Risk, Michael Lewis wrote about how AI has been used to better manage farmland and predict weather in America. While the algorithms that underpin this were important, more crucial was the process to discover, connect and apply datasets from many different sources, including forgotten basements and older computers.[xviii] This is also the case in military institutions.
Over the past decade military institutions, as well as other government agencies, have discovered that they possess vast troves of digital data but have generally been poor stewards of that data. Data is held in different formats, in different security classifications with firewalls between different nations, services, and commands.
An enhanced adaptive stance in military institutions must have enhanced data awareness as a foundation. And while institutional measures will be an important element, it will also require data discipline in tactical units and by individuals. As such, data awareness and management will need to become one of the basic disciplines taught to military personnel. For example, a soldier in the field, as part of their morning routine, may need to add a data assurance check for their digitised systems to their daily weapon clean. The key challenge moving forward will “not be finding enough data in this environment. It will be in finding the right data that is accessible to the user community that needs it.”[xix]
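As a small illustration of what such a “data assurance check” could look like in software - a hypothetical sketch, with the file names and manifest format invented for the example - critical data files can be hashed and compared against a known-good manifest, so that corruption or tampering is detected before the data feeds any adaptation process:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def assurance_check(manifest: dict, root: Path) -> list:
    """Compare each file against its recorded digest; return the names that fail."""
    failures = []
    for name, expected in manifest.items():
        path = root / name
        if not path.exists() or sha256_of(path) != expected:
            failures.append(name)
    return failures
```

In practice such checks would be automated and tied into wider data stewardship regimes; the point is only that integrity verification is cheap, well understood, and teachable as a basic discipline alongside the weapon clean.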
Element Four: Set (and evolve) Measures of Effectiveness. If AI-supported adaptation is to work effectively, there must be measures of effectiveness to guide the direction adaptation takes and where the best AI investments should be made.[xx]
At the tactical and operational levels, winning battles more often is a crude yet proven method of measuring success. But more sophisticated measures are required because not every military activity is battle.
At the strategic and political levels, AI-supported adaptation processes might be capable of measuring improvements in civil-military interaction and decision-making. This may be one of the most vital functions of AI-supported adaptive processes. In peace and war, no single strategic decision is purely military or purely civilian or purely political.[xxi] AI should not only be supporting these interactions but also used to support learning about the interaction in civil-military relations and to assist in finding areas where interaction and decision-making might be improved. Indeed, AI might become an entirely new sub-theme in the broader study of civil military relations.[xxii]
The Fifth Element is Military process and doctrine stripping and reform. The observation and absorption of lessons needs to be part of normal military interaction rather than a separate and parallel ecosystem that often has difficulty inserting itself into strategic decision making. Military institutions have made progress in this regard in recent decades, although as Ukraine, Israel and Russia have demonstrated over the past several years, this can still be an uneven process depending on the risk that personnel are facing and the priorities they are given by their leaders.
Tactical learning must be intimately connected with strategic learning, and the two must have an interactive, two-way relationship. Technology can provide for this in many ways, but human processes must also evolve to improve this interaction. Procurement and personnel decision-making processes, which remain heavily reliant on humans, and humans in groups, to reach decisions and assign priorities, must evolve with the support of algorithmic learning support tools.
Crucially, the reduction of time and the number of senior defence committees might be needed to speed up decision-making processes, but this can only be led from the top. Government ministers and star-ranked officers must be capable and willing to lead a very different acceptance of risk in defence and military processes. Concurrently, these senior military and civilian leaders must better define acceptable failure in the learning process and accept more decision making at more junior levels in the interests of faster adaptation.
In conclusion, military organisations, at least those that wish to succeed in competition and conflict, cannot stand still. Military institutions need to be constantly adapting at different levels.[xxiii] And they must absorb AI-supported adaptive processes in peacetime that can then be applied to improve adaptation at multiple levels during war.[xxiv]
At the same time, military forces (and the wider national security enterprise) must be continually interfering with the capacity of competitors and adversaries to do the same. This forms an ongoing adaptation battle—and it is one that is conducted at every level of military and national security activities.
War remains a human endeavour, albeit one continually evolving due to the impacts of new technologies, different warfighting ideas, and geopolitics. And it will demand continued investment in the ideas and institutions that make up the military instrument of nations.
But ultimately, the reforms necessary to implement a meshed human-AI adaptive stance to improve decision-making at all levels will require visionary and disciplined leadership.
And that is something no algorithm can provide.
The footnotes for the speech are provided below.
Notes
[i] It is impossible to entirely remove uncertainty in war or in any human endeavor. In the military context, one of the best examinations of this is Carl von Clausewitz’s On War, published in the 19th century. It is difficult to appreciate the basic philosophical and doctrinal underpinnings of military institutions and their approach to war without reading Clausewitz’s work.
[ii] Millett and Murray, writing in Military Effectiveness: Volume 1, define military effectiveness as “the process by which armed forces convert resources into fighting power.” In my book War Transformed, I offer an updated definition: “the process by which military forces convert resources into the capacity to influence and fight within an integrated national approach.” Mick Ryan, War Transformed, Annapolis: Naval Institute Press, 2022. 130.
[iii] Australian Army, Adaptive Campaigning – Army’s Future Land Operating Concept (Canberra: Australian Army, 2009), 31.
[iv] See Oleksandra Molloy, Drones in Modern Warfare: Lessons Learnt from the War in Ukraine (Canberra: Australian Army Research Centre, 2024); Colin Christopher, The Evolution Of UAVs In The Ukraine Conflict (U.S. Army Training and Doctrine Command, April 2024), https://oe.tradoc.army.mil/2024/06/04/the-evolution-of-uavs-in-the-ukraine-conflict/; David Hambling, “Interceptors And Escorts: Drone Tactics In Ukraine Are Evolving Fast”, Forbes, 16 April 2024, https://www.forbes.com/sites/davidhambling/2024/04/16/interceptors-and-escorts-drone-tactics-in-ukraine-are-evolving-fast/; David Kirichenko, “The Rush for AI-Enabled Drones on Ukrainian Battlefields”, Lawfare, 5 December 2024, https://www.lawfaremedia.org/article/the-rush-for-ai-enabled-drones-on-ukrainian-battlefields; Elisabeth Gosselin-Malo, “Russian forces test flying flamethrower to target Ukrainian firedrones”, Defense News, 23 November 2024, https://www.defensenews.com/global/europe/2024/11/22/russian-forces-test-flying-flamethrower-to-target-ukrainian-firedrones/
[v] Oleksandra Molloy, Drones in Modern Warfare: Lessons Learnt from the War in Ukraine (Canberra: Australian Army Research Centre, 2024), 57-58.
[vi] I examined this with Clint Hinote in Empowering the Edge: Uncrewed Systems and the Transformation of US Warfighting Capacity, Washington DC: Special Competitive Studies Program, 2024, 7-9.
[vii] Isabelle Khurshudyan and Alex Horton, “Russian jamming leaves some high-tech U.S. weapons ineffective in Ukraine”, Washington Post, 24 May 2024. https://www.washingtonpost.com/world/2024/05/24/russia-jamming-us-weapons-ukraine/
[viii] These examples of adaptation, among others, are explored in Mick Ryan, The War in Ukraine: Strategy and Adaptation Under Fire, USNI Books, August 2024. https://www.usni.org/press/books/war-ukraine
[ix] Mykhaylo Zabrodskyi, Jack Watling, Oleksandr Danylyuk and Nick Reynolds. Preliminary Lessons in Conventional Warfighting from Russia’s Invasion of Ukraine: February–July 2022, London: Royal United Services Institute, 2022, 49.
[x] A useful precis on military readiness reports is Luke A. Nicastro, Military Readiness: DOD Assessment and Reporting Requirements (Washington DC: Congressional Research Service, 26 October 2022). https://crsreports.congress.gov/product/pdf/IF/IF12240/3
[xi] Theo Lipsky, “Unit Status Reports and the Gaming of Readiness”, Military Review, September-October 2020, 148-157.
[xii] Philip Wasielewski, “The Roots of Russian Military Dysfunction”, Foreign Policy Research Institute, 31 March 2023, https://www.fpri.org/article/2023/03/the-roots-of-russian-military-dysfunction/.
[xiii] Mick Ryan, “Russia’s Adaptation Advantage”, Foreign Affairs, 5 February 2024. https://www.foreignaffairs.com/ukraine/russias-adaptation-advantage
[xiv] Scott Tosi, “Xi Jinping’s PLA Reforms and Redefining Active Defense”, Military Review, September-October 2023, 87-101. https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/September-October-23/Active-Defense/Active-Defense-UA1.pdf; Eric Chan, “The Adaptation Battle: the PLA and Lessons from the Russia-Ukraine War”, Global Taiwan Institute, 28 June 2023, https://globaltaiwan.org/2023/06/the-adaptation-battle-the-pla-and-lessons-from-the-russia-ukraine-war/; Phillip Saunders and others (Eds), Chairman Xi Remakes the PLA, Washington DC: NDU Press, 2019.
[xv] Elizabeth Dwoskin, “Israel built an ‘AI factory’ for war. It unleashed it in Gaza.” The Washington Post, 29 December 2024. https://www.washingtonpost.com/technology/2024/12/29/ai-israel-war-gaza-idf/
[xvi] This senior advocacy is one of the essential elements of successful institutional learning and reform. For a useful case study involving the massive transformation of the US Army in the wake of the Vietnam War, see Don Starry, “To Change an Army”, Military Review, March 1983, 20-27.
[xvii] Martin Dempsey, No Time for Spectators: The Lessons that Mattered Most from West Point to the West Wing, New York: Missionday, 2020, 185-206.
[xviii] Renee Cho, “This is how artificial intelligence can help us adapt to climate change”, Global Center on Adaptation, 26 July 2019. https://gca.org/this-is-how-artificial-intelligence-can-help-us-adapt-to-climate-change/; Michael Lewis, The Fifth Risk, Penguin Books, 2018.
[xix] Mike Groen, Digits Collide. Commanders Decide. Command and Control in a Digitally Transformed Age (Washington DC: Special Competitive Studies Project, 2024), 9.
[xx] The requirement for measures of effectiveness to guide institutional learning and adaptation is explored in multiple books and reports. Key references, among others, include Williamson Murray and Allan Millett, Military Effectiveness, Volumes 1-3, Cambridge: Cambridge University Press, 1988; Meir Finkel, On Flexibility: Recovery from Technological and Doctrinal Surprise on the Battlefield, Stanford: Stanford University Press, 2007; and Frank Hoffman, Mars Adapting: Military Change During War, Annapolis: Naval Institute Press, 2021.
[xxi] Eliot Cohen describes this process as the unequal dialogue. See Eliot Cohen, Supreme Command: Soldiers, Statesmen, and Leadership in Wartime, New York: The Free Press, 2002.
[xxii] Civil-military relations scholars such as Risa Brooks have begun to explore this particular issue. See Risa Brooks, “Technology and Future War Will Test U.S. Civil-Military Relations”, War on the Rocks, 26 November 2018, https://warontherocks.com/2018/11/technology-and-future-war-will-test-u-s-civil-military-relations/; Risa Brooks, “The Civil-Military Implications of Emerging Technology” in Reconsidering American Civil-Military Relations, November 2020. Also, James Ryseff and others, “Exploring the Civil-Military Divide over Artificial Intelligence”, RAND Corporation, May 2022, https://www.rand.org/pubs/research_reports/RRA1498-1.html.
[xxiii] Mick Ryan, “Winning Modern Wars through Adaptation”, Futura Doctrina, 31 October 2024.
[xxiv] In his book, On Agility, Meir Finkel offers the following as one of the key military capabilities for the “swift and effective transition from peace to war”: “mechanisms that facilitates fast learning and the rapid circulation of lessons so that the entire military system is updated on, and informed of, potential surprises and their solutions”. Meir Finkel, On Agility: Ensuring Rapid and Effective Transition from Peace to War, Lexington: The University Press of Kentucky, 2020, 151-152.



