AI and Critical Thinking
A new article from Dr Frank Hoffman exploring the many opportunities and risks of applying artificial intelligence (AI) to military decision-making.
This year, I will be inviting a range of experts on military affairs and strategy to write for Futura Doctrina. The subjects will encompass war, strategic competition, national strategy, technology, human capacity and other issues germane to learning from modern war and preparing for future conflict. The first guest contributor is Dr Frank Hoffman, who writes about the importance of aligning critical thinking skills and the employment of AI in military organisations.
If today’s undergraduate students (and future officers) are leveraging Generative AI tools extensively, future military officers may be commissioned with less developed critical thinking skills. It is not so much that AI will rewire our minds as that it will produce students with flabby thinking.
With all the hype and hyperbole about Artificial Intelligence (AI), it is hard to determine what these rapidly evolving technologies can and cannot deliver. Opinions range from AI posing an existential threat to being little more than useless machines. Many military institutions are now carefully exploring where AI might have payoffs. Conservative by nature, the armed services have more concerns and less consensus about AI applications due to the nonlinear character of warfare.
Ultimately, however, rapid algorithmic technological advances will be available to security leaders and provide valuable assistance in strategy and operational execution. This will not make traditional professional expertise redundant or replaceable. Quite the contrary, humans will have greater value than before. Nor should we be concerned that introducing AI-enabled systems will result in the end of audacity in military officers. But we should retain an acute appreciation for human factors in decision-making.
There are important political, moral and social dimensions of command (including purpose, leadership, empathy) that the human brain processes better than machine intelligence. However, there are also tremendous challenges of command in this age, including multi-domain complexity, cognitive overload, and accelerating temporal change that we must account for. In an age where seconds can impact outcomes, “the ability to think and act faster and more coherently than any adversary” is increasingly salient. Because of this, exploring the risks and opportunities of AI-enhanced military decision-making is now an urgent issue. The militaries that properly adopt these tools will hold a distinct warfighting advantage over opponents. The challenge is finding the proper changes in doctrine, structure, process, and education to achieve that advantage.
In this article, four questions are addressed: 1) What is critical thinking? 2) What is the impact of AI on critical thinking? 3) What are we expecting from AI? and finally 4) What should be done to enhance critical thinking and decision-making?
What is Critical Thinking?
Critical thinking is a key component of effective decision-making. It is foundational for good military judgment. To the military profession, critical thinking is a deliberative process of analyzing a problem by exploring available evidence and exposing basic assumptions, perspectives, and biases that may influence the judgment being made. Scholars contend that self-awareness and personal reflection about biases and past experiences are needed to excel at critical thinking. Self-awareness helps ensure that decisions are not colored by one’s own biases or cognitive “blind spots.” An officer’s capacity to think creatively requires both awareness of one’s own prevailing operative frameworks and openness to new ideas. Openness requires and values curiosity, creativity, and imagination. This mindset can be cultivated and trained, and a good deal of the curricula at command and staff colleges is geared to sharpening that mindset.
Enhanced critical thinking is a highly desired outcome. It is also recognized as a key component of, and necessary reform to, professional military education. The collective Joint leadership of the U.S. military highlighted critical thinking in their vision for professional military education (PME). They stated that “Our collective aim is the development of strategically minded joint warfighters, who think critically and can creatively apply military power…” They challenged the PME institutions with an objective that “All graduates should possess critical and creative thinking skills, emotional intelligence, and effective written, verbal, and visual communications skills” in support of strategy formulation and complex operations.
The implementation of this vision has been limited. A team of U.S. PME professors recently found the efforts falling short and concluded that the vision is not operative. They measured progress against the goals of this vision and concluded, “it is not clear that fundamental change in PME programs has actually occurred to accomplish these aims.” That vision was written at the dawn of the age of AI, but it stressed additive education in emerging disruptive technologies.
Impact of AI on Critical Thinking
The presumption behind AI is better informed and faster decision-making in competitive environments. Our larger interest in AI is about improving and accelerating critical thinking to inform judgment. Clausewitz’s conception of genius in action is predicated upon critical thinking and creativity. The underlying assumption in much of the investment in AI is that it will enhance critical thinking and professional judgment.
This assumption is challenged in some recent research. A recent study from Microsoft and Carnegie Mellon University found that popular AI chatbots may actually reduce critical thinking. Microsoft’s research suggests that the use of AI results in “diminished critical reflection” instead of deeper insights. A limited MIT study generated similar results. That research was not conclusive, but the implications reinforce startling reports from educators. Some teachers claim that the introduction of generative AI in schools is “killing critical thinking.” University professors warn that overreliance by students has reduced deep thinking and decision-making abilities, in short, AI makes you stupid. One commentator talks of AI’s “malevolent seduction: excellence without effort. It gives people the illusion that they can be good at thinking without hard work.”
More study is required, but the long-term implications for defence institutions are not reassuring. Over the next decade, the armed forces are likely to see increasing numbers of new officers who emerge from their formative educational years less able to skeptically challenge data outputs from generative AI systems. If today’s undergraduate students (and future officers) are leveraging Generative AI tools extensively, future military officers may be commissioned with less developed critical thinking skills. It is not so much that AI will rewire our minds as that it will produce students with flabby thinking.
AI and Decision Advantage
There is reason to be optimistic about progress with applications of AI models and functional agentic systems. They will be increasingly valuable in supporting commanders and their strategic and operational decisions. But the state of the art in AI remains fragile, even jagged. Still, the trend line of AI as a general-purpose technology with increased utility seems clear.
AI is now delivering impactful results in wartime operations, particularly in intelligence and targeting, and will see more concrete benefits for commanders and staff in the near future. With properly educated leaders and thoroughly vetted decision tools, we can anticipate value from a convergence of human and machine intelligence to make good judgments. Hence I agree with advocates claiming we face a paradigm shift “from tools that assist humans, to agents that actively pursue campaign objectives alongside humans.” The human commander should remain the dominant partner in this integrated relationship.
As noted by retired Lieutenant General Jack Shanahan, military command and control is best conceptualized as human-centric and tech-enabled. He acknowledges that there are risks in AI-enabled battle systems, but they must be balanced against the unmistakable evidence of human biases throughout military history. On balance, properly prepared officers can get more out of AI systems than can a team that is not supported by one. Human-machine teaming, or what some call “strategic centaurs,” should outperform individuals.
AI is still evolving and has yet to prove itself in our profession, but Large Language Models (LLMs) are making significant advances. There is a growing sense that AI can “revolutionize military decision-making.” Defence experts anticipate they will assist commanders particularly in the observation and orientation tasks of the Observe-Orient-Decide-Act Loop. In fact, as noted by the founding Director of the Pentagon’s Joint AI Center, advancements in generative AI will soon deliver significant value in joint planning. AI will expedite collaborative, real-time course of action (COA) development and analysis, and offer recommendations.
Such capabilities could improve and accelerate the development of strategic and operational orders, as well as the assessment of their implementation. It is anticipated that bespoke LLMs or agentic tools will also alert commanders when critical assumptions are invalid or risks are overlooked. This should be invaluable. As Mark Twain allegedly quipped, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” A useful model will underscore outdated or erroneous assumptions and question risks. Models may offer continuous risk assessments and be able to track the risk appetite of decision-makers. This suite of contributions enhances numerous aspects of command and control.
Another benefit from AI models should be COA analysis and enhanced group collaboration. The decision science and risk scholar Baruch Fischhoff of Carnegie Mellon University has described the value of unbounded discourse in complex decisions. Overly bounded and rational discourse can occur within single-discipline organizations or cultures that limit collaboration and learning. Decision-making in hierarchical institutions, where deference to rank and experience can be pronounced and where participants share a common background (career experiences, doctrine, education), can be less than optimal. This tends to enforce norms and boundaries that restrict discourse. A good LLM may help offset that and serve as a sort of red team against groupthink. Such systems could support the Design Movement’s efforts to push beyond methodical planning processes and promote creative solutions.
Finally, there could be applications that facilitate and accelerate the dynamic learning battle by identifying anomalies for investigation and altered praxis. Technology can play a part in supporting the speed of the adaptation competition, as seen in Ukraine. Automated data collection and analysis can generate feedback loops to spur the assessment and adaptation cycle in combat.



