I started writing an article about the deep roots of the intelligence-policy divide before Kent and Kendall and ended up writing an article about how Kendall was actually right, and Kent was wrong.
The intelligence-policy divide goes way back, long before Kent and Kendall were even twinklings in their fathers' eyes. It started before the invention of modern intelligence agencies altogether, during the early modern period in European history.
Charles Tilly's famous aphorism, "war made the state, and the state made war," is easily repurposed for intelligence. Though people have employed spies forever, it wasn't until the rise of administrative monarchies that the powerful could use what we think of today as intelligence.
The proliferation of printing presses and the growth of literacy that followed them allowed for the slowly modernizing courts of 16th-century Europe to weave comprehensive webs of intelligence collection for the first time.
Before then, reading was considered a 'free art' and taught separately from writing, which was considered a mechanical skill—and a low one at that. In other words, most people who could read couldn't write.
The surge in literacy gave rulers access to many more clerks, granting them much more administrative power. They used this power to establish communications with far-flung diplomatic posts and to staff code-breaking offices that were soon gathering information of all sorts.
Never before had so few been able to access the thoughts of so many, so swiftly. It was, in the words of one scholar, a “decisive point of no return in human history.”
The autocratic rulers of early modern Europe viewed the gathering of information as their private concern, seeing information itself as their private property. The same was true for the military commanders who aimed to further their rulers’ interests in war.
For all involved, the use of intelligence was a very personal affair, inseparable from other elements of statecraft or generalship that were then so wrapped up in personal relationships. This meant that these early intelligence apparatuses were rudimentary, ad hoc, and impermanent.
By the onset of the long 19th century, the role of the decision-maker as arbiter of truth was well established both in the halls of power and in the command tent. In the military, the commanding general was the primary, perhaps the only, person who assessed and analyzed intelligence.
This custom would begin to be challenged by the century's end, though, when commanders and cabinet secretaries began to struggle to keep pace with rapidly proliferating newspapers and telegraph stations.
By the Napoleonic era, the embryonic intelligence networks of the early modern period had grown dramatically; nowhere more so than in France itself. France was at the leading edge of statecraft and military technology, and Napoleon was a voracious consumer of intelligence.
He received operational and tactical intelligence from his marshals, but for strategic and political intelligence he relied upon the Section Statistique, probably the first permanent organization charged with the collection and dissemination of political, diplomatic, and economic information.
Still, at the outset of this period, a savvy commander could cope with the complexity of affairs. Napoleon was himself the archetypal wielder of the coup d'oeil, the commander's innate appreciation of the situation and knowledge of the best course to take.
Napoleon’s arch-rival, the Duke of Wellington, also relied upon newly available volumes of intelligence, but as with the emperor, there was no question who made the assessments.
During the Peninsular War, "…all intelligence came to Wellington…the appraisal of it was his alone…nor do these reports appear to have been summarized, abstracted, or collated before they reached him. What collating was done was almost certainly done by Wellington himself."
In the aftermath of the Napoleonic era, there was still a strict separation between intelligence and analysis. Intelligence was largely synonymous with information, and just as a state's information was the personal property of the ruler, an army's information belonged to the commander.
Clausewitz, who more than anyone established the paradigm of war within which we still operate to this day, spent his youth fighting (and losing to) Napoleon's military machine.
Despite his first-hand experience of the efficacy of Napoleon’s military, however, he nonetheless held ‘intelligence’ in low regard. In On War (1832), he devoted a slim, single chapter to intelligence, defining it as “every sort of information about the enemy and his country.”
He dismissed most of it as false: not only useless but actually detrimental. He devoted far more attention to what he called critical analysis, the "tracing of effects to their causes," but thought it was wholly the commander's purview.
Clausewitz’s rival, Baron Antoine de Jomini, had a more positive appraisal of intelligence. A Swiss officer who had served in both the French and Russian militaries, Jomini had also seen first-hand the good use Napoleon had made of intelligence.
Based upon these experiences, he wrote The Art of War (1838), a sort of how-to book that was instantly more popular than Clausewitz’s own—particularly with Americans.
Jomini encouraged commanders to use every measure at their disposal to gather intelligence, albeit with some of Clausewitz’s caution. Jomini too stressed the importance of analysis, but there was no question about who would be performing it:
“As it is impossible to obtain perfect information by the methods mentioned, a general should never move without arranging several courses of action for himself.”
Governments adapted to the growing complexity of international affairs in this period by establishing permanent intelligence bureaus for the first time. Prussia and Russia were producing “objective intelligence summaries containing strengths and dispositions” by 1830.
Britain established an Intelligence Branch within its War Office in 1873, an Indian Intelligence Branch in 1878, and an Irish Special Branch in 1883. The United States formed its Office of Naval Intelligence in 1882, followed by the Army’s Military Intelligence Division in 1885.
But despite the establishment of permanent agencies with staffs in the hundreds, the assessment and analysis of intelligence remained the sole purview of generals and prime ministers.
George Armand Furse, a British Army colonel, plainly characterized the status quo in his 1895 book Information in War. Furse declared that all information belonged to the commander alone and:
“it is only for him to draw proper conclusions from a thorough consideration of the total information gathered from every possible source.”
The staff, he wrote, "…would be descending into a matter which is quite beyond their province were they to try and influence his decision by a too forward submission of their views."
"When so invited, they may, possibly, from the knowledge they possess of the actual situation, be in a position to offer some valuable suggestions; still, it behooves them to refrain from doing so until they are asked.”
By the outbreak of the First World War in 1914, intelligence staffs had made the study of foreign adversaries their specialty. These staffs produced volumes of reports on subjects like order of battle and geography.
As Michael Herman explains, military commanders now had to rely at least partly upon the staff's judgment in addition to their own assessments.
The scale of the First World War was unprecedented in human history. Never before had organizational skill combined with communications and transportation technologies to allow armies to kill so many, in so many places at once—with machine-like efficiency.
With armies numbering in the millions and logistics tails that wrapped around the world, no commander could effectively keep track of every variable any longer, driving the final nail into the coffin of the Napoleonic coup d'oeil.
In other words, war—and by extension, the politics of grand strategy—had grown too complex for decision-making principals to effectively manage without the help of agents whom they would never completely trust.
This brings us to the principal-agent problem, a concept from economics that helps to explain organizational dysfunction.
An authority—the principal—gives resources and guidance to agents to act on its behalf. But the principal can't be certain those agents will always act in its interest.
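The misalignment can be sketched in a toy model (my own illustrative numbers, not from the article): under a flat wage, the agent bears the full cost of effort and captures none of its value, so the effort level the agent chooses for itself diverges from the one the principal would prefer.

```python
# Toy principal-agent model (hypothetical numbers for illustration).
# The principal values the agent's effort; the agent, paid a flat
# wage, only bears effort's cost. Each picks its preferred effort.

efforts = [e / 10 for e in range(0, 21)]  # candidate effort levels 0.0..2.0

def principal_net(effort, wage):
    # Output is worth 10 units per unit of effort; the principal pays the wage.
    return 10 * effort - wage

def agent_net(effort, wage):
    # Flat wage minus a quadratic personal cost of effort.
    return wage - effort ** 2

wage = 5.0
agent_choice = max(efforts, key=lambda e: agent_net(e, wage))
principal_wish = max(efforts, key=lambda e: principal_net(e, wage))

print(agent_choice)    # 0.0 -> with a flat wage, the agent shirks entirely
print(principal_wish)  # 2.0 -> the principal wants maximum effort
```

The gap between the two choices is the problem in miniature: without monitoring or outcome-based incentives, the principal cannot assume the agent's effort will track the principal's interest.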
This is one reason why policymakers prefer to fill their staff with those who owe them some measure of personal loyalty. It’s difficult for them to trust faceless experts they don’t know.
The principal-agent problem is exacerbated by the web of personal and professional obligations that crisscross any large bureaucracy, particularly in the eyes of clientelist principals who believe agents are supposed to be serving their personal interests.
Richard Hofstadter noted in his seminal Anti-Intellectualism in American Life that the “rise of the experts” produced anxiety, fear, and anger in a population that was just then realizing that it was no longer as independent as it liked to believe itself to be.
“In the original American populistic dream,” he wrote, “the omnicompetence of the common man was fundamental and indispensable. It was believed that he could, without much special preparation, pursue the professions and run the government.”
As problems grew more complex, Hofstadter’s ‘common man’ realized he was increasingly reliant upon specialized expertise to make sound decisions.
In intelligence and security, it was previously the ruler or commander who held the responsibility for assessment and judgment, just as they held the ultimate responsibility for decision.
It wasn't until the advent of modernity, when the volume of information became too great for any individual, no matter how brilliant, to grasp, that they were forced to offload some of that cognitive work to others.
You can follow @ZaknafeinDC.