Book Review: The Decision to Attack: Military and Intelligence Cyber Decision-Making

Aaron Franklin Brantly

University of Georgia Press, 2016, 226 pp.

Reviewed by Dr. John G. Breen, Distinguished Chair for National Intelligence Studies, U.S. Army Command and General Staff College, and CIA Representative to the U.S. Army Combined Arms Center

“The Russian government hacked into the e-mail accounts of top Democratic Party officials in order to influence the outcome of the 2016 U.S. Presidential election.” This is a clear statement of guilt, definitive and direct, with little room for doubt. An attack like this demands a response. Doesn’t the manipulation of an American election warrant some sort of retaliation? Could this be an act of war? So why isn’t the U.S. (at least overtly) doing more in response?

Well, read closely the official statement about the Russian hacking from the Department of Homeland Security and the Director of National Intelligence.1 In colloquial intelligence-speak, it doesn’t really say the Russian government is definitively responsible for the compromise. The statement notes merely “confidence” that the Russian government “directed” the compromise and offers as evidence only that these attacks were “consistent with the methods and motivations of Russian-directed efforts.” The careful use of indefinite phrases such as “consistent with,” “we believe,” or “we judge” leaves inconvenient room for reasonable doubt and plausible deniability about who actually conducted the attacks and who is ultimately accountable.

These types of assessments, as dissembling assurances go, sound eerily familiar, à la the 2002 Iraq WMD National Intelligence Estimate. Was there WMD in Iraq or not? Before the invasion, the community certainly said “we judge” that there was. Think of it this way: no mafia don could be convicted in a court of law by a prosecutor asserting only that the state was “confident” the individual was guilty. Offering as proof that the murder was “consistent with the methods and motivations of Mafia-directed efforts” is not sufficient. Did the don order the hit, conduct the act himself, or is he being blamed as a convenient scapegoat? These intelligence assessments simply do not seem to provide the unambiguous attribution necessary to reasonably contemplate retaliation.

This lingering ambiguity is a key issue addressed in Aaron Brantly’s 2016 book, The Decision to Attack: Military and Intelligence Cyber Decision-Making. An Assistant Professor at the U.S. Military Academy, Brantly provides a detailed academic exploration of cyber warfare, seeking to better understand how states interact within cyberspace. He posits that states should generally be considered rational actors and therefore will rank order their likely actions in cyberspace based on positive expected utility, i.e., how successful these actions will be compared to the risks they engender (expected utility theory). Decision to Attack is an excellent treatment of this crucial domain, packed densely with insight into a deceptively pithy 167 pages.
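Brantly’s expected-utility logic can be made concrete with a toy calculation. The sketch below is purely illustrative -- the action names, probabilities, and payoffs are invented for this review, not drawn from the book -- but it shows how a rational state might rank candidate actions when anonymity (a low probability of attribution) suppresses expected costs:

```python
# Toy illustration of expected-utility ranking of candidate cyber actions.
# All probabilities and payoffs are invented for illustration only.

def expected_utility(p_success: float, benefit: float,
                     p_attribution: float, cost_if_attributed: float) -> float:
    """EU = expected gain from success minus expected loss from attribution."""
    return p_success * benefit - p_attribution * cost_if_attributed

# (name, p_success, benefit, p_attribution, cost_if_attributed)
actions = [
    ("email compromise",        0.8, 10.0, 0.1,  20.0),
    ("infrastructure sabotage", 0.5, 40.0, 0.6, 100.0),
    ("do nothing",              1.0,  0.0, 0.0,   0.0),
]

# Rank actions from highest to lowest expected utility.
ranked = sorted(actions, key=lambda a: expected_utility(*a[1:]), reverse=True)
for name, *params in ranked:
    print(f"{name}: EU = {expected_utility(*params):+.1f}")
```

In this hypothetical, the small, hard-to-attribute compromise outranks both inaction and the massive attack, mirroring Brantly’s argument that anonymity makes smaller attacks rational while scale and complexity make “massively damaging” attacks easier to attribute and thus less attractive.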


The research encapsulated in Decision to Attack suggests the key determinant in a state choosing to undertake an offensive cyber-attack is anonymity. That is to say, a state’s ability to keep its attack secret as it is being undertaken, as well as its capacity to hide or at least obscure the origin of the attack afterward. There is no barrier to action if there is no risk of retribution. As Brantly notes, the hurdle for choosing offensive cyber-attacks is extremely low when a state can assume some level of anonymity:

“Anonymity opens Pandora’s box for cyber conflict on the state level. Without constraints imposed by potential losses, anonymity makes rational actors of almost all states in cyberspace. Anonymity makes it possible for states at a traditional power disadvantage to engage in hostile acts against more powerful states.... Because anonymity reduces the ability to deter by means of power or skill in most instances, the proverbial dogs of war are unleashed. If the only constraints on offensive actions are moral and ethical, why not engage in bad behavior? Bad behavior in cyberspace is rational because there are few consequences for actions conducted in the domain.”2

Brantly does offer some hope that states will not rationally engage in “massively damaging” cyber-attacks, given that with greater complexity and scale these attacks become less likely to be kept truly unattributable. His assertion seems to be that these states, particularly those smaller states with less traditional or conventional power (military and otherwise), will focus on small to mid-range types of attacks. That said, even seemingly minor attacks can apparently lead to unintended significant impacts, certainly if these pile up over time -- a cyber domino effect. For example, a relatively small-scale compromise of an individual’s email account, followed by propagation of resultant inflammatory “revelations” seeded into the press and on-line social media, might lead to the upending of an otherwise democratic election.

Given the demonstrated importance of secrecy and obfuscation in the cyber domain, Brantly appears to argue in Decision to Attack that cyber-attacks should be considered a type of covert action. He points out that the U.S. government’s approach to cyberspace has to this point relied on the military, with Admiral Mike Rogers currently the commander of both the National Security Agency and Cyber Command (CYBERCOM). To Brantly, this indicates the president has given the military, and not the Central Intelligence Agency (CIA), the lead as the main covert operator in the cyber domain. He notes critically that this arrangement may run counter to Executive Order 12333, which provides lanes in the ethical/moral superhighway for the intelligence community. Brantly indicates, though, that the Department of Defense’s capacity to address the scale of the problems identified in cyber makes this designation “appropriate.”3

While perhaps not the major focus of Brantly’s research, the implications of relying on the military to conduct these types of offensive operations are perhaps worth further exploration. There are reasons the CIA was designated and utilized during the Cold War as the primary organization responsible for covert action. In sum, it seems to have everything to do with plausible deniability. If you are caught by an opposing state in the conduct of covert action while in uniform, this might be considered an act of war. Is it any less so in cyber? Perhaps.

In March 2015 the CIA embarked on an unprecedented “modernization” effort designed to “ensure that CIA is fully optimized to meet current and future challenges,” largely by pooling analytical, operational and technical expertise into ten new Mission Centers.4 A new operational component -- the Directorate of Digital Innovation (DDI) -- was also added to the existing four Directorates: Operations, Analysis, Science and Technology, and Support. The DDI is said to be focused on “cutting-edge digital and cyber tradecraft and IT infrastructure.”5 Public statements from the Agency have highlighted the importance of culture, tradecraft, and knowledge management in this new Directorate, stressing the DDI’s role in support of the CIA’s clandestine and open source intelligence collection missions.6,7

In a July 2016 speech to the Brookings Institution, CIA Director John Brennan discussed the mission of the newly created DDI and the risks posed by cyber exploitation. For example, Brennan suggested the Arab Spring revolts were influenced by on-line social media’s ability to swiftly facilitate social interaction and cause destabilization, that cyber could be used to sabotage vital infrastructure, or might be used by terrorist organizations to indoctrinate potential lone wolf actors.8 CIA of course looks to exploit this same cyber domain to its own ends. In CIA’s vernacular, destabilization then is “covert cyber-based political influence”; sabotage is a “cyber-facilitated counter proliferation covert action”; and indoctrination becomes an “on-line virtual recruitment.” What distinguishes these actions -- anarchic destabilization versus covert cyber-based political influence -- is the intent, noble or ignoble, of the perpetrator.

Cyber espionage then at least might be thought to follow many of the same tradecraft norms and to be constrained by many of the same rules and self-imposed restrictions as real-world, “Great Game” espionage and especially some types of non-lethal covert action. For example, if caught in the midst of a recruitment operation against a foreign diplomat in some capital city, that country’s government typically would simply kick the offending CIA officer out of the country, declaring the individual persona non grata. One could say there are systems in place, most of them informal, that allow for a bit of espionage to be conducted without causing conflagration. This is especially true when dealing with near-peer competitors, as evidenced by decades of Cold War intrigue. CIA was chosen, for example, to conduct covert action in Afghanistan during the Cold War in order to avoid an act-of-war incident. CIA’s actions against the Soviet occupation were no less deadly, but use of foreign cutouts and misattributable materiel, i.e., tradecraft, allowed for plausible deniability and lack of attribution. The Soviets could “judge” all they wanted that the U.S. was behind their mounting losses, but without proof, this was meaningless.

CYBERCOM, the closest military counterpart to the DDI, was created as a joint headquarters in 2009. Unlike the CIA’s DDI clandestine collection posture, CYBERCOM’s stated mission appears much more broadly focused on traditional (though still cyber) offensive operations and network defense, i.e. “ensure U.S./Allied freedom of action in cyberspace and deny the same to our adversaries.”9 U.S. military joint doctrine on cyberspace operations is filled with otherwise conventional military terms such as “fires” and “battle damage.” A cyber weapon deployed against an adversary on the cyber battlefield can be called a “payload.”

The military’s cyber effort also appears to be somewhat encumbered by familiar bureaucratic challenges, not the least of which involves nominal joint efforts to operate in a domain not easily divvied up amongst services accustomed to “owning” a particular geographic space, i.e., ocean, air, land. As noted by an Army strategist working on the Joint Staff Directorate for Joint Force Development: “The opportunity for one service to infringe on, or inadvertently sabotage, another’s cyberspace operation is much greater than in the separate physical domains. The command-and-control burden and the risk of cyberspace fratricide increase with the number of cyberwarriors from four different services operating independently in the domain.”10

How much greater then is the challenge in deconflicting operations between these disparate DoD cyber operators and those in the intelligence community and CIA’s DDI, all engaged on the same cyber field of play? If CIA has worked for years to gain cyber access to a particular source of protected information, and another actor wants to “deliver a payload” against that target, who decides which mission is most important? Do we choose the intelligence collection activity we need to better understand the enemy or is it the cyber-attack that cripples an adversary’s critical capability? And is there an advantage in having the military or a civilian organization conduct either of these covert action operations? These and many other important questions that spin off from reading Decision to Attack await further exploration.

Ultimately this decision-making should perhaps extend beyond the inside view taken by the operators from CYBERCOM or the CIA’s DDI. The process will hopefully include policy-makers struggling with how and why to use cyber as either an offensive tool or a tool of espionage. Brantly provides the reader with these delineations, offering definitions of, for example, cyberattack versus cyber exploitation. He also provides a solid starting point for a discussion about which of these approaches is most appropriate and a framework in which to understand our own and our adversaries’ potential decision-making processes. There’s more to be said, and in this evolving domain there is much more to understand, but Decision to Attack should be in the library of those hoping to make the right call when it comes time to act. IAJ


1 odni-election-security-statement

2 Brantly, Aaron Franklin. The Decision to Attack: Military and Intelligence Cyber Decision-Making, University of Georgia Press, 2016, pp. 158-159.

3 Ibid., pp. 123-124.

4 achieves-key-milestone-in-agency-wide-modernization-initiative.html

5 speaks-at-the-brookings-institution.html

6 Ibid, cia-achieves-key-milestone-in-agency-wide-modernization-initiative.html


8 Ibid, brennan-speaks-at-the-brookings-institution.html


10 Graham, Matt. “U.S. Cyber Force: One War Away,” Military Review, May-June 2016, Vol. 96, No. 3, p. 114.


BOOK REVIEW InterAgency Journal Vol. 7, Issue 3, Fall 2016


Sticks and Stones – Training for Tomorrow’s War Today

Written with Col. Thomas Cook


‘I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.’ – Albert Einstein

Technology is great, when it works the way we want it to. Over the last couple of years it seems the ever-mounting stream of hacks could leave even the most stoic of technologists cringing. As researchers at the Army Cyber Institute at West Point, our task is to be forward thinking and anticipate the hill after next. We are one part of the Army’s robust effort to address cyberspace issues of today and tomorrow. Along with our cross-service and cross-agency partners we are making progress: we are working our way through a highly disruptive era in technology and politics to find solutions ensuring the security of the United States. At the same time, as we step forward into the complexity of a fully integrated future, we must not lose sight as a military of the fundamentals of fighting and defending the security and interests of the nation. The more the tools and gadgets of modern warfare are challenged by state and non-state actors, the more critical it becomes that our men and women in uniform maintain the fundamental skills of warriors from previous generations.

Networked warfare and cyber warfare are but two of the many catchphrases that have risen to prominence over the last couple of decades. These are concepts that we must continue to build on to improve our precision, coordination and efficiency as defenders of the nation’s security and interests. Yet despite these advances, the US military must also be prepared to operate in a world where the lights do not turn on, engines do not start, and all our efficiencies leave us with only the rifle in our hands. Our ships, armor, aircraft, satellites, and almost all other military systems are highly dependent on digital systems vulnerable to attack.

As a nation, the US must expect the unexpected by training our military to perform in the absence of technologies they have never lived without. Our incoming officer and enlisted corps are digital natives: they leverage GPS, laser guided munitions and other modern tools expertly. But as recent hacking incidents on cars, ships, supply systems, GPS, and even aircraft indicate, the diversity of threats posed to our systems is immense. Calls to fix the code and secure the systems are being heard loud and clear, and the Army and other organizations like ours are working day and night to solve a persistent stream of cyber challenges. It is important to remember, however, that we are solving problems even as our cyber surroundings change under our feet.

As we write better code, build more robust hardware and develop better cyber warriors for both offensive and defensive operations, our ability to observe, orient, decide, and act across the services and within them will become more robust. At the same time, we must recognize that we are creating puzzles that others will try to solve and that eventually, given enough time, energy and luck, most puzzles are solvable. Technology has enhanced the capabilities of the Army and her sister services. Under the continued direction of President Obama, Secretary Carter, and foresight of Admiral Rogers and Generals Alexander, Cardon, Hernandez and many others, we as a nation have established the foundations of a robust national approach to cybersecurity. This is an evolving process and the Department of Defense (DoD), Federal, state, local, and private entities will necessarily continue to build capabilities improving our aggregate resilience. The problem of cybersecurity is not isolated to the DoD alone and as a nation we must work together to strengthen our mutual security and resilience.

Across the armed services there is yet another need, specific to our profession. Just as medieval castles layered their fortifications, so too must we train and develop redundancies in our men and women and the systems they use. These redundancies should be well adapted to a world in which the technology we have grown so dependent on fails us. The services must recognize that our need to train in, and for, cyberspace related conflict does not obviate necessary skills found in the historical foundations of military arts. Skills such as celestial navigation, non-computer aided mathematics and many more are critical to maintaining operational effectiveness in the absence of the tools upon which we now so often depend. Robots, drones, and all the science-fiction that has become science-fact are nothing compared to the determined will of a well trained and educated, highly motivated and creative Soldier.

Enter the Policy and Legal Void

Soldiers are down range and have suites of tools available to them that they cannot use to their full capability. They are not technically limited, but rather constrained by the authorities and prerequisite policies established in a pre-digital age. We tell them to go and defeat ISIS, al-Qaeda, or pick another future adversary, but they must do so with their hands tied behind their backs. Make no mistake, as a nation we are currently involved in a global conflict. The conflict is not defined by traditional weapons, but by bits and bytes traversing fiber lines and airwaves. This global information war collides with many of the values of Western democracies, and the societal constraints of authoritarian regimes. The robust constraints on governmental instruments serve a valuable purpose, yet at the same time our Soldiers in the field are struggling to navigate complex legal and policy waters while corporations are drowning in data that might inform or provide context for a variety of mission sets. The volume and velocity of this data is only set to grow as globally the number of Internet enabled devices increases from approximately 17 billion to 50 billion and beyond. At the beginning of the digital age it is imperative that we, as a society, begin discussing the future we are rapidly entering.

Constraints are pivotal for maintaining the fundamental civil rights Americans cherish. Civil rights, to include various liberties such as privacy, free speech, and freedom of religion among others, are challenged by data repositories that eliminate anonymity and the ability to be forgotten and to forget. Yet we as a society are fooling ourselves if we believe that when we order Google to delete us from search results, or Facebook to remove our profile, that data ever really disappears. The vast majority of US Internet users are simultaneously consumers and products in a complex digital ecosystem that will only become more complicated with the expansion of the Internet of Things into our homes, offices, and even our bodies. We and our political elite can pretend to be neo-Luddites, but we are not. We are voracious consumers of innovation. We innovate without significant thought to consequence, and in so doing often fail to assess the risks of the world we are designing.

As we demand and consume innovation, we ignore the fact that we are retaining the policies and laws of yesterday, and in the process shackling those in our society to whom we have assigned the responsibility for protecting us. As we innovate and adapt, so too do our enemies, with terrorists, state adversaries, and criminal networks preying upon our innovation and learning to innovate and adapt as we do. All the while, we tell ourselves that if we provide the military and law enforcement with the policies and legal structure to defend us, we will be entering into some Orwellian nightmare. Yet, in many respects the nightmare is of our own making. We bleed trillions of dollars a year to cyber criminals and state espionage campaigns, and willingly allow those who engage in political violence, child pornography and other nefarious behaviors to run rampant through the systems that we once thought would usher in a bright new era for humanity.

General Michael Hayden asserted during a talk after his time at the NSA that he would go right up to the line in using every legal authority granted him and the agencies under his control, but that he would go no further. He said the agencies of the federal government were designed to operate within a rule of law system beholden to the will of the people. Edward Snowden, the EFF, the ACLU, and others have challenged the extent to which federal authorities extend control over systems used by the US and allies. They have challenged the concept of secret courts and classified policy directives. Some have even indicated that individuals from the intelligence community (IC) engaged in illegal activities beyond the scope of even secret courts and classified policies. Around the margins there will always be those who violate the intent of law and policy. However, the vast majority of members of the IC are well intentioned individuals who seek to protect their fellow citizens.

The basic distribution of relevant national security and law enforcement authorities within the United States Code is divided between Title 10 (Military), Title 18 (Law Enforcement), and Title 50 (Intelligence). The U.S. Code has been evolving in various forms since World War II, and was designed primarily in a pre-digital era in which it was logical to provide clear lines of demarcation between domestic and foreign, law-enforcement, military and intelligence. These lines are blurred in a world in which terrorists recruit from abroad, and plan operations against the Homeland in both conflict and non-conflict zones. These lines are strained by states engaging in cyberattacks against critical infrastructure, and espionage environments that span military, civilian, and intelligence spheres.

I have met with police agencies asking for intelligence capabilities, and with military organizations requesting the ability to view online media accounts with known terrorist connections. In the present environment, the tools available to track and engage terrorists are robust, but authorities require the military, IC, and law enforcement to engage in a dance along a legal and policy tightrope that slows the process down and increases risks. Moreover, because each entity is so ingrained within its authorized framework, each is limited in its ability to think effectively across the lines and anticipate what other agencies and entities need. Often they are further constrained by not knowing what they are truly allowed to share, when they are allowed to share it, and under what conditions. To some extent fusion centers provide valuable bridges between stovepiped institutions. Additionally, entities often embed personnel within one another’s structures, but even these attempts to provide avenues for communication fail to fully mitigate the problems faced.

The constraints imposed by the various titles within the cyber environment are particularly frustrating when one realizes that the tools available to the corporate sector for marketing and sales often exceed the capabilities of both intelligence and law-enforcement. Critics are correct in challenging the assertions of the government and its agencies that these tools are capable of preventing all attacks. But as the volume of data increases, and as the skill and efficiency of the community increase in tandem with advances in technology and in the volumes and types of data, it is likely that these challenges will be met head on and solutions found.

We can and must educate the citizenry about the world we are rapidly entering -- a world in which we carry mobile supercomputers that far exceed the capabilities of the devices used to land astronauts on the moon. We excrete data from our phones, our watches, our credit card transactions, our communications, our homes, and soon our cars. We produce zettabytes of data, and we are only at the beginning of the digital age. We can fool ourselves into saying we can remain private, we can remain anonymous, we can remain hidden from the future, but the reality is far different. The US is operating in a policy and legal void based on a static technological environment of yesterday. Yet the environment is not static; it is changing at a nearly exponential rate.

Credit needs to be given to EFF, CDT, the ACLU, and others for challenging the conversation, but this challenge needs to go further and extend to our schools and our local, state, and federal legislative and legal bodies. If we want to maintain the current constraints on law enforcement, intelligence and military institutions, we must do so knowing these constraints are self-imposed and carry certain risks, just as there are risks associated with the removal of constraints. We must acknowledge that the constraints we impose are primarily limited to those to whom we have delegated responsibility for our protection both at home and abroad, and not to the companies we so willingly give our data to on a daily basis. We must recognize that we will continue to generate and consume enormous amounts of data both as consumers and products in a complex socio-technical-economic ecosystem that is still in its infancy. It is only by confronting the reality of both the present and the future that we can begin to address the current status of laws and policies and determine where they need to be.

The Value of Intelligence and Secrets

Secretary of State Henry Stimson famously remarked in 1929 that “Gentlemen don’t read each other’s mail.” Just a couple of years later, during the 1930 London Naval Conference and the 1932 Geneva Disarmament Conference, Secretary Stimson would come to understand and appreciate the value of national security intelligence and would reverse himself. The value of intelligence to both the United States and our allies would become of paramount importance during World War II and in the Cold War to follow. Whether through the breaking of the Enigma codes, the Japanese Purple codes, or the use of double agents in the United Kingdom, intelligence saved lives and provided strategic and tactical advantages.

Intelligence is not a new state activity, but one that is thousands of years old, rooted in classical antiquity. For as long as humans have walked the earth they have sought advantages over one another and their environment. In our modern hyper-partisan environment -- an era of liberal democracy and utopian goals of radical transparency -- many are quick to condemn our intelligence community (IC). We decry their sources and methods, but even more so we decry their failures when they infrequently occur. As a nation and people, we present our IC with a paradox: we ask it to provide perfect protection, but we work hard to limit the techniques and tools by which it can achieve our stated objectives.

As a liberal democracy, we have every right to constrain the state which we establish. Members of our intelligence community recognize and respect this right. As General (Ret.) Michael Hayden has said numerous times, and discussed in a book on the subject, the intelligence community plays to the edge of acceptable behavior. They go right up to the legal, ethical, and moral lines that we establish, but no further. They serve at the pleasure of those whom we elect to represent our interests. Their mission depends upon collecting information and developing intelligence products to keep us safe. This requires the manipulation of human assets (spies), the manipulation of computers and similar devices, the breaking of signals, and the collection of images and signatures from a variety of sources. These activities are accomplished within the constraints of US law and under the supervision of the Executive Branch, the House Permanent Select Committee on Intelligence, the Senate Select Committee on Intelligence, the Senate and House Armed Services Committees, and a bevy of other oversight organizations dispersed throughout the US government, and they can and should be considered reasonable and unsurprising functions of intelligence services.

The most recent WikiLeaks document dump does not serve the national good, but rather harms efforts of well-intentioned professionals working to provide intelligence on adversaries who would seek to do us harm. For the better part of the last three years, I have been researching the online behaviors of the Islamic State and al-Qaeda. These organizations are well-attuned to technology and its vulnerabilities. They actively seek to evade intelligence and law-enforcement agencies by using encrypted communications and a variety of platforms. They actively crowdsource and train one another on best practices. The release of these documents, whether verified or not, harms efforts of professionals working tirelessly to put together a complex mosaic of bits of intelligence to prevent terrorist attacks and strategic and tactical surprise. The release of this information, while temporarily serving the benefit of patching and protecting individuals outside the gaze of the US IC, likely harms efforts to understand and track terrorists who desire to attack the homeland and our allies. Time and again I have seen intelligence leaks spread through the jihadist communities like wildfire, and within days the tactics, techniques, and procedures for avoiding intelligence and law enforcement agencies have changed. Leaks such as the recent WikiLeaks disclosures do not make us safer; they provide those who wish to harm us with an information edge and degrade our national security.

We as a nation, like Secretary Stimson, detest others reading our mail, but we should not forsake the value of the intelligence community and the work it does to keep the nation safe. We should work through our elected leaders to convey the lines within which we wish our intelligence professionals to operate, and should consistently pressure our elected officials to keep watch over those we empower to protect us. Intelligence does and will continue to provide value to the nation, and achieving this value requires secrecy and the development of sources and methods that often reside beyond the public spotlight.

The False Promise of Hacking Democracy

“Probable impossibilities are to be preferred to improbable possibilities” – Aristotle

It is immensely convenient to claim that a Federal election can be hacked; however, the reality of hacking such an election is far more difficult than one might realize. The level of complexity in the US electoral process is such that to hack the election would require a combined feat of technical and social engineering requiring tens of thousands of co-conspirators operating across hundreds of jurisdictional boundaries with divergent laws and practices. Having worked in democracy development for the better part of 10 years on elections in several dozen countries, I can say the state of American electoral security is strong because of its immensely decentralized nature. The bewildering, often arcane complexity of the system facilitates inefficiency, and it is this inefficiency that coincidentally fosters systemic resilience. It is the organizational attributes of a national election run by state and local authorities that make the United States a poor target for any malicious actor attempting to directly affect the polling places where Americans cast their ballots.

Understanding why the United States is so resilient to malicious actors seeking to manipulate a national election requires understanding the nuances of federal, state, and local roles in the execution of a national election. One of the best sources on the complexities of the American voting process was produced by a 2014 Presidential Commission. The commission's report breaks down its recommendations and thereby provides insight into state electoral procedures, examining voter registration, access to polling locations, the management of polling places, and the technology of voting itself.[i]

It should be noted that everything from the registration of voters to the management of polling locations and the subsequent tallying of votes is largely overseen at the state level or below. The laws and rules governing voter eligibility within a given state, and the requirements for registering to vote, are largely state-based. The primary body that informs state executive offices on election administration and on federal legislation related to voting practices and laws is the National Association of Secretaries of State (NASS). This body seeks to distribute information in a non-partisan manner and exists primarily to disseminate information between states.[ii] The NASS also provides helpful resources on state legislation related to the management of polling stations, voting requirements, and laws within each state.[iii]

To analyze the potential for “hacking” an election, I will briefly break down the legal, functional, and technical attributes of voting in the United States. The intent is to illustrate the complexity and relative robustness of the US system while highlighting its gross incongruities, which stem largely from state preferences. This is not a comprehensive assessment of the resilience and vulnerabilities of US elections; rather, it is intended as a starting point for curious minds.

Basic Legal Considerations in US Elections

Generally, enfranchisement originates in constitutional law, which supersedes state law under the Supremacy Clause (Article VI, Clause 2) of the United States Constitution. The right to vote has been further elucidated in the 15th, 19th, and 26th Amendments, which establish that voting rights cannot be abridged on account of race, color, previous condition of servitude, or sex, or, for those eighteen and older, on account of age.[iv] While there has been controversy between the states and the federal government, these disputes are largely settled through the legislatures in accordance with established legal precedents. Although some state practices have been alleged to make enfranchisement harder to attain, such state laws must skirt the edge of constitutional law. They are also typically on record on a state-by-state basis and susceptible to challenge in the courts. Thus, hacking an election structurally, by shaping who is and is not able to vote, is indeed possible, but not without the oversight of state and federal bodies as well as multiple non-governmental organizations. Legislation that might disenfranchise or discourage voter registration might skew election results, but it is largely constructed in public view. This, however, does not diminish its often controversial nature.[v]

At the legal level, there are few apparent ways to manipulate an election absent oversight. There are no reasonable technical means by which a malicious actor could systematically manipulate election results across multiple states simultaneously to have a high probability of altering election results.

Basic Functional Considerations of US Elections

Who can register to vote, and when and how they can vote, is determined on a state-by-state basis. The US Vote Foundation is one of many robust organizations that provide detailed information on voter registration, the timing of votes, and voting methods.[vi] A summary search of voter registration requirements by state reveals significant differences in the timing, documentation requirements, and services available for voter registration. Moreover, each state has divergent rules on early voting,[vii] Election Day voting,[viii] and more.

Because states maintain voter rolls independently of one another, it is conceivable that an individual might attempt to register in two neighboring jurisdictions in order to vote twice. To mitigate this, states have established at least two primary mechanisms for voter verification. The Electronic Registration Information Center (ERIC) maintains voter rolls for twenty-one states and allows them to verify voter registration, motor vehicle information, and postal information across states and against the United States Postal Service database. The Interstate Crosscheck Program uses similar procedures and includes twenty-nine states. Even the manipulation of voter lists by adding deceased persons should be caught by state registration databases, motor vehicle databases, benefits databases, or the US Postal Service; a vote cast in a deceased person's name across state lines should likewise be identified through these cross-border databases. The functional act of registering to vote and subsequently voting varies state by state, yet the data indicate that incidents of voter fraud are extremely rare. In 2014, the Washington Post published a detailed analysis of more than one billion votes cast in elections at all levels of government from 2000 to 2014 and found only thirty-one confirmed incidents of fraud.[ix] Statistically, you have significantly better odds of winning the lottery than of finding someone committing voter fraud, and even then, the impact of such fraud on the outcome of elections is nonexistent.
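For a sense of scale, the Washington Post figures cited above imply a per-ballot fraud rate that can be computed directly. This is only a back-of-the-envelope sketch using the two numbers in the text (31 confirmed incidents, roughly one billion ballots):

```python
# Confirmed fraud incidents vs. total ballots cast, per the
# Washington Post analysis cited above (2000-2014).
confirmed_incidents = 31
total_ballots = 1_000_000_000  # "more than one billion votes"

# Per-ballot rate of confirmed fraud, and the equivalent "1 in N" odds.
rate = confirmed_incidents / total_ballots
odds = total_ballots // confirmed_incidents

print(f"fraud rate per ballot: {rate:.1e}")  # roughly 3 in 100 million
print(f"about 1 confirmed incident per {odds:,} ballots")
```

On these numbers, a confirmed incident turns up about once in every thirty-two million ballots cast.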

Is it possible to hack and systematically manipulate every database, or enough databases, to generate the number of fraudulent voters needed to sway an election? Yes, but the probability is vanishingly small, and it is further constrained by technical challenges in the actual execution of voting that make voter fraud through digital manipulation futile. Moreover, because each congressional district is sampled every ten years in the census, significant changes in voter registrations by district would provide strong indicators of fraud before, during, and after an election. The mandated census numbers are supplemented by consistent, ongoing statistical assessments of district populations. The manipulation of voter rolls to a level that might sway an election is constrained by numerous checks and balances, some digital, some historical, and others practical.
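The compounding argument above can be made concrete with a toy calculation. The per-jurisdiction success probability here is an invented assumption, not an estimate; the point is only how quickly the joint probability of compromising many independent jurisdictions without detection collapses:

```python
# Illustrative only: assume (generously) a 50% chance of compromising any
# one jurisdiction's databases undetected. Independent jurisdictions mean
# the joint probability is the product, which collapses exponentially.
p_single = 0.5

for k in (1, 5, 10, 20, 50):
    p_all = p_single ** k
    print(f"{k:3d} jurisdictions: P(all undetected) = {p_all:.2e}")
```

Even with an implausibly generous coin-flip chance per jurisdiction, twenty jurisdictions yield odds of roughly one in a million, and fifty yield odds smaller than one in a quadrillion.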

Basic Technical Characteristics of US Elections

In the United States, twenty-one states are not susceptible to digital attacks at the polling station because they use exclusively paper-based or mail-based ballots.[x] Of the remaining thirty-one states, including the District of Columbia, each uses DREs (direct-recording electronic voting machines);[xi] eighteen of these states have a paper trail associated with the DRE, and three of those provide the option not to have a paper trail.[xii] There is no way to fully protect DRE systems: they are digital and therefore susceptible to various attack vectors, some of which have been demonstrated, and many states are using systems that are more than ten years old. Wired published an article in August 2016 describing the relative ease with which these voting machines might be systematically compromised to produce outcomes other than those a voter intended.[xiii]

To address voting-machine vulnerabilities, half of all states conduct post-election audits to verify the paper and digital vote totals.[xiv] Targeting the voting machines themselves raises other fundamental problems. First, while there are a limited number of voting-machine vendors, there is still enough variety to require expertise across multiple systems. Many of these machines are never connected to the Internet and would therefore require in-person manipulation. In-person manipulation might be possible in select districts, but manipulating machines across multiple districts becomes significantly more complicated. If an election were sufficiently close that the manipulation of a single district might affect the electoral outcome, hacking the democracy might be viable. However, in no modern US election have post-audit results indicated any meaningful or intentional manipulation. Hacking the voting infrastructure might cast doubt on the electoral process, but even then there are mechanisms for recounts, re-votes, and judicial and legislative action. The voting devices themselves are the area of greatest concern, yet they are only one part of a broader ecosystem that ensures the viability of election results.
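The statistical logic behind random post-election audits is worth spelling out. The numbers below are assumptions chosen for illustration, not figures from any actual state audit: if a fraction f of precincts were tampered with, a uniformly random audit of n precincts catches at least one tampered precinct with probability 1 − (1 − f)^n.

```python
# Sketch of why random audits are effective: the chance of an audit of n
# randomly chosen precincts finding at least one tampered precinct, when
# a fraction f of all precincts has been tampered with.
def detection_probability(f: float, n: int) -> float:
    return 1.0 - (1.0 - f) ** n

# Even a 1% tampering rate is very likely to surface in a modest audit.
for n in (50, 150, 300):
    p = detection_probability(0.01, n)
    print(f"audit {n:3d} precincts: P(detect) = {p:.2f}")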

The False Promise

There is a reason why election fraud is hard to hide: math. Even if one were to compound the probabilities of successfully hacking or manipulating each of the categories briefly examined above, the numbers and turnout of voters by demographic are extremely difficult to falsify. Changing the outcome of a vote requires detailed knowledge of each area where voting occurs. So many authoritarian regimes struggle to hide their manipulation of elections not for lack of practice, coordination, or planning, but because the math simply never adds up.
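One simple way the math betrays manipulation is that falsified totals must remain consistent with a jurisdiction's own turnout history. The sketch below uses entirely fabricated turnout rates and a crude three-standard-deviation flag; it is an illustration of the idea, not a real election-forensics method:

```python
import statistics

# Fabricated historical turnout rates for a hypothetical jurisdiction,
# and reported rates for three hypothetical precincts, one of which has
# been manipulated upward.
historical_turnout = [0.52, 0.55, 0.49, 0.53, 0.51, 0.54, 0.50, 0.52]
reported_turnout = {"precinct_a": 0.53, "precinct_b": 0.51, "precinct_c": 0.88}

mean = statistics.mean(historical_turnout)
stdev = statistics.stdev(historical_turnout)

# Flag any precinct whose reported turnout sits more than three
# standard deviations from the historical mean.
flagged = [name for name, t in reported_turnout.items()
           if abs(t - mean) > 3 * stdev]
print(flagged)  # ['precinct_c']
```

A manipulated total large enough to change an outcome tends to produce exactly this kind of statistical outlier, which is one reason authoritarian vote-rigging is so routinely exposed.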

As Americans head to the polls, they should know that the lines, the frustrations of registration, and the differences in laws across states, while often frustrating and controversial, make the eventual democratic outcome more resilient against individuals or states that might seek to alter an election. The United States has a robust history of peaceful transitions largely absent voter fraud or manipulation. Even under the very contentious conditions of 2000, the electoral process proceeded through the judiciary in an orderly manner. The flip side of the false promise of hacking democracy is the realization that more than 240 years of statehood have demonstrated that the American experiment will live on despite the challenges confronting it in the digital age.






[i] The American Voting Experience: Report and Recommendations of the Presidential Commission on Election Administration, 2014.










[xi] Ibid.

[xii] Ibid.