Disentangling The Digital Battlefield: How the Internet Has Changed War

Journalists have described the invasion of Ukraine as the world’s “first TikTok war,” the most “internet-accessible war in history,” and history’s “most viral” social media war. But this kind of hype tells us only so much about the real impact of digital technology on the current conflict. 

So far, three significant digital trends have emerged from the war in Ukraine. First, technological innovation has helped Ukraine to offset Russia’s conventional military advantage, particularly by increasing the participation of ordinary citizens. Second, as these citizens have become uniquely involved in digital warfighting, the lines between civilian and military actors have blurred. Unfortunately, international humanitarian law has not kept pace, leading to growing concerns about how the rules of war apply in a digital conflict. Third, the conflict has generated a massive amount of data potentially useful for holding war criminals to account. However, the proliferation of open-source investigations also creates new risks for analytic bias and procedural inconsistencies. 

Governments and civil society organizations that are concerned with upholding the laws of war should take note of these emerging challenges. Digital technology is now a central feature of warfare. The sooner new policies and legal guidance can be articulated, the more effectively international institutions will be able to protect civilians and prosecute violators in future conflicts.

Turning the Tables

Ukraine has used new technology to help turn the tables on Russia. As one of the first wars in which both belligerents possess advanced technological infrastructure, the conflict has become a laboratory for new technological concepts. To date, a great deal of coverage has focused on the impact of new weapons like the Turkish Bayraktar TB2 drone or new communications equipment like SpaceX’s Starlink satellite-based internet service. But while Bayraktars have destroyed Russian equipment and Starlink has helped coordinate artillery strikes, digital technology is having an equally significant, if less obvious, impact on the way the war is being fought.

Crucially, Starlink devices have also served another purpose. By providing reliable internet access, they have allowed Ukrainian citizens to become actively involved in pushing back Russian forces. In doing so, they have expanded warfighting beyond the confines of traditional military and government actors. A key innovation has been Ukraine’s deployment of crowdsourcing apps — giving individuals the means to provide critical information about Russian military movements and assets.

In early 2020, prior to the war, Ukraine launched the Diia app as a good-government initiative to make it easier for citizens to renew licenses and permits, pay parking fines, and report potholes. Since the invasion, Ukraine’s government has repurposed the app to serve as the frontline eyes and ears of the Ukrainian army. Citizens can submit geolocated photos and videos of Russian military sightings through the app, and can also use Diia to provide tips about “suspicious” people who might be collaborators, invaders, or saboteurs. As Gulsana Mamediieva, an official in Ukraine’s Ministry of Digital Transformation, told the author in an interview, the app allows Ukrainians to report sightings of “tanks, military forces, and anything like Russian troops they have seen. We really urge citizens to do it.” This data is then aggregated on a map and used by intelligence officials working on defense and counterstrikes. It is doubtful that Ukraine’s government could have made use of these tools without an existing technological base and a digitally literate population able to supply crucial information via smartphones.
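
The underlying data flow can be sketched in a few lines of code. The example below is purely illustrative: Diia’s actual data schema, validation, and analytic tooling are not public, and every field and function name here is a hypothetical stand-in for the general pattern of collecting geolocated citizen reports and clustering them on a map.

```python
# Illustrative sketch only: all names are hypothetical. Diia's real
# schema and internal tooling are not public and are not shown here.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SightingReport:
    lat: float           # geolocation captured by the citizen's smartphone
    lon: float
    timestamp: datetime  # when the sighting was recorded
    category: str        # e.g., "tank", "troop column", "suspicious person"
    media_url: str       # the uploaded photo or video

def aggregate_for_map(reports: list[SightingReport]) -> dict[tuple[int, int], int]:
    """Bucket reports into a coarse grid so analysts can spot clusters."""
    grid: dict[tuple[int, int], int] = {}
    for r in reports:
        cell = (int(r.lat * 10), int(r.lon * 10))  # roughly 11-km cells
        grid[cell] = grid.get(cell, 0) + 1
    return grid

reports = [
    SightingReport(47.57, 31.33, datetime.now(timezone.utc), "tank", "https://example.org/a.jpg"),
    SightingReport(47.58, 31.34, datetime.now(timezone.utc), "troop column", "https://example.org/b.mp4"),
]
print(aggregate_for_map(reports))  # {(475, 313): 2}, two sightings in one cell
```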

While it is difficult to assess the aggregate impact of crowdsourced apps on the war, there are many individual stories showing how they have facilitated battlefield gains. For example, in the March 2022 fight for Voznesensk, a southern town of 35,000 people, Ukrainian volunteers used the Viber social messaging app to send the coordinates of Russian tanks and direct artillery fire. As recounted by one of the volunteers: “Everyone helped. Everyone shared the information.” The result was catastrophic for the Russian army. Fleeing Russian soldiers left behind nearly 30 of their 43 vehicles, including tanks, armored personnel carriers, rocket launchers, and trucks, as well as a wrecked Mi-24 helicopter gunship — leading to one of the first comprehensive routs of Moscow’s forces.

The role of digital technology in the Ukraine war is not limited to drones and crowdsourcing apps. Information operations are a crucial pillar of both Ukraine’s and Russia’s efforts to galvanize international support. Online information operations are not a new phenomenon, of course. Many experts cite the 2012 Israel-Gaza conflict as the world’s first “Twitter war,” while the Islamic State and other terrorist groups have exploited social media to amplify their propaganda, mobilize supporters, and influence global opinion. But the extent to which Russia and Ukraine have prioritized the global information war is exceptional. In the early days of the invasion, Ukrainians proved especially adept at demonstrating their defiance via memes, videos, and photos. For Ukraine, holding the attention of the West has been crucial to maintaining the flow of arms and other forms of support.

Civilians Enter the Fray

As digital technologies have allowed Ukrainian citizens to become uniquely involved in warfighting, however, the lines between civilian and military actors have become blurred. The Diia app is but one of many examples of Ukrainians leveraging digital technology to defend their homeland. Ukraine’s defense ministry coordinates closely with the self-styled “Ukrainian IT Army,” composed of over 400,000 Ukrainian and international volunteer hackers, to target Russian infrastructure and websites. The IT Army was created by Mykhailo Fedorov, Ukraine’s Minister of Digital Transformation, who tweeted a link to a newly created Telegram group urging volunteers to “use any vector of cyber and [distributed denial of service] attacks on Russian resources.” His initial post listed 31 Russian banks, commercial establishments, and government websites as targets. There are numerous other instances of civilians providing expertise to support the war effort. Upwards of 1,000 civilian drone operators, for example, contribute to Ukraine’s defense by surveilling Russian assets from the air and relaying crucial information to Ukrainian military units for artillery strikes.

This blurring raises difficult questions regarding civilian protection under international humanitarian law. A bedrock legal concept is the principle of distinction: Parties to a conflict are expected to distinguish between the civilian population and military combatants and direct their operations only against military objectives. But should the rules change when civilians provide direct support to one of the warring parties — such as feeding drone surveillance footage of Russian tanks to Ukrainian artillery units that then conduct precision strikes? It is well established in international humanitarian law that civilians benefit from protection against direct attack “unless and for such time as they take a direct part in hostilities.” But precisely what is meant by “direct part in hostilities” is not settled. In 2009, the International Committee of the Red Cross published interpretive guidance to spell out what qualifies as direct civilian participation in hostilities. While this guidance is not considered settled law, it carries significant weight and lays out three cumulative criteria for determining participation in hostilities: first, whether the action leads to adverse military consequences; second, whether there is a causal relationship between the act and the expected harm; and third, whether an act is specifically designed to support one party to an armed conflict to the detriment of another, that is, whether a “belligerent nexus” exists.

Regarding the threshold of harm, critics contend that the Red Cross’s guidance defines “harm” too narrowly — excluding actions by civilians designed to “enhance a party’s military operations or capacity.” Thus, civilians manufacturing improvised explosive devices would not meet the “harm” criterion of direct participation in hostilities; only individuals planting the devices or directly contributing to their installation would meet the threshold. Similarly, critics charge that the guidance is overly restrictive in how it defines causality. The guidance imposes a “one causal step” rule, stating that an individual cannot be more than one stage removed from the harm in question. Therefore, building up or maintaining the capacity of a party to harm its adversary would not qualify, nor would providing supplies and services — such as fuel, finances, or electricity — to a party in a conflict. An individual providing intelligence to a party that immediately attacked a target would qualify as a direct participant in hostilities. But if the provided intelligence was not used right away — if a mission-planning cell instead analyzed the information and subsequently passed it on to a strike team for air raids — this would exceed the one causal step requirement and not qualify as direct participation.

As for determining the third element of belligerent nexus, the International Committee of the Red Cross acknowledges that this presents “considerable practical difficulties.” Belligerent nexus is not premised on the subjective or hostile intent of the participating actor — “it does not depend on the mindset of every participating individual.” Instead, it is expressed in the design of the act or operation itself, based on objectively verifiable factors. However, experts note that the requirement that the act be in support of one party and not simply to the detriment of another — that one must belong to an organized armed group that is a party to the conflict — leads to inconsistencies. What about situations in which an armed group is engaged in operations against one party without being explicitly aligned with that party’s opponent? In the current conflict, while some citizens are actively organized by Ukrainian government or military forces, others are acting more spontaneously or may be working with partisan groups that have no formal affiliation with the Ukrainian military. Would their acts still constitute direct participation in hostilities under international humanitarian law?

Putting these elements together — particularly in light of new technologies — reveals significant ambiguities. It is possible that civilians participating in the Ukrainian IT Army who commit direct acts disabling Russian infrastructure would qualify as direct participants, thereby forfeiting their civilian immunity. But one would have to prove that the hacker’s action resulted in a sufficiently adverse military consequence, that the hacking itself directly caused the harm in no more than one causal step, and that the act was designed to harm the opposing side in the context of ongoing hostilities.

Each element is open to argument, and other acts bring even more uncertainty. For example, how should one treat civilians who periodically upload surveillance footage to Diia that is later used by Ukrainian forces to conduct missile strikes? Generally speaking, if the transmitted intelligence is tactical in nature and given to an attacking air force, then the civilian providing it would likely be regarded as directly participating in hostilities. But if the collected intelligence is not tactical in nature — or if the activity appears military in nature but is not directly linked to a harm, such as purchasing, manufacturing, or maintaining weapons or other equipment outside of specific military operations — then it would not meet the “direct participation in hostilities” threshold. The uncertainties arising from this new digital dimension of conflict bring heightened risks for civilians and underscore the importance of clarifying the rules governing war.

Enabling Accountability

The invasion of Ukraine has generated a trove of digital data that could be used to prosecute war crimes. Activists and ordinary citizens are relying on smartphones to store photos and videos documenting abuses. Citizen investigators are searching online to identify perpetrators and verify atrocities. Individuals are posting to digital apps and social media to raise awareness of human rights violations and draw attention to egregious incidents. Initial results are encouraging: Ukrainian prosecutors are reportedly investigating over 21,000 alleged war crimes, and on May 23, 2022, a panel of judges sentenced the first Russian soldier to stand trial for war crimes.

But the broadening use of open-source investigations and digital forensic evidence is not without peril. When publishing such data, news organizations do not always weigh the risk to individual observers against the desire for accountability. A 2021 report published by the Stanley Center examined the risks of collateral harm stemming from open-source journalism. One reporter described a story he was writing about video footage of a missile strike in the Middle East: “We wanted to include the video in our reporting. But based on the video, it wouldn’t be hard to figure out which building, apartment, or window our contact was standing in when filming. That could get the person arrested or bring harm to a family. In this case, we didn’t publish the video with our reporting.” Just as open-source data can provide vital information, it can also facilitate atrocities and other harm against civilians. Early in the Ukraine war, for example, Google disabled the live traffic features in Google Maps. While traffic patterns could serve as a useful source of information for Ukrainians fleeing oncoming hostilities and looking for clear exit routes, there were concerns that disclosing road circulation patterns could assist Russian military targeting.

Another problem relates to biases and blind spots in the investigative process. While digital information presents a veneer of objectivity, the collection and analysis of relevant data are vulnerable to errors in judgment that can impede open-source investigations. One issue is algorithmic bias. When investigators use keyword searches to gather information, such as photos of alleged atrocities committed in a certain geographic area, social media amplification priorities may distort the results. Social media platforms use proprietary algorithms to amplify certain voices, accounts, or websites deemed higher priority, largely based on their revenue-generating potential. This means that “each platform’s algorithm can lead to relevant material being overlooked.” Another area of concern is automation bias, where results generated by automated tools — such as object-detection software that sifts through troves of video and photographic evidence to identify potential matches — can be perceived as more accurate than human analysis, whether or not this is actually the case. Certain cognitive biases also pose challenges. High-profile acts like murder and property destruction, for example, are more amenable to open-source documentation than hidden violations like torture or starvation. This can result in the inadvertent elevation of certain prosecutions over others based on the visibility of a crime rather than its egregiousness. Confirmation bias can also be problematic if investigators favor evidence that reinforces an initial hypothesis and overlook contrary data. This risk is particularly pronounced in open-source investigations, “where the volume of content through which investigators have to trawl to find the most relevant evidence can be overwhelming.”

Finally, evidence preservation creates its own challenges. Given the slow pace of international justice, prosecutors may not use collected information for years. Investigators should develop the capacity to properly preserve and store digital information, establish appropriate protocols, and consider relevant security procedures. Tech companies also have an obligation to preserve information for later accountability, but this responsibility can be in tension with other priorities, such as the necessity of removing harmful content. Federica D’Alessandra and Kirsty Sutherland describe how YouTube removed 7,872,684 videos in July–September 2020 for violations of its community guidelines, 93 percent of which were taken down through automatic filtering. Many of these videos, however, could have contained information essential for future evidence-gathering and prosecution. Is YouTube permanently deleting this footage or simply removing it from circulation? What protocols is it following? Platforms’ policies in this regard are opaque, yet the impact on future accountability is real.
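
At its most basic, such a preservation protocol means fixing a file’s cryptographic fingerprint and capture metadata at the moment of collection so its integrity can be verified years later. The sketch below illustrates that general practice; it is a minimal example rather than any organization’s actual system, and the record fields are assumptions for illustration.

```python
# Minimal sketch of one evidence-preservation step: a SHA-256 hash plus
# capture metadata, recorded at collection time. Field names are
# illustrative; this is not a complete chain-of-custody system.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def preserve(evidence: Path, collector: str, source_url: str) -> dict:
    digest = hashlib.sha256(evidence.read_bytes()).hexdigest()
    record = {
        "file": evidence.name,
        "sha256": digest,  # any later alteration of the file changes this value
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "collected_by": collector,
        "source_url": source_url,  # where the item was found online
    }
    # Write the integrity record next to, but separate from, the evidence file.
    sidecar = evidence.with_name(evidence.name + ".custody.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return record
```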

Conclusion

The invasion of Ukraine demonstrates the unique effect of new technologies on the battlefield and the expanding role of civilians using information and communications technology. As a result, it is raising complicated questions about the limits of international humanitarian law and the perils of relying too heavily on open-source information for justice and accountability without consistent procedures. 

Policymakers should do more to address these new problems. One starting point would be to supercharge efforts to scrutinize the impact of digital technologies on law and war and specifically probe how the current conflict is shifting global views about the applicability of international humanitarian law. Democratic states could also provide more formalized guidance to global tech companies about their geopolitical responsibilities in war — thereby limiting missteps and providing clearer expectations for conduct. Lastly, international organizations and like-minded democracies should build on the Berkeley Protocol on Digital Open Source Investigations to ensure that lessons derived from the Ukraine war are incorporated into subsequent international justice guidance and processes.

Steven Feldstein is a senior fellow in the Democracy, Conflict, and Governance Program at the Carnegie Endowment for International Peace. From 2014 to 2017, he served as U.S. Deputy Assistant Secretary for Democracy, Human Rights, and Labor. He is the author of The Rise of Digital Repression: How Technology Is Reshaping Power, Politics, and Resistance.

Image: Ukrainian Ministry of Defense