Comparative Analysis of Ethical Frameworks in Cybersecurity
Civil society faces a challenge in prioritizing human rights within the current, predominant approach to cybersecurity. Nation states deliberately weaken the overall security of information systems by buying and exploiting zero-day vulnerabilities, injecting malware, hoarding known vulnerabilities and cracking existing encryption standards. These operations represent the ‘attack’ side of cybersecurity; they are counterproductive to the goals of security and can lead to human rights violations. The ethical underpinnings of this approach are drawn from a military perspective based on deterrence theory and a ‘just war doctrine’, justifications that continue to inform the policies, narratives and approaches that fund the militarization of cyberspace. A national security approach to cybersecurity is not without ethical and legal constraints, yet it seems to operate according to a different set of ethical and legal rules. What code of ethics justifies intentionally creating insecurity, overlooking human security concerns and neglecting to prioritize the protection of democratic values? How can the problem of prioritizing human rights in cybersecurity be solved while maintaining a healthy cybersecurity posture? A potential solution is to question the narrative of a zero-sum game: one that pits human rights and network security against national security.
Keywords: Human-centric, cybersecurity, ethical frameworks, human rights
Temporary emergency measures first introduced by the U.S. government remain in place twenty years later and continue to feed a global narrative that security is served when citizens are surveilled. In Canada, the three surveillance organizations are the Communications Security Establishment (CSE), the Canadian Security Intelligence Service (CSIS) and the Royal Canadian Mounted Police (RCMP) (Petrou, 2017). With a military history dating back to the Second World War, the CSE is mandated to focus on foreign, hostile or oppressive targets that threaten national security. Inasmuch as physical geography and borders, that which defines a nation, are analogous to a notion of a ‘sovereign’ digital space, CSIS and the RCMP are focused ‘inwards’. Both ‘inwards’ and ‘outwards’ in the context of cybersecurity are abstractions from the physical world that contribute to a notion of “cyber-sovereignty”. Taken to its extreme, this concept is reflected in China’s national security approach to cybersecurity, which amounts to control over network infrastructure in which censorship, suppression of dissenting speech and social control can proliferate (Jagalur et al., 2018). As citizens of a democratic country, we have an obligation to scrutinize our government’s cybersecurity strategy when it points in the same strategic direction as an authoritarian regime’s. We must assess to what degree our cybersecurity strategy aligns with democratic principles, contributes to the health of the democracy, and values safeguarding civil and human rights.
Drawing on the military concepts of offensive capability and deterrence, some cybersecurity strategies justify deliberately attacking cryptosystems (Appelbaum et al., 2014). This is evident in the U.S. government’s development of the ‘built-in backdoor’ Clipper Chip in 1993 and, more recently, in the NSA’s creation and promotion of the flawed Dual EC DRBG algorithm (Amin et al., 2015). Deliberate attempts to weaken encryption have a negative effect both on the overall security of the network and on the trust placed in the organizations that create these standards (Blackledge, 2013). Unlike in the physical world, these types of attacks offer no guarantee that they can be aimed at, and limited to, foreign targets only. What ethical framework justifies a cybersecurity strategy that erodes trust and puts us all at risk?
Cyberweapons manufacturers sell spyware products, marketed as ‘security’ products, to government clients while operating under those governments’ protection in the form of weak regulations. The NSO Group in Israel is responsible for producing, selling and supporting a smartphone malware product connected to hundreds of criminal, targeted surveillance operations, including some in Canada (Marczak et al., 2018). Crimes against citizens include, but are not limited to, the killing of a Mexican journalist by a government operator, the imprisonment of a human rights defender by the UAE, and the targeted profiling of democratic political leaders, journalists and pro-democracy activists (Scott-Railton et al., 2019). Where is the legal and ethical threshold for holding governments and businesses to account?
Definition of cybersecurity
Some research indicates that our national security approach to cybersecurity is broken (Dunn Cavelty, 2014). Authors recommend that an examination begin by prioritizing the individual and human rights, through definitions of cybersecurity and national security that acknowledge freedoms, civil liberties and rights as preconditions for security; these elements are core components, not opposing sides in a zero-sum game (Pavlova, 2020). Other researchers echo this sentiment. Advocating for a definition of cybersecurity that is not limited to concepts of national security, Liaropoulos argues that the militarization of cybersecurity is deficient in addressing the needs of people, specifically in how it fails to address human rights (Liaropoulos, 2015). In her book review, Manjikian identifies how ethical, legal and moral concerns with respect to safeguarding individual rights are not prioritized when the object of security is the state (Manjikian, 2020). Deibert states that so long as the object of cybersecurity prioritizes the state over humans, systems will continue to be made less secure under the pretext of preserving state-centric and military interests, while large parts of the civilian population continue to be negatively affected (Deibert, 2014). Noting the growing budgets of secretive military intelligence agencies, the author further asks: how far down the wrong path have we gone by allowing military intelligence interests to subvert core values of liberal democracy?
The purpose of cybersecurity
What is the purpose or goal of cybersecurity if not to protect democratic values and human rights such as “…access to information, freedom of thought, and freedom of association” (Deibert, 2018)? Given the connection between human rights protection and liberal democratic values, it is reasonable to expect that cybersecurity strategies in democratic societies would acknowledge that connection’s importance, especially as those societies are signatories to international law (Shackelford, 2021).
General form of the essay
Ethical frameworks form the basis from which policy and the public funding of programs are justified. A comparative analysis of ethical frameworks for cybersecurity highlights differences and similarities that can inform an analysis of what lies behind the current narrative and what could solidify an alternative. The first part of the essay provides a brief literature review of papers related to ethical frameworks for cybersecurity. The selection criteria limited the search to the last seven years, using the search terms ‘human-centric’ and ‘ethical frameworks’ across numerous academic databases. When a research article cited another relating to these search terms, the citation was used to discover further articles. The process revealed papers covering a breadth of cybersecurity topics, including more specific aspects such as penetration testing, intelligence gathering and security research. Three articles in the literature review are individual chapters from a volume titled National Security Intelligence and Ethics, which provides ethical analyses from a variety of academics with a range of perspectives. As ethical frameworks are specifically mentioned in each article, key components are aggregated in a table for comparative analysis. A discussion of the various approaches to ethical frameworks and their relationship to human rights follows, noting common themes, limitations and novel interpretations.
Ethics of Infosphere
Dunn Cavelty states that the object to protect, from a national security perspective, is critical infrastructure: a symbolic term borrowed from the physical world that plays into the logic of a threat model justifying government protection (Dunn Cavelty, 2014). However, its meaning in the digital space is quite broad; it is a catch-all phrase for the sum total of servers and their ongoing performance, with respect to the relationship they have to economic growth and therefore national security. While these seem like familiar and reasonable things to protect, the government is not responsible for the management of digital infrastructure, especially that of private business. Further complicating a national security approach to cybersecurity is that there are no physical borders in cyberspace that neatly define outside versus inside threats. Despite this, “cyber-sovereignty” is a concept embraced by both authoritarian and democratic states, asserting government control over information flows and often leading to detrimental effects on trust, confidence, human rights and human security. A resolution to the dilemma would be to move the focus from protecting inanimate, technical objects to protecting humans.
A Principlist Framework
Formosa et al. put forward a principlist framework for cybersecurity ethics comprising beneficence, non-maleficence, autonomy, justice and explicability (Formosa et al., 2021). One of the objectives of their research is to examine how a derived version of the AI4People ethical framework maps to specific cybersecurity contexts, such as penetration testing. In their redeployment of that framework they acknowledge a desire to move away from discussions framed as a conflict between privacy and security. They map relationships between the five principles and notions of privacy, which moves the conversation away from privacy as a single ethical concept. For instance, the principle of ‘explicability’ can be mapped to ‘a right to freedom from arbitrary surveillance’, and ‘autonomy’ can be mapped to ‘an aspect of human dignity’. The authors acknowledge that consequentialist, deontological and virtue ethics are too simplistic to address domain-specific issues, highlighting a desire to cultivate ethical sensitivities among practitioners in order to better prepare them to respond appropriately to the nuances and complexities of cybersecurity ethics.
Ross Bellaby identifies both a dilemma in the intelligence community and the need for an ethical framework. He argues that the right framework can reconcile some of the tensions between violating people’s privacy and autonomy and justifying those acts through a ‘just war doctrine’ (Miller et al., 2021). The author discusses the advantages for developing and applying a normative framework called ‘just intelligence’ which defines principles intended to guide ethical intelligence gathering. The principles are intended to reflect a flexible, proportional ethical framework that builds upon notions of just cause, authority, intention, proportionality, last resort and discrimination.
David Omand and Mark Phythian reference the ‘Just War tradition’ as the historical basis for ethical concerns over intelligence activities, before asserting that the jus in intelligentia concepts of right intention, proportionality, right authority, reasonable prospect of success, discrimination and necessity can be used to guide the behaviour of intelligence agencies (Miller et al., 2021). The authors state that consequentialist ethical theory dominates the intelligence industry, with practitioners hoping that the results of their actions will justify the means. Using the recent COVID-19 pandemic as an example, the authors outline how the UK’s national security surveillance apparatus can be repurposed to adapt to a domestic health crisis, while noting the privacy and security concerns this raised in a liberal democratic context. Ethical considerations also spill over into intelligence research; the authors note both the necessity of such research and the difficulty of conducting it in secrecy. Comparing ethical norms in other areas, such as medical and academic research, highlights differences in the role of consent. As it pertains to secret intelligence operations, the authors point to the value of judicial oversight, professional codes of conduct and a ‘democratic license to operate’.
Ethical Framework for security research
Cybersecurity research is not without peril, and researchers can find themselves in dangerous situations that may involve political violence, conflict, terrorism and insecurity (Baele et al., 2018). The unique focus of cybersecurity research requires more than a generalist ethical framework can offer. Baele et al. classify the types of risks a security researcher faces as researcher-related, subject-related and result-related problems. These three families of risks, they argue, could inform tailored ethical guidelines covering the specific physical, psychological and emotional harms to which researchers and subjects may be exposed.
Ethics of Risk
Kevin Macnish states that the topic of security can be understood as the inverse of risk (Miller et al., 2021). Ethical issues can arise from the determination of risk, which means evaluating both the severity of harm and the probability of occurrence. He argues that ethical frameworks and international law lack the specificity necessary for intelligence operations, particularly the ‘just war tradition’, jus in intelligentia and the Tallinn Manual 2.0. He proposes an ethics-of-risk approach applied through informal agreements between intelligence operators, noting three logical fallacies that should be avoided when estimating risk: the tuxedo fallacy, the sheer size fallacy and the infallibility fallacy.
Table 1.1 – Comparison of Cybersecurity Ethical Frameworks
| Framework | Scope | Key principles | Ethical basis | Human rights prioritization |
|---|---|---|---|---|
| Ethics of the infosphere | Cybersecurity (general) | Anti-vulnerability; expansion of environmental ethics | Duty is to contribute to the growth and welfare of the entire infosphere, with relevance to human life | Human-centric; human rights are pro-security if linked to vulnerabilities |
| Principlist framework | Cybersecurity (general) | Beneficence, non-maleficence, autonomy, justice, explicability | Derived from the AI4People framework | Improve human well-being |
| Ethics framework for security research | Cybersecurity (research) | Families of risks: researcher-related, subject-related, result-related | Field-specific ethical guidelines | Researcher and research-subject well-being; misuse of research results |
| Just Intelligence principles | Cybersecurity (intelligence activity) | Just cause, authority, intention, proportionality, last resort, discrimination | Consequentialism, with deontological elements | Security and human rights are not opposing attributes to be “balanced” |
| Jus in intelligentia | Cybersecurity (intelligence activity) | Right intention, proportionality, right authority, reasonable prospect of success, discrimination, necessity | Consequentialism, with deontological elements | …be sought within the basket of human rights |
| Ethics of risk | Cybersecurity (intelligence activity) | Determining severity of harm and probability of occurrence; avoiding fallacies | Informal agreements between intelligence operators | Downplays the role of international law in cybersecurity operations |
| Just war doctrine | Going to war (jus ad bellum), combat during war (jus in bello), ending a war (jus post bellum) | Criteria for determining if war is morally justifiable | Consequentialism, with deontological elements | Criteria for justifying exceptional behaviour (often outside of human rights concerns) |
Common to most ethical frameworks for cybersecurity is the historical influence of Just War Theory, articulated by Augustine and developed by Aquinas in the thirteenth century, and more recently adapted to derive criteria for modern-day intelligence activities. The idea of a ‘just war’ shifts the moral landscape, justifying certain types of behaviour under exceptional circumstances. Digital surveillance, espionage and human rights violations facilitated by attacks on computer systems seem to land in this world of exceptionalism. Why, then, does it continue to proliferate during times of peace? Is it justified?
Another common pattern among the ethical frameworks is the finding that the ‘big three’ ethical theories, consequentialism, deontology and virtue ethics, are too simplistic to offer compelling responses to the nuances of complex use cases. Most authors advocated for derivatives or ‘blends’, often leaning on guidelines that could be tailored to specific subject areas rather than on hard rules, maxims or absolutes. Part of the necessity for this bespoke customization comes from acknowledging cultural differences in how notions of harm are interpreted, or how risk is perceived. This highlights one of many challenges in conceptualizing a relevant, globally acceptable ethical framework for cybersecurity.
Though the ‘just war tradition’ brings with it notions of the state as the object of security, some frameworks show a concern for, or at least an acknowledgement of, human rights. Similar to how privacy and security are often unnecessarily framed as competing interests, only one article frames security and human rights as opposing attributes needing to be ‘balanced’. Others present human rights as complementary to overall security goals.
A novel discovery was how environmental ethics could be extended to inform cybersecurity ethics, especially through the notion of an ‘infosphere’. Not only does the metaphor of a healthy biological ecosystem translate well to a healthy information system, it also helps conceptualize the roles we might play. Framing vulnerabilities as pollution and creators of malware as polluters is an apt representation, bringing with it a collective sense of purpose (keep our infosphere healthy) and a broad ethical guideline (don’t pollute!).
Human rights represent a broad spectrum of protections, with variance in interpretation and enforcement depending on international, domestic and regional law. Cybersecurity policy is also applied differently across countries. While the research initially set out to answer a broad question, admittedly with a western, northern-hemisphere bias, there is specificity and relevance to be gained by narrowing the focus to a particular region or country, or to a particular set of human rights.
It can be said that defending human rights represents a focal point for emerging notions of cybersecurity ethics. The influence of just war theory on ethical frameworks explains the exceptionalism and military perspective that surround the familiar national security approach to cybersecurity. At the very least, the just war doctrine contextualizes how cybersecurity continues to be influenced by military thinking, and it explains how a different set of ethical rules comes to be applied to some approaches to cybersecurity. The shortcomings of a state-centric paradigm can be addressed by making humans the object of national security. Prioritizing human rights has a place in shaping the direction of cybersecurity policy, strengthening the goals of security and upholding democratic values.
The temporal quality of a ‘just war’ means it doesn’t apply during times of peace, leaving open the question of how certain criminal activities perpetrated by government operators can be justified. Even if the landscape of war has changed and cybersecurity policy is expected to operate under an ‘always on’ state of engagement, prioritizing human rights must not be overlooked.
Security research covers a wide range of topics, so future research might drill down into one particular aspect, such as intelligence gathering or penetration testing. The volume National Security Intelligence and Ethics is one example of a resource with a wealth of current articles from a wide range of perspectives that could begin the process of discovering the basis for a deeper and more focused analysis. Penetration testing, which includes vulnerability scanning, also deserves closer examination, especially as it relates to different aspects of human rights. While there are many professional codes of conduct in the information security industry, Georg et al. identify a notable absence of a specific professional code of conduct or ethics for this emerging field, sometimes referred to as ‘ethical hacking’ (Georg et al., 2018). Future research might look at the significance of that absence. Despite the absence of a code of conduct, there are numerous formalized approaches to penetration testing methodology. For instance, the Open Source Security Testing Methodology Manual (OSSTMM) outlines a formalized approach to operational security testing (Herzog, 2010), as does the NIST Technical Guide to Information Security Testing and Assessment (Scarfone et al., 2008). Shanley and Johnstone compare penetration testing methodologies including the Building Security In Maturity Model (BSIMM), the Penetration Testing Execution Standard (PTES) and the OWASP Testing Guide (OTG) (Shanley & Johnstone, 2015). While these methodologies speak to the how of ‘white hat’ penetration testing, they cannot answer the difficult ethical questions that ‘grey hat’ hacking elicits.
‘Grey hat’ hacking is characterized by the lack of express authorization to perform a vulnerability assessment or penetration test; its practitioners are nonetheless motivated by good intentions, including notifying organizations of the security issues found, working under a notion of national security, or disclosing information that may be in the public interest, which human rights clearly are.
Amin, M., & Afzal, M. (2015). On the vulnerability of EC DRBG. 2015 12th International Bhurban Conference on Applied Sciences and Technology (IBCAST), 318–322. https://doi.org/10.1109/IBCAST.2015.7058523
Appelbaum, J., Gibson, A., Grothoff, C., Müller-Maguhn, A., Poitras, L., Sontheimer, M., & Stöcker, C. (2014). Prying eyes: Inside the NSA’s war on Internet security. Der Spiegel. http://www.spiegel.de/international/germany/inside-the-nsa-s-war-on-internet-security-a-1010361.html
Baele, S. J., Lewis, D., Hoeffler, A., Sterck, O. C., & Slingeneyer, T. (2018). The Ethics of Security Research: An Ethics Framework for Contemporary Security Studies. International Studies Perspectives, 19(2), 105–127. https://doi.org/10.1093/isp/ekx003
Blackledge, J., Bezobrazov, S., Tobin, P., & Zamora, F. (2013). Cryptography using evolutionary computing. 24th IET Irish Signals and Systems Conference (ISSC 2013), 1–8. https://doi.org/10.1049/ic.2013.0029
Deibert, R. J. (2018). Toward a Human-Centric Approach to Cybersecurity. Ethics & International Affairs, 32(4), 411–424. https://doi.org/10.1017/S0892679418000618
Deibert, R. J. (2014, November 25). The cyber security syndrome. OpenCanada.org. https://opencanada.org/features/the-cyber-security-syndrome/
Dunn Cavelty, M. (2014). Breaking the Cyber-Security Dilemma: Aligning Security Needs and Removing Vulnerabilities. Science and Engineering Ethics, 20(3), 701–715. https://doi.org/10.1007/s11948-014-9551-y
Formosa, P., Wilson, M., & Richards, D. (2021). A principlist framework for cybersecurity ethics. Computers & Security, 109, 102382. https://doi.org/10.1016/j.cose.2021.102382
Georg, T., Oliver, B., & Gregory, L. (2018). Issues of Implied Trust in Ethical Hacking. The ORBIT Journal, 2(1), 1–19. https://doi.org/10.29297/orbit.v2i1.77
Herzog, P., & Barcelo, M. (2010). OSSTMM 3: The Open Source Security Testing Methodology Manual. ISECOM. https://www.isecom.org/OSSTMM.3.pdf (accessed November 2021)
Jagalur, P. K., Levin, P. L., Brittain, K., Dubinsky, M., Landau-Jagalur, K., & Lathrop, C. (2018). Cybersecurity for Civil Society. 2018 IEEE International Symposium on Technology and Society (ISTAS), 102–107. https://doi.org/10.1109/ISTAS.2018.8638270
Liaropoulos, A. (2015). A human-centric approach to cybersecurity: Securing the human in the era of cyberphobia. Journal of Information Warfare, 14(4).
Marczak, B., Scott-Railton, J., & Deibert, R. (n.d.). NSO Group infrastructure linked to targeting of Amnesty International and Saudi dissident. Citizen Lab.
Marczak, B., Scott-Railton, J., McKune, S., Abdul Razzak, B., & Deibert, R. (2018, September). Hide and seek: Tracking NSO Group’s Pegasus spyware to operations in 45 countries (Citizen Lab Research Report No. 113). University of Toronto.
Manjikian, M. (2020). Review of The Ethics of Cybersecurity, by M. Christen, B. Gordijn, & M. Loi. Prometheus, 36(4), 403–405. https://www.jstor.org/stable/10.13169/prometheus.36.4.0403
Miller, S., Regan, M., & Walsh, P. F. (2021). National Security Intelligence and Ethics (1st ed.). Routledge. https://doi.org/10.4324/9781003164197
Nicholson, S. (2019). How ethical hacking can protect organisations from a greater threat. Computer Fraud & Security, 2019(5), 15–19. https://doi.org/10.1016/S1361-3723(19)30054-5
Pavlova, P. (2020). Human Rights-based Approach to Cybersecurity: Addressing the Security Risks of Targeted Groups. Peace Human Rights Governance, 4(11/2020), 391–418. https://doi.org/10.14658/pupj-phrg-2020-3-4
Petrou, M. (2017). Surveillance in Canada: Who are the watchers? OpenCanada.org. https://opencanada.org/surveillance-canada-who-are-watchers/ (accessed December 2021)
Scarfone, K. A., Souppaya, M. P., Cody, A., & Orebaugh, A. D. (2008). Technical guide to information security testing and assessment. (NIST SP 800-115; 0 ed., p. NIST SP 800-115). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.800-115
Scott-Railton, J., Marczak, B., Anstis, S., Abdul Razzak, B., Crete-Nishihata, M., & Deibert, R. (2019, March). Reckless VII: Wife of journalist slain in cartel-linked killing targeted with NSO Group’s spyware (Citizen Lab Research Brief No. 117). University of Toronto.
Shackelford, S. J. (2021). Should cybersecurity be a human right? In C. C. Glen & T. L. Fort (Eds.), Music, Business and Peacebuilding (1st ed., pp. 174–197). Routledge. https://doi.org/10.4324/9781003017882-14
Shanley, A., & Johnstone, M. (2015). Selection of penetration testing methodologies: A comparison and evaluation [PDF]. 13th Australian Information Security Management Conference, held from the 30 November – 2 December, Western Australia. https://doi.org/10.4225/75/57B69C4ED938D