Cyberpolitik Journal
http://cyberpolitikjournal.org/index.php/main
<p><em>Cyberpolitik</em> Journal - A Peer-Reviewed International E-Journal on Cyberpolitics, Cybersecurity and Human Rights - is the first academic journal devoted to cyberpolitics and cybersecurity in Turkey. The main mission of our journal is to contribute to the literature of this important field, which is still in its infancy in the international relations literature of Turkey.</p> <p>The goal of the <em>Cyberpolitik</em> Journal is to contribute to the discipline of international relations, both nationally and internationally, by analyzing technological developments, states' political and security policies, their legal systems and, more importantly, their democratic structures. From this point of view, the main purpose of the <em>Cyberpolitik</em> Journal is to contribute to a better understanding of the relationship between cyberspace and international relations. Although the <em>Cyberpolitik</em> Journal focuses on cyber policy, other important issues such as society, security, peace, international relations, international law and human rights will also be critically analyzed in the journal.</p>
Association for Human Rights Education
en-US
Cyberpolitik Journal
ISSN 2587-1218
THE USE OF GENERATIVE ARTIFICIAL INTELLIGENCE IN ACADEMIC WRITING AND ETHICS: THE CONDITION OF HYPER-PLAGIARISM
http://cyberpolitikjournal.org/index.php/main/article/view/259
Nezir Akyeşilmen
Copyright (c) 2026
https://creativecommons.org/licenses/by-nc-sa/4.0
2026-01-31 · Vol. 10, No. 20, pp. vi–xii
DIGITAL SHIELD: THE PROTECTIVE ROLE AGAINST HUMAN RIGHTS VIOLATIONS IN CYBER INTERVENTIONS
http://cyberpolitikjournal.org/index.php/main/article/view/260
<p>Atrocity crimes represent some of the most severe violations of international order and are primarily addressed within the framework of humanitarian intervention and the Responsibility to Protect (R2P). Traditional military interventions have been widely criticized due to their potential infringement on state sovereignty and the high risk of operational failure, whereas emerging digital technologies have introduced cyber humanitarian intervention as a possible alternative. The aim of this article is to explore the potential of cyber operations in preventing or halting mass atrocity crimes within the context of R2P and to critically assess the legal, ethical, and practical constraints of this approach.</p> <p>Methodologically, the study adopts a normative analytical framework, drawing on international law, cybersecurity, and humanitarian intervention scholarship to establish a conceptual and legal basis. Existing literature tends to focus predominantly on military or diplomatic means of intervention, with only limited engagement with the notion of cyber humanitarian intervention. This gap highlights the need for a comprehensive assessment of how cyber measures align with international law, their feasibility, and associated risks. The findings suggest that cyber interventions may support the implementation of R2P by safeguarding access to information, protecting communication infrastructures, and limiting the digital capacities of perpetrators. Nevertheless, the approach also entails significant limitations, particularly concerning state sovereignty, attribution challenges, the lack of international cooperation, and ethical accountability. In conclusion, while cyber humanitarian intervention does not constitute a definitive solution on its own, it can be considered a complementary tool for enhancing the effective realization of the R2P principle.</p>
Muhammet Ali Demir
Copyright (c) 2026
https://creativecommons.org/licenses/by-nc-sa/4.0
2026-01-31 · Vol. 10, No. 20, pp. 112–134
ARTIFICIAL INTELLIGENCE IN EDUCATION: REGULATION, ETHICS, AND SECURITY
http://cyberpolitikjournal.org/index.php/main/article/view/261
<p>The integration of artificial intelligence (AI) into e-learning environments is transforming educational practices by enabling personalized learning pathways, automating assessment, and enhancing administrative efficiency. While these innovations offer significant pedagogical benefits, they also raise complex ethical and security concerns. This paper explores the implications of AI use in digital education, focusing on algorithmic transparency, data protection, and institutional accountability. It examines how AI systems influence decision-making processes in teaching and learning, and highlights the need for clear public policies to regulate their deployment. Drawing on international best practices and case studies from countries such as Finland, Estonia, and Romania, the study identifies key strategies for responsible AI implementation. These include teacher training in digital ethics, risk-based governance frameworks, and mechanisms for human oversight of algorithmic decisions. The paper also discusses the importance of transparency policies, such as algorithmic audit protocols and public registries, to ensure that AI systems operate fairly and explainably. By adopting an interdisciplinary approach that combines digital pedagogy, ethical standards, and legal safeguards, the research advocates for a sustainable and inclusive model of AI integration in education. The findings underscore the urgency of aligning technological innovation with democratic values and human rights, ensuring that AI serves as a tool for empowerment rather than control in the learning process.</p>
Carmen-Gabrieala Bostan
Copyright (c) 2026
https://creativecommons.org/licenses/by-nc-sa/4.0
2026-01-31 · Vol. 10, No. 20, pp. 135–152
AI-DRIVEN DISINFORMATION AS A GLOBAL CYBERSECURITY THREAT TO DEMOCRATIC SYSTEMS
http://cyberpolitikjournal.org/index.php/main/article/view/262
<p>The proliferation of generative artificial intelligence has fundamentally transformed the current information landscape, facilitating the widespread production and consumption of photorealistic yet fictitious media. Disinformation is therefore no longer merely a communication issue; it is a cybersecurity threat in its own right, especially for democracies that depend on trust, transparency, and an informed public. This paper views AI-fueled disinformation as a cognitive cyber-attack aimed at modifying perceptions, shaping belief formation, and undermining the legitimacy of institutions, rather than solely targeting technical systems. Drawing on literature in cybersecurity, political communication, and AI governance, the research investigates how generative AI augments disinformation, which systemic weaknesses in democracies it exploits, and why current interventions fall short. The results indicate that AI-supported disinformation damages public trust, deepens social and political fragmentation, and distorts electoral and governmental processes in ways that are often hard to detect and even harder to remedy. The research suggests expanding existing cybersecurity strategies to protect democracies in the age of generative AI by considering not only information integrity and cognitive security but also societal resilience to disinformation and propaganda, alongside technical protections.</p>
Murat Emeç
Copyright (c) 2026
https://creativecommons.org/licenses/by-nc-sa/4.0
2026-01-31 · Vol. 10, No. 20, pp. 153–167
ARTIFICIAL INTELLIGENCE, SURVEILLANCE, AND HUMAN RIGHTS IN THE AGE OF DIGITAL CAPITALISM: THE FUTURE OF PRIVACY AND FREEDOM
http://cyberpolitikjournal.org/index.php/main/article/view/263
<p>In the age of digital capitalism, the protection of human rights and freedoms is becoming an increasingly critical issue in terms of both international law and global politics. With the proliferation of AI-based surveillance systems, fundamental rights such as individual privacy, freedom of expression, and data security are being redefined within the economic and political functioning of digital infrastructures. This study examines how human rights are transformed in the digital age by presenting a theoretical framework that intersects the economic logic of digital capitalism with the social impacts of surveillance technologies. It demonstrates that privacy laws alone are insufficient to prevent the destructive effects of surveillance capitalism and that this can only be achieved through fundamental reforms in institutional structures and international legal regulations. In this context, the study aims to contribute to global debates on digital rights, algorithmic accountability, and data sovereignty. It also aims to provide a theoretical and practical basis for policymakers and legal reformers to develop normative mechanisms that protect individual freedoms in the face of digital surveillance. Ultimately, the article presents a comprehensive assessment of the future of privacy, freedom, and human dignity in the age of digital capitalism.</p>
Demet Şefika Mangır
Copyright (c) 2026
https://creativecommons.org/licenses/by-nc-sa/4.0
2026-01-31 · Vol. 10, No. 20, pp. 168–190
ARTIFICIAL INTELLIGENCE AND GLOBAL SECURITY ARCHITECTURE: MECHANISMS OF POWER DISTRIBUTION AND STRUCTURAL TRANSFORMATION
http://cyberpolitikjournal.org/index.php/main/article/view/264
<p>This study examines the structural impact of artificial intelligence (AI) on the global security architecture and the mechanisms transforming the distribution of power. The core research problem is the lack of a clear causal chain explaining how AI-driven shifts exert pressure on norms, institutions, and operational tools. This article conceptualizes AI not as a singular capability but as a “multiplier technology” that redefines power competition through compute concentration, operational speed, and modular governance. Adopting a theory-driven qualitative design, the research utilizes structured document analysis and mechanism-based reasoning to trace structural pressures across the normative, institutional, and operational layers of the security architecture. The findings indicate that AI compresses decision cycles and reshapes command-and-control and intelligence processes. However, the impact of this multiplier is not distributed uniformly. Concentration in computing power, advanced chips, and data centers produces a rigid strategic hierarchy. Consequently, power competition has evolved into an infrastructural struggle over critical inputs and the control of enabling infrastructures. This shift exerts structural pressure on the multi-layered security architecture, including the UN-centered collective security system, NATO, and arms control regimes. Given the deadlock in universal regimes caused by great power rivalry, the findings suggest that modular regulatory tools—such as technical standards and alliance-based arrangements—offer a more viable pathway for risk mitigation and governance.</p>
Kürşat Kan
Copyright (c) 2026
https://creativecommons.org/licenses/by-nc-sa/4.0
2026-01-31 · Vol. 10, No. 20, pp. 191–213
HUMANS IN THE CYBER LOOP: PERSPECTIVES ON SOCIAL CYBERSECURITY
http://cyberpolitikjournal.org/index.php/main/article/view/267
Selim Mürsel Yavuz
Copyright (c) 2026
https://creativecommons.org/licenses/by-nc-sa/4.0
2026-01-31 · Vol. 10, No. 20, pp. 226–229
DEMOCRACY IN THE AGE OF AI: THE FINE LINE BETWEEN THE KNOWN AND UNKNOWN
http://cyberpolitikjournal.org/index.php/main/article/view/265
Mihai Sebe
Alexandru Georgescu
Eliza Vaş
Copyright (c) 2026
https://creativecommons.org/licenses/by-nc-sa/4.0
2026-01-31 · Vol. 10, No. 20, pp. 215–219
DIGITAL WORLD-SYSTEM HIERARCHIES AND AI-DRIVEN SECURITY COMPETITION
http://cyberpolitikjournal.org/index.php/main/article/view/266
Merve Suna Özel-Özcan
Copyright (c) 2026
https://creativecommons.org/licenses/by-nc-sa/4.0
2026-01-31 · Vol. 10, No. 20, pp. 220–224