
DOI:  https://doi.org/10.36719/2663-4619/118/189-202

Nigar Garajamirli

SOCAR

https://orcid.org/0009-0008-8195-4629

nigar.garajamirli@gmail.com


Platform Governance in the Age of AI-Generated Media, Deepfake Technologies, and Fact-Checking Challenges

Abstract


This article examines the changing nature of platform governance in the age of deepfake technology and algorithmic manipulation. It analyzes how social media platforms are adapting their disinformation policies, the implications for fact-checking and journalism, Meta’s termination of its third-party fact-checking partnerships, and how new governance mechanisms such as X’s (formerly Twitter’s) Community Notes are shaping public discourse. The study also evaluates ethical and educational responses to algorithmic disinformation by drawing on international frameworks such as UNESCO’s AI teaching guides, MediaSmarts’ Break the Fake program, and WITNESS’s Prepare, Don’t Panic initiative. The aim is to assess platform policies not only from a technical and educational standpoint but also through the lenses of democratic participation, algorithmic bias, and social responsibility. The study finds that synthetic media and deepfake technologies have multi-layered impacts spanning technical, political, social, and ethical dimensions. Changes in platforms’ fact-checking policies, digital attacks on journalists, and practices for governing user-generated content pose serious risks to democratic participation. Meta’s termination of its fact-checking partnerships and mechanisms such as X’s Community Notes program have opened new avenues for combating misinformation, but the impartiality and effectiveness of these mechanisms remain in question.

Keywords: deepfake technologies, disinformation policy, fact-checking, platform moderation, community notes, media literacy, algorithmic bias
