Article Type: Research Article
Author
Ph.D., Department of Private Law, Faculty of Law, Central Tehran Branch, Islamic Azad University, Tehran, Iran.
Abstract
Objective
The rapid integration of artificial intelligence (AI) into social media platforms has transformed the digital ecosystem, reshaping how content is produced, disseminated, moderated, and consumed. Algorithms now determine what billions of users see, whom they interact with, and which narratives are amplified or suppressed. These systems curate content, rank posts, detect illicit behavior, and remove harmful material with unprecedented speed and scale. Despite their immense societal benefits, AI-driven automated decisions raise complex legal and ethical questions concerning transparency, accountability, fairness, and the protection of fundamental rights. This article examines the extent to which social media platforms may be held legally responsible for automated decisions produced by machine-learning systems, especially when these decisions affect users’ rights, opportunities, or access to information.
The primary goal of this study is to analyze the legal responsibility of social media companies for the outcomes of automated decision-making systems powered by AI. While previous literature has explored challenges such as algorithmic bias and data governance, the legal dimensions—particularly the allocation of responsibility when automated decisions cause harm—remain insufficiently addressed. By examining these issues, the article contributes to an emerging and critically important debate regarding platform liability in the age of algorithmic governance. Specifically, the study investigates whether current legal frameworks are adequate for governing autonomous AI decisions, what gaps exist in user protection, and how regulatory systems might evolve to ensure that automated decision-making remains lawful, ethical, and accountable.
Research Methodology
This research employs a descriptive–analytical methodology grounded in comparative legal analysis. It draws on statutory materials, international legal instruments, regulatory proposals, case law, industry guidelines, academic publications, and electronic sources. The comparative dimension examines regulatory trends in multiple jurisdictions—particularly the European Union, the United States, and selected Asia-Pacific countries—to assess whether existing frameworks such as the EU Digital Services Act, the EU AI Act, or Section 230 of the U.S. Communications Decency Act adequately address challenges posed by AI-generated decisions. The study synthesizes doctrinal legal analysis with theoretical perspectives from technology governance, human rights law, and information law. Findings are categorized, systematized, and analyzed to develop a structured conceptual understanding of platform liability within AI-driven environments.
Findings
The analysis reveals that AI algorithms deployed by social media platforms exercise significant power over users’ digital experiences. These systems collect and process large volumes of user-generated data, generating predictive insights that shape decisions related to content visibility, moderation, and user segmentation. However, automated decisions raise at least three major legal concerns:
Lack of Transparency:
Many AI systems operate as “black boxes,” particularly those based on deep learning. Users often have no understanding of why content is removed, accounts are suspended, or certain posts are prioritized. The opacity of these decisions makes it difficult for users to challenge or appeal outcomes and hinders regulators from assessing compliance with legal standards. This lack of transparency undermines procedural fairness and violates principles found in emerging regulatory frameworks that emphasize explainability and documentation of algorithmic processes.
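To illustrate what “explainability and documentation of algorithmic processes” can mean in practice, the following minimal Python sketch shows a machine-readable decision record that a platform could log alongside each automated moderation action. The field names, policy labels, and threshold are hypothetical assumptions for illustration, not any platform’s actual schema or a legally prescribed format.

```python
# Hypothetical sketch: a machine-readable moderation decision record.
# Field names, policy labels, and the threshold are illustrative
# assumptions, not any platform's actual schema or a mandated format.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class ModerationDecisionRecord:
    content_id: str
    action: str                 # e.g. "remove", "demote", "no_action"
    policy_clause: str          # the community-standard rule relied on
    model_version: str          # which classifier produced the score
    model_score: float          # the score that triggered the action
    threshold: float            # the decision threshold in force
    human_reviewed: bool        # whether a human confirmed the action
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def user_facing_explanation(self) -> str:
        """A plain-language summary a user or regulator could inspect."""
        return (
            f"Content {self.content_id} received action '{self.action}' "
            f"under policy clause {self.policy_clause} because model "
            f"{self.model_version} scored it {self.model_score:.2f} "
            f"(threshold {self.threshold:.2f}); human review: "
            f"{'yes' if self.human_reviewed else 'no'}."
        )


if __name__ == "__main__":
    record = ModerationDecisionRecord(
        content_id="post-10482",
        action="remove",
        policy_clause="hate-speech-3.1",
        model_version="toxicity-clf-2024-07",
        model_score=0.93,
        threshold=0.90,
        human_reviewed=False,
    )
    print(record.user_facing_explanation())
    print(json.dumps(asdict(record), indent=2))  # auditable log entry
```

Even a simple record of this kind would give users a basis for appeal and regulators a basis for review, which is precisely what opaque “black box” pipelines currently lack.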
Accountability Gaps:
A central challenge is determining who is legally responsible when an algorithm makes a harmful or incorrect decision. Platforms frequently assert that automated decisions are neutral outputs of technical systems, not intentional human actions. Yet algorithms are designed, trained, fine-tuned, and supervised by human teams. When an automated system erroneously removes lawful content, discriminates against certain users, or disproportionately affects minority groups, identifying the responsible party becomes legally complex. In many jurisdictions, existing laws either do not explicitly cover algorithmic harms or provide platforms with broad immunity from liability.
Algorithmic Bias and Discrimination:
AI systems may inadvertently reproduce or amplify biases present in training datasets. For example, automated moderation tools often misclassify content from linguistic minorities, political dissidents, or marginalized communities. Biased algorithms can restrict freedom of expression, distort political discourse, and perpetuate digital inequality. The absence of robust oversight mechanisms allows such discriminatory outcomes to persist without adequate safeguards or remedies.
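The bias concern is measurable in principle. The minimal sketch below, written in Python with entirely synthetic data, shows how an audit might compare false-positive rates of a moderation classifier across language groups; the group labels, figures, and decision records are illustrative assumptions rather than findings about any real platform.

```python
# Minimal sketch with synthetic data: measuring whether an automated
# moderation classifier wrongly flags lawful content more often for some
# language groups than others. All records below are illustrative.
from collections import defaultdict

# Each record: (language_group, flagged_by_model, actually_violating)
synthetic_decisions = [
    ("majority_lang", True,  True),
    ("majority_lang", False, False),
    ("majority_lang", False, False),
    ("majority_lang", True,  False),
    ("minority_lang", True,  False),
    ("minority_lang", True,  False),
    ("minority_lang", True,  True),
    ("minority_lang", False, False),
]


def false_positive_rates(decisions):
    """False-positive rate per group: lawful posts wrongly flagged."""
    flagged_lawful = defaultdict(int)
    total_lawful = defaultdict(int)
    for group, flagged, violating in decisions:
        if not violating:               # only lawful content counts
            total_lawful[group] += 1
            if flagged:
                flagged_lawful[group] += 1
    return {
        group: flagged_lawful[group] / total_lawful[group]
        for group in total_lawful
    }


rates = false_positive_rates(synthetic_decisions)
for group, fpr in rates.items():
    print(f"{group}: false-positive rate = {fpr:.0%}")

# A large gap between groups (here the minority-language group's lawful
# posts are wrongly flagged far more often) is the kind of disparity a
# bias audit would surface and a regulator could require to be remedied.
```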
The study further finds that although social media platforms generate enormous societal value, the lack of effective regulation surrounding AI-driven content governance creates structural vulnerabilities. These vulnerabilities can result in wrongful account suspensions, uneven enforcement of community standards, suppression of legitimate speech, and wrongful reporting to law-enforcement authorities. On the regulatory side, while some jurisdictions (such as the EU) are developing stringent obligations for transparency, auditing, and human oversight, others (such as the U.S.) maintain broad immunity protections for platforms, creating global inconsistencies.
Opportunities and Recommendations:
The research identifies several opportunities for strengthening legal and regulatory frameworks:
Independent Supervisory Bodies: Establishing specialized regulatory authorities capable of auditing algorithms, investigating platform practices, and imposing sanctions where necessary.
Transparency Obligations: Requiring platforms to disclose key information about how algorithms function, including criteria for content removal, ranking, and risk assessment.
User Rights and Redress Mechanisms: Ensuring users have meaningful avenues to challenge automated decisions, including the right to request human review and access to explanations (a minimal sketch of such an appeal flow follows this list).
Mandatory Bias Audits: Implementing regular third-party audits to detect discriminatory outcomes and ensure compliance with equality and human rights standards.
Harmonization of Global Standards: Developing coordinated international frameworks to avoid fragmentation and ensure consistent levels of user protection.
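By way of illustration only, the following Python sketch shows one way the redress mechanism described above might route a contested, fully automated decision to human review. The class names, statuses, and escalation rule are hypothetical assumptions, not a procedure required by any of the frameworks discussed.

```python
# Hypothetical sketch of a redress flow: a contested automated decision
# is escalated to human review so the user receives a reasoned outcome.
# Class names, statuses, and the escalation rule are illustrative.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AutomatedDecision:
    content_id: str
    action: str            # e.g. "remove", "suspend_account"
    model_score: float
    fully_automated: bool  # True if no human was in the loop


@dataclass
class Appeal:
    decision: AutomatedDecision
    user_statement: str
    status: str = "received"
    reviewer_note: Optional[str] = None


def handle_appeal(appeal: Appeal, human_review_queue: list) -> Appeal:
    """Escalate fully automated decisions to a human reviewer;
    otherwise mark the appeal as awaiting a documented response."""
    if appeal.decision.fully_automated:
        appeal.status = "escalated_to_human_review"
        human_review_queue.append(appeal)
    else:
        appeal.status = "pending_documented_response"
    return appeal


queue: list = []
decision = AutomatedDecision("post-10482", "remove", 0.93, fully_automated=True)
appeal = handle_appeal(Appeal(decision, "My post quoted a news report."), queue)
print(appeal.status)   # escalated_to_human_review
print(len(queue))      # 1 item awaiting a human decision
```

The design point is modest: a decision taken without human involvement should, at a minimum, trigger human involvement once it is contested.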
Discussion & Conclusion
As AI continues to become a foundational component of social media ecosystems, the legal implications of automated decision-making cannot be overlooked. The study concludes that existing regulatory frameworks are insufficient to address the unique risks posed by autonomous algorithmic systems. More precise, transparent, and enforceable rules are required to govern the deployment of AI in digital platforms, safeguard user rights, and promote accountability. Strengthening regulatory mechanisms will enhance trust, improve platform governance, and ensure healthier interactions between platforms and their users. Ultimately, the establishment of coherent legal frameworks that address transparency, accountability, and fairness will provide a more resilient foundation for digital ecosystems increasingly driven by algorithmic intelligence.