The Legal Responsibility of Social Media Platforms for Automated Decisions Made by AI Algorithms: A Comparative Study of the Legal Systems in the United States, Europe, and Iran

Article Type: Research Article

Author

Emad Molla Ebrahimi
PhD, Department of Private Law, Faculty of Law, Central Tehran Branch, Islamic Azad University, Tehran, Iran.

DOI: 10.22059/mmr.2025.381742.1108

Abstract

Objective: This article examines the legal responsibility of social media platforms for the automated decisions made by AI algorithms. It first discusses the role and influence of these algorithms in managing content and user behavior, and then analyzes the legal challenges these decisions raise, including the lack of transparency, gaps in accountability, and potential bias.
Method: This descriptive-analytical study uses comparative legal methods, drawing on written and oral sources, electronic documents, and statutory materials. The findings are collected, organized, and categorized; the research questions are then analyzed; and conclusions are drawn.
Findings: By analyzing vast amounts of user data, AI algorithms perform tasks such as displaying content, ranking posts, removing inappropriate material, and detecting unlawful behavior. Their automated decisions, however, can create serious legal challenges for platforms, as they may lead to violations of users' rights, unfair discrimination, or significant errors. The article examines opportunities for improving and developing laws and regulations in this area, including establishing independent supervisory bodies, increasing transparency, and enabling users to contest automated decisions.
Conclusion: Given the pervasive role of AI in social media, more precise and transparent regulation can help resolve these legal challenges and create an opportunity to improve performance and the interaction between platforms and users.

Article Title [English]

The Legal Responsibility of Social Media Platforms for Automated Decisions Made by AI Algorithms: A Comparative Study of the Legal Systems in the United States, Europe, and Iran

Author [English]

  • Emad Molla Ebrahimi
PhD, Department of Private Law, Faculty of Law, Central Tehran Branch, Islamic Azad University, Tehran, Iran.
Abstract [English]

Objective
The rapid integration of artificial intelligence (AI) into social media platforms has transformed the digital ecosystem, reshaping how content is produced, disseminated, moderated, and consumed. Algorithms now determine what billions of users see, whom they interact with, and which narratives are amplified or suppressed. These systems perform tasks such as content curation, ranking posts, detecting illicit behavior, and removing harmful material with unprecedented speed and scale. Despite the immense societal benefits, AI-driven automated decisions have raised complex legal and ethical questions concerning transparency, accountability, fairness, and the protection of fundamental rights. This article aims to examine the extent to which social media platforms may be held legally responsible for automated decisions produced by machine-learning systems, especially when these decisions affect users’ rights, opportunities, or access to information.
The primary goal of this study is to analyze the legal responsibility of social media companies for the outcomes of automated decision-making systems powered by AI. While previous literature has explored challenges such as algorithmic bias and data governance, the legal dimensions—particularly the allocation of responsibility when automated decisions cause harm—remain insufficiently addressed. By examining these issues, the article contributes to an emerging and critically important debate regarding platform liability in the age of algorithmic governance. Specifically, the study investigates whether current legal frameworks are adequate for governing autonomous AI decisions, what gaps exist in user protection, and how regulatory systems might evolve to ensure that automated decision-making remains lawful, ethical, and accountable.
Research Methodology
This research employs a descriptive-analytical methodology grounded in comparative legal analysis. It draws on statutory materials, international legal instruments, regulatory proposals, case law, industry guidelines, academic publications, and electronic sources. The comparative dimension examines regulatory trends in the European Union, the United States, and Iran to assess whether existing frameworks such as the EU Digital Services Act, the EU AI Act, or Section 230 of the U.S. Communications Decency Act adequately address the challenges posed by AI-generated decisions. The study synthesizes doctrinal legal analysis with theoretical perspectives from technology governance, human rights law, and information law. Findings are categorized, systematized, and analyzed to develop a structured conceptual understanding of platform liability in AI-driven environments.
Findings
The analysis reveals that AI algorithms deployed by social media platforms exercise significant power over users’ digital experiences. These systems collect and process large volumes of user-generated data, generating predictive insights that shape decisions related to content visibility, moderation, and user segmentation. However, automated decisions raise at least three major legal concerns:

Lack of Transparency:

Many AI systems operate as “black boxes,” particularly those based on deep learning. Users often have no understanding of why content is removed, accounts are suspended, or certain posts are prioritized. The opacity of these decisions makes it difficult for users to challenge or appeal outcomes and hinders regulators from assessing compliance with legal standards. This lack of transparency undermines procedural fairness and violates principles found in emerging regulatory frameworks that emphasize explainability and documentation of algorithmic processes.

Accountability Gaps:

A central challenge is determining who is legally responsible when an algorithm makes a harmful or incorrect decision. Platforms frequently assert that automated decisions are neutral outputs of technical systems, not intentional human actions. Yet algorithms are designed, trained, fine-tuned, and supervised by human teams. When an automated system erroneously removes lawful content, discriminates against certain users, or disproportionately affects minority groups, identifying the responsible party becomes legally complex. In many jurisdictions, existing laws either do not explicitly cover algorithmic harms or provide platforms with broad immunity from liability.

Algorithmic Bias and Discrimination:

AI systems may inadvertently reproduce or amplify biases present in training datasets. For example, automated moderation tools often misclassify content from linguistic minorities, political dissidents, or marginalized communities. Biased algorithms can restrict freedom of expression, distort political discourse, and perpetuate digital inequality. The absence of robust oversight mechanisms allows such discriminatory outcomes to persist without adequate safeguards or remedies.
The study further finds that although social media platforms generate enormous societal value, the lack of effective regulation surrounding AI-driven content governance creates structural vulnerabilities. These vulnerabilities can result in wrongful account suspensions, uneven enforcement of community standards, suppression of legitimate speech, and wrongful reporting to law-enforcement authorities. On the regulatory side, while some jurisdictions (such as the EU) are developing stringent obligations for transparency, auditing, and human oversight, others (such as the U.S.) maintain broad immunity protections for platforms, creating global inconsistencies.
 
Opportunities and Recommendations
The research identifies several opportunities for strengthening legal and regulatory frameworks:
  • Independent Supervisory Bodies: Establishing specialized regulatory authorities capable of auditing algorithms, investigating platform practices, and imposing sanctions where necessary.
  • Transparency Obligations: Requiring platforms to disclose key information about how their algorithms function, including the criteria for content removal, ranking, and risk assessment.
  • User Rights and Redress Mechanisms: Ensuring users have meaningful avenues to challenge automated decisions, including the right to request human review and access to explanations.
  • Mandatory Bias Audits: Implementing regular third-party audits to detect discriminatory outcomes and ensure compliance with equality and human rights standards (a minimal illustrative sketch follows this list).
  • Harmonization of Global Standards: Developing coordinated international frameworks to avoid fragmentation and ensure consistent levels of user protection.
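To make the bias-audit recommendation concrete, the following minimal Python sketch applies one widely used heuristic, the "four-fifths" disparate-impact rule, to a hypothetical content-moderation log. The group labels, the data, and the 0.8 threshold are illustrative assumptions for exposition, not measures prescribed by this study.

# Minimal sketch of a third-party bias audit over content-moderation logs.
# Hypothetical data; the 0.8 cutoff follows the common "four-fifths"
# disparate-impact heuristic, not a threshold proposed in this article.
from collections import defaultdict

def removal_rates(decisions):
    """decisions: iterable of (group, was_removed) pairs from a moderation log."""
    removed = defaultdict(int)
    total = defaultdict(int)
    for group, was_removed in decisions:
        total[group] += 1
        removed[group] += int(was_removed)
    return {g: removed[g] / total[g] for g in total}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose content survives moderation at less than
    `threshold` times the rate of the best-treated group."""
    survival = {g: 1.0 - r for g, r in removal_rates(decisions).items()}
    best = max(survival.values())
    return {g: s / best for g, s in survival.items() if s / best < threshold}

# Hypothetical log: group A has 10% of posts removed, group B has 40%.
log = ([("A", False)] * 90 + [("A", True)] * 10
       + [("B", False)] * 60 + [("B", True)] * 40)
print(disparate_impact_flags(log))  # {'B': 0.666...}: group B is flagged

An audit along these lines would of course operate on real moderation logs and legally protected attributes; the point is only that the discriminatory outcomes the study describes are measurable, and therefore auditable.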
Discussion & Conclusion
As AI continues to become a foundational component of social media ecosystems, the legal implications of automated decision-making cannot be overlooked. The study concludes that existing regulatory frameworks are insufficient to address the unique risks posed by autonomous algorithmic systems. More precise, transparent, and enforceable rules are required to govern the deployment of AI in digital platforms, safeguard user rights, and promote accountability. Strengthening regulatory mechanisms will enhance trust, improve platform governance, and ensure healthier interactions between platforms and their users. Ultimately, the establishment of coherent legal frameworks that address transparency, accountability, and fairness will provide a more resilient foundation for digital ecosystems increasingly driven by algorithmic intelligence.

Keywords [English]

  • Platform
  • Social media
  • AI algorithms
  • Automated decisions
  • Legal challenges
References
Balkin, J. M. (2017). Free speech in the algorithmic society: Big data, private governance, and new school speech regulation. UC Davis Law Review, 51(3), 1149.
Hesam, A., Madibo Jooni, M., Miri, H. & Yamerli, S. (2021). Civil liability of online platforms arising from users’ violation of informational privacy: A comparative study of Iran, the United States, and the European Union. Communication Research Journal, 28(107), 69–91. (in Persian)
Najafi, H. & Madani, M. (2020). Participation in the infringement of intellectual property rights in Iranian and U.S. law. Tehran: Mizan Publishing. (in Persian)
Boyle, J. & Jenkins, J. (2024). Intellectual property: Law & the information society: Cases & materials. Center for the Study of the Public Domain.
Buntain, C. & Golbeck, J. (2017, November). Automatically identifying fake news in popular Twitter threads. In 2017 IEEE International Conference on Smart Cloud (SmartCloud) (pp. 208-215). IEEE.
European Commission. (2021). Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
European Commission. (2022). Digital services act package. Retrieved from [European Commission website].
Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S. & Yang, G. Z. (2019). XAI—Explainable Artificial Intelligence. Science Robotics, 4(37), eaay7120.
Kaplan, A. M. & Haenlein, M. (2010). Users of the world, unite! The challenges and opportunities of social media. Business Horizons, 53(1), 59-68.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Russell, S. & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson.
Sevanian, A. M. (2014). Section 230 of the Communications Decency Act: A "good samaritan" law without the requirement of acting as a "good samaritan". UCLA Entertainment Law Review, 21, 121.
Van der Sloot, B. (2015). Welcome to the jungle: The liability of internet intermediaries for privacy violations in Europe. Journal of Intellectual Property, Information Technology and Electronic Commerce Law, 6, 211.
Veale, M. & Borgesius, F. Z. (2021). Demystifying the draft EU artificial intelligence act. Computer Law Review International, 22(4), 97-112.