Document Type : Research Article
Author
Assistant Professor of Private Law, Department (The Research Group) of Islamic Jurisprudence and Law, Institute for Islamic Studies in Humanities, Ferdowsi University of Mashhad, Mashhad, Iran
Judicial Security Document, adopted 1399 SH [2020]. Retrieved Ordibehesht 2, 1404 [April 22, 2025] from: https://rc.majlis.ir/fa/law/show/1623986
National Artificial Intelligence Document of the Islamic Republic, adopted by the Supreme Council of the Cultural Revolution, Tir 1403 [June–July 2024]. Retrieved Farvardin 25, 1404 [April 14, 2025] from: https://rc.majlis.ir/fa/law/show/1811432
Constitution of the Islamic Republic of Iran, adopted 1358 SH [1979], as amended 1368 SH [1989].
Code of Civil Procedure, adopted 1379 SH [2000].
Civil Liability Act, adopted 1339 SH [1960].
Islamic Parliament Research Center (1403 SH [2024]). Personal Data Protection Bill. Tehran: Islamic Parliament Research Center. Retrieved Farvardin 20, 1404 [April 9, 2025] from: https://rc.majlis.ir/fa/legal_draft/show/1816729
Charter on Citizens' Rights, promulgated by the President, 1395 SH [2016].
Arsenault, P.-D., Wang, S., & Patenaude, J.-M. (2024). A Survey of Explainable Artificial Intelligence (XAI) in Financial Time Series Forecasting. arXiv preprint. https://doi.org/10.48550/ARXIV.2407.15909
Bechler-Speicher, M., Globerson, A., & Gilad-Bachrach, R. (2024). The Intelligible and Effective Graph Neural Additive Networks. arXiv preprint. https://doi.org/10.48550/ARXIV.2406.01317
Berry, D. M. (2023). Explanatory Publics: Explainability and Democratic Thought. arXiv preprint. https://doi.org/10.48550/ARXIV.2304.02108
Bordt, S., Finck, M., Raidl, E., & Von Luxburg, U. (2022). Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT) (pp. 891–905). ACM. https://doi.org/10.1145/3531146.3533153
Dyoub, A., Costantini, S., & Lisi, F. A. (2020). Logic Programming and Machine Ethics. Electronic Proceedings in Theoretical Computer Science, 325, 6–17. https://doi.org/10.4204/EPTCS.325.6
European Parliament & Council of the European Union (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), Text with EEA Relevance. Official Journal of the European Union, L 1689, 1–144.
Goodman, B. & Flaxman, S. (2017). European Union Regulations on Algorithmic Decision Making and a “Right to Explanation”. AI Magazine, 38(3), 50–57. https://doi.org/10.1609/aimag.v38i3.2741
Gorski, L., Ramakrishna, S., & Nowosielski, J. M. (2020). Towards Grad-CAM Based Explainability in a Legal Text Processing Pipeline. arXiv preprint. https://doi.org/10.48550/ARXIV.2012.09603
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Pedreschi, D., & Giannotti, F. (2018). A Survey of Methods for Explaining Black Box Models. arXiv preprint. https://doi.org/10.48550/ARXIV.1802.01933
Gurrapu, S., Huang, L., & Batarseh, F. A. (2023). ExClaim: Explainable Neural Claim Verification Using Rationalization. arXiv preprint. https://doi.org/10.48550/ARXIV.2301.08914
Gyevnar, B., Ferguson, N., & Schafer, B. (2023). Bridging the Transparency Gap: What Can Explainable AI Learn from the AI Act? In K. Gal, A. Nowé, G. J. Nalepa, R. Fairstein, & R. Rădulescu (Eds.), Frontiers in Artificial Intelligence and Applications (Vol. 365, pp. 27–39). IOS Press. https://doi.org/10.3233/FAIA230367
Kirat, T., Tambou, O., Do, V., & Tsoukiàs, A. (2022). Fairness and Explainability in Automatic Decision-Making Systems: A Challenge for Computer Science and Law. arXiv preprint. https://doi.org/10.48550/ARXIV.2206.03226
Leofante, F., Ayoobi, H., Dejl, A., Freedman, G., Gorur, D., Jiang, J., et al. (2024). Contestable AI Needs Computational Argumentation. arXiv preprint. https://doi.org/10.48550/ARXIV.2405.10729
Licato, J., Fields, L., & Marji, Z. (2022). Resolving Open-Textured Rules with Templated Interpretive Arguments. arXiv preprint. https://doi.org/10.48550/ARXIV.2212.09700
Lin, N., Liu, H., Fang, J., Zhou, D., & Yang, A. (2023). An Interpretability Framework for Similar Case Matching. arXiv preprint. https://doi.org/10.48550/ARXIV.2304.01622
Mansi, G. & Riedl, M. (2024). Recognizing Lawyers as AI Creators and Intermediaries in Contestability. arXiv preprint. https://doi.org/10.48550/ARXIV.2409.17626
Minh, D., Wang, H. X., Li, Y. F., & Nguyen, T. N. (2022). Explainable Artificial Intelligence: A Comprehensive Review. Artificial Intelligence Review, 55(5), 3503–3568. https://doi.org/10.1007/s10462-021-10088-y
Mollas, I., Bassiliades, N., & Tsoumakas, G. (2020). Altruist: Argumentative Explanations through Local Interpretations of Predictive Models. arXiv preprint. https://doi.org/10.48550/ARXIV.2010.07650
Mumford, J., Atkinson, K., & Bench-Capon, T. (2022). Reasoning with Legal Cases: A Hybrid ADF-ML Approach. In E. Francesconi, G. Borges, & C. Sorge (Eds.), Legal Knowledge and Information Systems: JURIX 2022 (Vol. 359, pp. 97–106). IOS Press. https://doi.org/10.3233/FAIA220452
Nigam, S. K., Deroy, A., Maity, S., & Bhattacharya, A. (2024). Rethinking Legal Judgement Prediction in a Realistic Scenario in the Era of Large Language Models. In Proceedings of the Natural Legal Language Processing Workshop 2024 (pp. 61–80). ACL. https://doi.org/10.18653/v1/2024.nllp-1.6
Park, M. & Chai, S. (2021). AI Model for Predicting Legal Judgments to Improve Accuracy and Explainability of Online Privacy Invasion Cases. Applied Sciences, 11(23), 11080. https://doi.org/10.3390/app112311080
Rozen, H. W., Elkin-Koren, N., & Gilad-Bachrach, R. (2023). The Case Against Explainability. arXiv preprint. https://doi.org/10.48550/ARXIV.2305.12167
Rudin, C. (2018). Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. arXiv preprint. https://doi.org/10.48550/ARXIV.1811.10154
Sovrano, F. & Vitali, F. (2021). An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability. arXiv preprint. https://doi.org/10.48550/ARXIV.2109.05327
Steging, C., Renooij, S., & Verheij, B. (2021a). Discovering the Rationale of Decisions: Experiments on Aligning Learning and Reasoning. arXiv preprint. https://doi.org/10.48550/ARXIV.2105.06758
Steging, C., Renooij, S., & Verheij, B. (2021b). Rationale Discovery and Explainable AI. In E. Schweighofer (Ed.), Frontiers in Artificial Intelligence and Applications (Vol. 341, pp. 137–150). IOS Press. https://doi.org/10.3233/FAIA210341
Vermeire, T. & Martens, D. (2020). Explainable Image Classification with Evidence Counterfactual. arXiv preprint. https://doi.org/10.48550/ARXIV.2004.07511
Zhang, N. & Zhang, Z. (2023). The Application of Cognitive Neuroscience to Judicial Models: Recent Progress and Trends. Frontiers in Neuroscience, 17, 1257004. https://doi.org/10.3389/fnins.2023.1257004
Zhang, Y., Tiňo, P., Leonardis, A., & Tang, K. (2020). A Survey on Neural Network Interpretability. arXiv preprint. https://doi.org/10.48550/ARXIV.2012.14261
Zhu, Z. (2021). Legal Regulation of Algorithmic Discrimination. Advances in Social Behavior Research, 1(1), 65–72. https://doi.org/10.54254/2753-7102/1/2021009