Data: quality, bias mitigation, and ethical data collection practices
Models: architecture decisions, training methodologies, and deployment considerations
Users: ethical application, misuse prevention, and responsible usage patterns
Regulations: policy development, governance structures, and enforcement mechanisms
Generative AI is moving rapidly from research into real-world deployment across sectors, which elevates the need for responsible development, evaluation, and governance. This survey synthesizes the landscape of responsible generative AI across methods, benchmarks, and policies, and connects governance expectations to concrete engineering practice. We follow a prespecified search and screening protocol focused on the post-ChatGPT era, with selective inclusion of foundational work for definitions, and we conduct a narrative and thematic synthesis.
Three findings emerge. First, benchmark and practice coverage is dense for bias and toxicity but relatively sparse for privacy and provenance, deepfake and media-integrity risk, and system-level failure in tool-using and agentic settings. Second, many evaluations remain static and task-local, which limits the portability of evidence for audit and lifecycle assurance. Third, documentation and metric validity are inconsistent, which complicates comparison across releases and domains.
We outline a research and practice agenda that prioritizes adaptive and multimodal evaluation, privacy and provenance testing, deepfake risk assessment, calibration and uncertainty reporting, versioned and documented artifacts, and continuous monitoring. Limitations include reliance on public artifacts and the focus period, which may underrepresent capabilities reported later. The survey offers a path to align development and evaluation with governance needs and to support safe, transparent, and accountable deployment across domains.
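Among these agenda items, calibration and uncertainty reporting is concrete enough to sketch. Below is a minimal illustration of expected calibration error (ECE), the binned gap between a model's stated confidence and its observed accuracy; the function name, binning scheme, and synthetic data are illustrative assumptions, not code or results from the survey.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average gap between per-bin mean confidence and accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # Assign each prediction to an equal-width confidence bin over [0, 1].
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(confidences, edges[1:-1], right=True), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if not mask.any():
            continue
        # Gap between observed accuracy and mean confidence in this bin,
        # weighted by the fraction of predictions that fall in the bin.
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += (mask.sum() / len(confidences)) * gap
    return ece

# Illustrative usage with synthetic predictions (not real benchmark data):
# simulate a slightly overconfident model whose accuracy lags its confidence.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
correct = rng.uniform(size=1000) < (conf - 0.05)
print(f"ECE: {expected_calibration_error(conf, correct):.3f}")

Reporting a calibration measure like this alongside accuracy, per release and per domain, is one way the versioned documentation and continuous monitoring practices above become comparable across versions.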
This survey provides a comprehensive analysis of responsibility in generative AI systems, examining key stakeholders and their roles in ensuring ethical and sustainable AI development.
This survey builds upon extensive research in AI ethics, algorithmic fairness, and responsible AI development.
Key areas of related work include AI governance frameworks, algorithmic bias detection and mitigation, ethical AI design principles, and regulatory approaches to AI oversight.
Our work synthesizes insights from computer science, law, policy, and ethics to provide a holistic view of responsibility in the generative AI landscape.
@article{raza2025responsible,
title={Who is Responsible? The Data, Models, Users or Regulations? A Comprehensive Survey on Responsible Generative {AI} for a Sustainable Future},
author={Raza, Shaina and Qureshi, Rizwan and Zahid, Anam and Fioresi, Joseph and Sadak, Ferhat and Saeed, Muhammad and Sapkota, Ranjan and Jain, Aditya and Zafar, Anas and Hassan, Muneeb Ul and others},
journal={arXiv preprint arXiv:2502.08650},
year={2025}
}