Creating AI Art Responsibly: A Field Guide for Artists

Claire R. Leibowicz
Emily Saltz
Lia Coleman


Machine learning tools for generating synthetic media enable creative expression, but they can also produce content that misleads and causes harm. The Responsible AI Art Field Guide offers designers, artists, and other makers a starting point for using AI techniques responsibly. We suggest that artists and designers using AI situate their work within the broader context of responsible AI, attending to the potentially unintended harmful consequences of their work as understood in domains such as information security, misinformation, the environment, copyright, and biased and appropriative synthetic media. First, we describe the broader dynamics of generative media to emphasize that artists and designers using AI operate within a field with complex societal characteristics. We then describe our project, a guide focused on four key checkpoints in the lifecycle of AI creation: (1) dataset, (2) model code, (3) training resources, and (4) publishing and attribution. Ultimately, we emphasize the importance for artists and designers using AI of treating these checkpoints and provocations as a starting point for building a creative AI field attentive to the societal impacts of their work.

Article Details

How to Cite
Leibowicz, C., Saltz, E., & Coleman, L. (2021). Creating AI Art Responsibly: A Field Guide for Artists. Diseña, (19), Article 5.
Author Biographies

Claire R. Leibowicz, Partnership on AI

BA in Psychology and Computer Science, Harvard University. Master in the Social Science of the Internet, University of Oxford (as a Clarendon Scholar). She is the Head of the AI and Media Integrity program at the Partnership on AI, a global multistakeholder nonprofit devoted to responsible AI. Under her leadership, the AI and Media Integrity team investigates the impact of emerging AI technology on digital media and online information. She is also a 2021 Journalism Fellow at Tablet Magazine, where she is exploring questions at the intersection of technology, society, and digital culture, and an incoming doctoral candidate at the Oxford Internet Institute. Her latest publications include ‘Encounters with Visual Misinformation and Labels Across Platforms: An Interview and Diary Study to Inform Ecosystem Approaches to Misinformation Interventions’ (with E. Saltz and C. Wardle; Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, Issue 340) and ‘The Deepfake Detection Dilemma: A Multistakeholder Exploration of Adversarial Dynamics in Synthetic Media’ (with A. Ovadya and S. McGregor; Proceedings of the 2021 ACM Conference on Artificial Intelligence, Ethics, and Society).

Emily Saltz, The New York Times

Master in Human-Computer Interaction, Carnegie Mellon University. She is a UX Researcher studying media and misinformation, working with organizations like the Partnership on AI and First Draft. She led UX for The News Provenance Project at The New York Times, where she works as a UX researcher. Some of her work includes a collaboration on an AI-generated op-ed for author Oobah Butler on being catfished by AI (The Independent, 2021); explorations of text prediction software such as ‘Human-Human Autocompletion’ (presented at WordHack at Babycastles, 2020) and ‘Super Sad Googles’ (presented at Eyeo 2019); and ‘Filter Bubble Roulette’, a mobile VR experience to inhabit user-specific social media feeds (presented at The Tech Interactive in San Jose, 2018).

Lia Coleman, Rhode Island School of Design

BSc in Computer Science, Massachusetts Institute of Technology. She is an artist, AI researcher, and educator. As an Adjunct Professor at the Rhode Island School of Design, she teaches machine learning art. She is the author of ‘Machines Have Eyes’ (with A. Raina, M. Binnette, Y. Hu, D. Huang, Z. Davey, and Q. Li; in Big Data. Big Design: Why Designers Should Care About Machine Learning; Princeton Architectural Press, 2021), ‘Artʼificial’ (with E. Lee; Neocha Magazine, 2020), and ‘Flesh & Machine’ (with E. Lee; Neocha Magazine, 2020). Some of her recent workshops and talks include ‘How to Play Nice with Artificial Intelligence: Artist and AI Co-creation’ (presented at Burg Giebichenstein University of Art and Design, 2021); ‘A Field Guide to Making AI Art Responsibly’ (presented at Art Machines: International Symposium on ML and Art), and ‘How to Use AI for Your Art Responsibly’ (presented at Mozilla Festival, 2020 and Gray Area, 2020).

