Machine learning tools for generating synthetic media enable creative expression, but they can also produce content that misleads and causes harm. The Responsible AI Art Field Guide offers designers, artists, and other makers a starting point for using AI techniques responsibly. We suggest that artists and designers working with AI situate their work within the broader context of responsible AI, attending to the potentially harmful unintended consequences of their work as understood in domains such as information security, misinformation, the environment, copyright, and biased or appropriative synthetic media. First, we describe the broader dynamics of generative media to emphasize that artists and designers using AI operate within a field with complex societal characteristics. We then describe our project, a guide focused on four key checkpoints in the lifecycle of AI creation: (1) the dataset, (2) the model code, (3) training resources, and (4) publishing and attribution. Ultimately, we emphasize the importance for artists and designers using AI of treating these checkpoints and provocations as a starting point for building out a creative AI field attentive to the societal impacts of their work.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
The rights to the published images belong to their authors, who grant Diseña a license for their use. The management of permissions and the authorization to publish images (or any other material) subject to copyright, together with the associated reproduction rights, is the sole responsibility of the authors of the articles.