In this contribution, we present a visual approach to studying the development of the online representation of climate change. We collected ranked image lists over a twelve-year timespan on Google Image Search and analyzed them with a two-fold visualization: an image timeline of the top 5 images per year, and an area bump chart showing the top 10 tags automatically detected by a computer vision algorithm in the larger dataset of the top 100 results per year. We draw two main conclusions from these results. First, the artificial separation between climate change and humans identified in previous studies of climate change imagery is being perpetuated and reinforced on one of the most important digital locations for visual culture: Google Images. Second, there is a notable homogeneity within the corpus of images, as well as stability over time.
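The tag ranking behind the area bump chart can be sketched as counting, for each year, the most frequent labels returned for that year's image results. This is a minimal sketch with hypothetical tags and a hypothetical `top_tags_per_year` helper; the actual computer vision pipeline and tag vocabulary used in the study are not specified here:

```python
from collections import Counter

def top_tags_per_year(tags_by_year, k=10):
    """For each year, rank the k most frequent tags detected
    across that year's image results."""
    return {
        year: [tag for tag, _ in Counter(tags).most_common(k)]
        for year, tags in tags_by_year.items()
    }

# Hypothetical miniature example: tags detected in a few images per year.
sample = {
    2008: ["polar bear", "glacier", "glacier", "smokestack"],
    2019: ["protest", "protest", "wildfire", "glacier"],
}
print(top_tags_per_year(sample, k=2))
```

The per-year rankings produced this way are exactly the input an area bump chart needs: one ordered list of tags per time step, whose rank changes can then be traced across years.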
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
All contents of this electronic edition are distributed under the Creative Commons "Attribution-ShareAlike 4.0 International" (CC BY-SA) license. Any total or partial reproduction of the material must mention its origin.
The rights to the published images belong to their authors, who grant Diseña a license for their use. Managing the permissions and authorizations to publish any copyrighted images (or other material) in this publication, along with the associated reproduction rights, is the sole responsibility of the authors of the articles.