Spotlight on Focus area: Information cultures, data and technology in environmental communication

In this first feature, we encounter Jutta Haider, research leader of the Mistra Environmental Communication focus area “Information cultures, data and technology in environmental communication” and Professor of Information Studies at the University of Borås, Sweden.
The focus area on “Information” centres on the crucial question of how information technologies and data shape the ways in which environmental concerns are framed and articulated today. Across applications, services, people, and organisations, data-driven practices create, share, and (re)assemble knowledge in specific ways at the expense of others. The number, sophistication, and use of such information technologies are constantly increasing, and their impact on environmental governance will only intensify in the coming years.
Jutta, in recent months your Mistra EC work has featured in several news outlets (here, here and here). Could you give us a brief account of the concerns and challenges raised in these pieces?
The work we do in our work package concerns datafication and how that shapes what we know and don’t know about environmental issues, and how that in turn forms practices and evidence. In the last year or so, a lot of public and media attention has turned to generative AI, mostly ChatGPT and similar tools. However, this has also led to the intensification of many pre-existing concerns, which are part of what can be described as a crisis of information. Some of these concerns, such as how to determine the credibility and trustworthiness of information, the extreme control corporate platforms have over information infrastructures and public knowledge, and the value of science and its various assigned roles, are being reconfigured and discussed with even more urgency. Environmental issues and the massive changes occurring simultaneously right now are at the forefront here. Our observations and commentaries are part of that context, and I hope and think we can make a valuable contribution. Perhaps we can disrupt some troubling developments.
For example, we examined papers, very obviously produced using ChatGPT, and their dissemination through the seemingly straightforward and easy-to-use academic search engine Google Scholar, and into all kinds of repositories, archives, and networks. Such production raises questions about how the public and policymakers perceive and access scientific evidence in environmental domains. We have also examined attempts by legacy media to portray climate change as an economic benefit for Sweden, or perhaps the bind they are in to do so. It might seem like these are very different questions, but to me (though I am sure my Mistra EC colleagues have their own interpretations) it boils down to two things: highlighting that the unequal distribution of control over information infrastructures has profound implications for how societies can respond to environmental crises, and emphasising that environmental meaning-making and infrastructural meaning-making are co-constituted. My interest in this co-constitution lies with what happens right now, at a moment in time when hyper-datafication and the existential rawness of nature collide, for lack of better words; but of course, it has a very rich history that also needs to be taken into account.
Your work has received considerable attention and has been picked up by other outlets. Why do you think this is?
At first I was a bit surprised, to be honest, that the paper on ChatGPT-created papers and how they spread on Google Scholar got as much attention as it did. It was almost a shock. No other paper in the journal comes even close to the number of downloads and reads that ours attracted. But on second thoughts, I probably shouldn’t have been surprised. There is an unhealthy hype around these so-called AI tools, and there is a media logic that responds to hype but also needs to keep it going. We were of course pleased with the attention, and new connections came of it, but our main argument got a bit lost. It revolved around Google Scholar, the potential for evidence hacking, and the associated challenges for media and information literacy. Most of the reporting didn’t really mention these. This is something we need to work on. But it’s difficult, because it’s slow and can’t be quantified, which makes for boring headlines, I guess. Although now we have come across comments and reporting that do pick up on these points, develop them further, and keep the nuance. That’s good.
Is there enough awareness and regulations around these issues today? Does society need to do more to address the concerns you raise?
The short answer is: no, there is not, and yes, a lot more!
I recently also published a text in Tech Policy Press on the new AI-infused search engines – like Microsoft Copilot – and how they literally spread climate denial and even direct people into rather extreme climate conspiracy rabbit holes. The article came into being as a spin-off that developed during the material collection for part of the research on advocacy issues in our work package. That article was well-received but did not receive the same level of attention as the ChatGPT one. At first I was a bit surprised, but then again, I shouldn’t have been. Despite all the rhetoric, Big Tech platforms (Google, Microsoft, Facebook, OpenAI, and others) do not treat environmental destruction and climate breakdown as a priority. This is really obvious if you study their policies and guidelines for content control, where they outline what is “okay” to post, what is not, and why. The environment is just not there. It still isn’t connected to systemic risks or to any form of societal damage, social problem, or however else harm is framed in other domains, like health or finance. It’s not even on the list of risks to society or to people’s lives, futures, and so on. And that is just the very obvious stuff: blatant, old-school climate denial, truly wild conspiracies about renewable energy, or extreme ideas about a cabal of climate activists. So awareness or regulation of anything more subtle, like algorithms that advance hyperconsumption, fast fashion, and so on, seems way out of reach. But maybe it’s not; maybe it’s easier to regulate precisely because it’s much more boring. There is little hype around it, and that might turn out to be a good thing.
Finally, what do you think we need to pay more attention to, whether in everyday life or as a research community when it comes to the issues you work on?
My mantra is always that we need to pay attention to the boring stuff, to small frictions and minor breakdowns. Infrastructures, standards, and routines are boring, but they are where change happens, where power takes shape, where discourse solidifies, and this can be transformative. (And I am really working on taming my cynical impulses!)
"Spotlight on Focus area: Information cultures, data and technology in environmental communication"
In this first feature, we encounter Jutta Haider, research leader of the Mistra Environmental Communication focus area “Information cultures, data and technology in environmental communication” and Professor of Information studies at the University of Borås, Sweden.
The focus area on “Information” centres around the crucial issues of how information technologies and data shape the ways in which environmental concerns are framed and articulated today. Across applications, services, people, and organisations, datadriven practices create, share and (re)assembles knowledge in specific ways at the expense of others. The number, sophistication, and use of such information technologies are constantly increasing. Its impact on environmental governance will increase and intensify in the coming years.
Jutta, in recent months your Mistra EC work has featured in several news outlets (here, here and here). Could you give us a brief account of the concerns and challenges raised in these pieces?
The work we do in our work package concerns datafication and how that shapes what we know and don’t know about environmental issues, and how that in turn forms practices and evidence. In the last year or so, a lot of public and media attention has turned to generative AI, mostly ChatGPT and similar tools. However, this has also led to the intensification of many pre-existing concerns, which are part of what can be described as a crisis of information. Some of these concerns, such as how to determine the credibility and trustworthiness of information, the extreme control corporate platforms have over information infrastructures and public knowledge, and the value of science and its various assigned roles, are being reconfigured and discussed with even more urgency. Environmental issues and the massive changes occurring simultaneously right now are at the forefront here. Our observations and commentaries are part of that context, and I hope and think we can make a valuable contribution. Perhaps we can disrupt some troubling developments.
For example, we examined papers, very obviously produced using ChatGPT, and their dissemination through the seemingly straightforward and easy-to-use academic search engine, Google Scholar and into all kinds of repositories, archives, and networks. Such production raises questions about how the public and policymakers perceive and access scientific evidence in environmental domains, but also attempts by legacy media to portray climate change as an economic benefit for Sweden, or perhaps they are in a bind to do so. It might seem like these are very different questions, but to me—but I am sure or I know that my Mistra EC colleagues also have different interpretations—it boils down to highlighting that the unequal distribution of control over information infrastructures has profound implications for how societies can respond to environmental crises and to emphasising that environmental meaning-making and infrastructural meaning-making are co-constituted. My interest in this co-constitution lies with what happens right now, at a moment in time where hyper-datafication and existential rawness of nature collide, for lack of better words, but of course, it has a very rich history that also needs to be taken into account.
Your work has received quite some attention and has been picked up by other papers, why do you think this is?
At first I was a bit surprised, to be honest, that the paper on ChatGPT-created papers and how they spread on Google Scholar got as much attention as it did. It was almost a shock. No other paper in the journal comes even close to the number of downloads and reads that our paper attracted. But on second thoughts, I probably shouldn’t have been. There is an unhealthy hype around these so-called AI tools, and there is a media logic that responds to hype but also needs to keep it going. We were of course pleased with the attention and new connections came of it, but our main argument got a bit lost. Our main argument revolved around Google Scholar, the potential for evidence hacking, and the associated challenges for media and information literacy. Most of the reporting didn’t really mention these. This is something we need to work on. But it’s difficult because it’s slow and can’t be quantified, which makes for boring headlines, I guess. Although now, we have come across comments and reporting that does pick up on it and develop it further and keeps the nuance. That’s good.
Is there enough awareness and regulations around these issues today? Does society need to do more to address the concerns you raise?
The short answer is no there is not and yes a lot more!
I recently also published a text in TechPolicy Press on the new AI-infused search engines – like Microsoft CoPilot – and how they literally spread climate denial and even direct people into rather extreme climate conspiracy rabbit holes. The article came into being as a spin-off that developed during the material collection for part of the research on advocacy issues in our work package. That article was well-received, but did not receive the same level of attention at the ChatGPT one. At first I was a bit surprised, but then again, I shouldn’t have been. Despite all the rhetoric, Big Tech platforms—Google, Microsoft, Facebook, OpenAI, and others—do not prioritize environmental destruction and climate breakdown. This is really obvious if you study their policies and guidelines for content control, where they outline what is “okay” to post and what is not and why and so on. The environment is just not there. It still isn’t considered related to systemic risks or to any form of societal damages, social problems, or however else harm is framed in other domains, like health or finance. It’s not even on the list of risks to society or to people’s lives, futures and so on. And that is the very obvious stuff, like blatant, old-school climate denial, truly wild conspiracies about renewable energy, or extreme ideas about a cabal of climate activists. So, awareness or regulation of anything more subtle, like algorithms that advance hyperconsumption, fast fashion, and so on, seems way out of reach. But maybe it’s not; maybe it’s easier to regulate because it’s much more boring; there is little hype around it, and that might turn out to be a good thing.
Finally, what do you think we need to pay more attention to, whether in everyday life or as a research community when it comes to the issues you work on?
My mantra is always that we need to pay attention to the boring stuff, to small frictions and minor breakdowns. Infrastructures, standards, and routines are boring, but they are where change happens, where power takes shape, where discourse solidifies, and this can be transformative. (And I am really working on taming my cynical impulses!)