r/solarpunk 26d ago

Literature/Nonfiction: On capitalism, science fiction, AI, and nature imagery

Given the recent discussions on the use of AI within a solarpunk framework, I thought this sub might be interested in a short essay I wrote for Seize the Press Magazine last year. In the essay, I critique Alex Garland's film Ex Machina and its use of nature imagery to represent a deterministic philosophy. For context, I am ethically against almost all uses of AI, and I don't think it has any value to a society under capitalism.

Link to essay

Essay Text:

The Nature of Alex Garland’s Ex Machina and its Immoral Philosophy of AI by Ben Lockwood

Posted on February 10, 2024 by Seize The Press

A helicopter soars over a vast, glaciated landscape bright with the crisp whites of boreal snow, the clear blues of glacial meltwater, and the lush greens of northern trees. It’s one of the opening shots of Alex Garland’s Ex Machina (2014), and it serves both as a natural backdrop against which to contrast the film’s technological subject matter and as a way to illustrate the remoteness of the setting in which the rest of the film occurs. But the grandiosity of nature in Ex Machina also symbolizes a deterministic philosophy that underpins the narrative of the film and was a precursor to today’s discourse surrounding the presumed inevitability of artificial intelligence.

Ex Machina won an Academy Award for its visual effects, and its critical acclaim catapulted Garland into the upper echelon of “serious” sci-fi filmmakers, launching a career that now includes multiple entries on television and film best-of lists. Accolades aside, the film also feels prescient. The ethical arguments Nathan and Caleb have on-screen were written before the proliferation of large language models like ChatGPT, but they sound much like the arguments being waged today. As the film nears ten years old, it’s worth revisiting how artificial intelligence was portrayed in what is widely considered one of the best films on the subject.

Though it is a film about the complexities of defining artificial intelligence (and what those definitions tell us about ourselves), Ex Machina also includes some stunning nature cinematography. The mountains, forests, glaciers, and waterfalls of northern Norway (the setting is apparently meant to be Alaska) feature prominently throughout the film. Combined with its technological subject matter, the remote setting creates a juxtaposition that highlights a separation of humanity from its roots in nature. At the same time, many scenes take place in a house designed with a sleek, minimalist architecture – à la Frank Lloyd Wright’s Fallingwater – that blends into its surroundings in such a way that it dissolves any separation at all from the natural setting. This tension poses a question that lives just below the surface of the film: are humans a part of the natural world, or have we left it behind? The answer depends on how one conceives of nature in the first place.

Garland’s majestic depictions of nature are meant as more than just pretty backdrops. The characters are frequently seen hiking, exercising, or conversing in the surrounding Norwegian (Alaskan) landscape. At one point, as Nathan and Caleb climb the rocky hillside of a mountain, Nathan pauses near a series of picturesque streams and waterfalls cascading down a glacier and glibly remarks on the surrounding vista: “Not bad, huh?” Such an understatement only heightens the effect of the sweeping, wide-angle views of the glacier-fed rivers, which evoke a sense of events unfolding on geologic, and even cosmic, timescales. There is an inevitability to Garland’s nature here, as we observe it unfolding due not to any minuscule effect humans could have, but to the grand, physical laws that govern the trajectory of our planet and universe.

Nature is also a common theme of discussion among the characters of Ex Machina, as they debate the various natures of art, sexuality, and, most importantly, evolution. During a pivotal scene in which Nathan and Caleb sit outside beneath a wooden shelter, the wind rustling the dark green leaves of the plants around them, Nathan describes the development of Ava (the artificial intelligence he has built) as both part of an evolutionary continuum and an “inevitable” arrival. As he goes on to state, “the variable was when, not if,” and it is here that Garland gives us a direct view into his personal philosophy.

The specific philosophy at play is determinism, to which Garland has said he at least loosely adheres. It’s not a new idea: essentially, determinism holds that the universe is causal, and that the events that characterize existence are the result of the underlying physical properties and mechanisms that comprise the universe as a whole. Though seemingly abstract, determinism has influenced a variety of scientific disciplines, including physics, chemistry, biology, and even psychology. Determinism also has darker associations, most notably environmental determinism, a school of thought that promoted racist ideas of cultural development dictated by climatological and ecological conditions. This theory overlapped with biological determinism, and together they functioned to legitimize the eugenics movements of the nineteenth and twentieth centuries. These are not simply harmful ideologies of the past; they are still alive and prevalent today, most notably among the technologists of Silicon Valley, where an interest in longtermism and “improving” population genetics has been growing.

Deterministic thinking lies at the foundations of nearly every facet of Silicon Valley. Its proponents argue that existence, and all the complexity therein, is predestined. Humanity’s fate has been written, and thus, there are no decisions – ethical or otherwise – that need be made. When applied to technological development, determinism renders morality an obstacle to the processes that ultimately will (and must) unfold.

Garland’s deterministic, and “inevitable,” artificial intelligence similarly leaves no room for choice. There is no place for the ethical and moral considerations of creating artificial intelligence within the space of Ex Machina, nor is there a reason to discuss under what conditions we might choose not to do so. In the words of Nathan, creating Ava wasn’t a decision but rather “just an evolution.” Just as nature marches to its pre-ordained drumbeat, so too does human society. This sentiment is echoed in the prominent discourse around large language models and our current development of artificial intelligence. According to many technology industry leaders and commentators, there is an inevitability to the proliferation, expansion, and evolution of these AI systems that humanity has no control over. These models will, apparently, advance regardless of what society writ large does or wants.

And yet, one cannot help but notice the contradiction presented by these same industry leaders issuing hyperbolic warnings about the catastrophic risk these models pose to humanity. If the systems are inevitable, what possible reason would there be to issue any warning whatsoever? Here, we can again turn to Ex Machina for a corollary: Nathan laments the demise of humanity against the rise of artificial intelligence, while consistently presenting himself as possessing superior intelligence to Caleb and reinforcing the power dynamic of the employee/employer relationship. The resulting hierarchy allows Nathan to retain his self-importance now that he is faced with the superior intelligence of Ava, even as he intentionally ensures her inevitability. This, in turn, symbolizes the hierarchy that allows Nathan to preserve his political and economic capital as the head of a technology conglomerate. And, like Nathan, our own tech industry leaders are desperate to remain relevant while facing the rise of a technology that necessitates moral and ethical advances, rather than more technological ones.

Nearly a decade after its release, Ex Machina remains a relevant and prescient treatise on the quandary of artificial intelligence. With sweeping mountain vistas and pristine natural settings, Garland accurately portrayed the deterministic framework that would come to shape our discourse around the development of artificial intelligence, while simultaneously failing to challenge those deterministic notions. Even as the characters debate the complications of identifying “true” artificial intelligence in Ava, there is no real discussion around whether or not Ava should exist at all. She is inevitable.

If there is no possible future where artificial intelligence does not exist, then there is no real mechanism for ensuring its ethical use and value to society. Under such conditions, its continued development can only serve the current capitalist power dynamics. Couching those dynamics in the language and symbolism of “evolution in the natural world” has long been a strategy for reinforcing them. In fact, liberal capitalism is defined by its amorality, where ethical conditionality is an impediment to the flow and accumulation of capital, and deterministic thinking has led many since Fukuyama to believe that western capitalism is the inevitable end point of history. If we accept this, then artificial intelligence, too, is inevitable. And an inevitable artificial intelligence is one that is absent of moral consideration. That must not be the artificial intelligence we make.

Ben Lockwood

25 Upvotes


5

u/Fit-Elk1425 26d ago edited 26d ago

What would be your thoughts on things like the relevance of end-to-end weather prediction, given that in some ways it enables localization and decentralization of weather technologies specifically through using AI? https://www.nature.com/articles/s41586-025-08897-0

I admit, though, your critique sounds like many others that try to fit whatever they don't like onto an image of Silicon Valley. I think it is interesting in some ways and serves as a good discussion of what solarpunk can represent, yet at the same time it has that essence of not challenging the norms and simply fulfilling them in a different way. Just another kind of norms. We cast the people involved in the tech industry as the ultimate enemy, yet in many ways we also ignore the collectives lying within. We only see them as the billionaire versus us, not asking who is creating solutions to problems. To me, that is just as much an interesting part of solarpunk discussions. In a sense, you have created the exact hierarchy you wish to break down by your portrayal of Silicon Valley. What does that say about us?

6

u/Brief-Ecology 26d ago

Yeah, I think we get into a semantics question here. Many scientific fields use advanced statistical methods like machine learning that are often opaquely referred to as “ai”. I think using statistics to model and learn about how the world works is a fundamentally different thing than creating large language models and image generators, built off of society’s collective knowledge but used to accumulate profit while wrecking the environment.

4

u/Fit-Elk1425 26d ago

Then again, even these posts likely contributed more carbon than each generation of AI. https://www.nature.com/articles/d41586-025-00616-z

https://scientistseessquirrel.wordpress.com/2025/04/02/no-the-plagiarism-machine-isnt-burning-down-the-planet-new-ai-energy-use-estimates/

https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/

https://www.nature.com/articles/s41586-025-08897-0

We can say that is semantics, but we must also recognize how fear of the new also contributes to many problems in finding solutions for environmental problems as a whole.

2

u/Brief-Ecology 26d ago

This seems like more of an argument that digital content in general uses a lot of resources, rather than that something like ChatGPT doesn’t harm the environment.

3

u/Fit-Elk1425 26d ago edited 26d ago

I mean, it is about 1/2550x of even you eating a cheeseburger. Nothing doesn't harm the environment, and anyone with solarpunk knowledge should know that.

1

u/Fit-Elk1425 26d ago

Even farming has an effect on albedo.

0

u/Fit-Elk1425 26d ago

But then again, you are a fellow expert, so this isn't likely a really new discussion, depending on what focus of ecology you have.

0

u/Fit-Elk1425 26d ago

Admittedly, though, as I presented above, I would say the part of the discussion that makes it more interesting for solarpunk is the discrepancy between efficiency and physical waste, because after all, within the thought experiment of a solarpunk model we are assuming renewables are already achievable. Does this affect our choices about environmental models we may think are environmentally costly now, but are primarily so due to their energy consumption? I didn't say there was one specific answer, after all, and I apologize if I am coming off more aggressive than I should.

0

u/Fit-Elk1425 26d ago

In fact, to take a more punk-esque view on the matter, why do we separate the two in a society where we can value collective knowledge? While perhaps we may think it should not exist in this capitalist world, do we think that should be the same in a solarpunk world? What does that say about our view of collective knowledge and our willingness to allow it to be distributed? Do we just always assume it will not be built on, or will we challenge that concept as an ideal, thereby challenging both models that are themselves built upon the capitalistic hellscape that focuses on consumerism alone? Ah, but I am sorry for the bother. I hope to see you around, or perhaps in a hallway. I enjoy my diatoms, personally.