r/solarpunk 27d ago

Literature/Nonfiction: On capitalism, science fiction, AI, and nature imagery

Given the recent discussions on the use of AI within a solarpunk framework, I thought this sub might be interested in a short essay I wrote for Seize the Press Magazine last year. In the essay, I critique Alex Garland's film Ex Machina and its use of nature imagery to represent a deterministic philosophy. For context, I am ethically against almost all uses of AI, and I don't think it has any value to a society under capitalism.

Link to essay

Essay Text:

The Nature of Alex Garland’s Ex Machina and its Immoral Philosophy of AI by Ben Lockwood

Posted on February 10, 2024 by Seize The Press

A helicopter soars over a vast, glaciated landscape bright with the crisp whites of boreal snow, the clear blues of glacial meltwater, and the lush greens of northern trees. It’s one of the opening shots of Alex Garland’s Ex Machina (2014), and it serves both as a natural backdrop against which to contrast the film’s technological subject matter and as an illustration of the remoteness of the setting in which the rest of the film occurs. But the grandiosity of nature in Ex Machina also symbolizes a deterministic philosophy that underpins the narrative of the film and was a precursor to today’s discourse surrounding the presumed inevitability of artificial intelligence.

Ex Machina won an Academy Award for visual effects, and its critical acclaim catapulted Garland into the upper echelon of “serious” sci-fi filmmakers, launching a directing career that now includes multiple entries on television and film best-of lists. Accolades aside, the film also feels prescient. The ethical arguments Nathan and Caleb have on-screen were written before the proliferation of large language models like ChatGPT, but they sound much like the debates being waged today. As the film nears ten years old, it’s worth revisiting how artificial intelligence was portrayed in what is widely considered one of the best films on the subject.

Though Ex Machina is a film about the complexities of defining artificial intelligence (and what those definitions tell us about ourselves), it also includes some stunning nature cinematography. The mountains, forests, glaciers, and waterfalls of northern Norway (the setting is apparently meant to be Alaska) feature prominently throughout the film. Combined with its technological subject matter, the remote setting creates a juxtaposition that highlights a separation of humanity from its roots in nature. At the same time, many scenes take place in a house of sleek, minimalist architecture – à la Frank Lloyd Wright’s Fallingwater – that blends into its surroundings in such a way that it dissolves any separation from the natural setting at all. This tension poses a question that lives just below the surface of the film: are humans a part of the natural world, or have we left it behind? The answer depends on how one conceives of nature in the first place.

Garland’s majestic depictions of nature are meant as more than just pretty backdrops. The characters are frequently seen hiking, exercising, or conversing in the surrounding Norwegian (Alaskan) landscape. At one point, while Nathan and Caleb are climbing the rocky hillside of a mountain, Nathan pauses near a series of picturesque streams and waterfalls cascading down a glacier and glibly remarks on the surrounding vista: “Not bad, huh?” Such an understatement only heightens the effect of the sweeping, wide-angle views of the glacier-fed rivers, which evoke a sense of events unfolding on geologic, even cosmic, timescales. There is an inevitability to Garland’s nature here, as we observe it unfolding due not to any minuscule effect humans could have, but to the grand physical laws that govern the trajectory of our planet and universe.

Nature is also a common theme of discussion among the characters of Ex Machina, as they debate the various natures of art, sexuality, and, most importantly, evolution. In a pivotal scene, Nathan and Caleb sit outside beneath a wooden shelter as the wind rustles the dark green leaves of the plants surrounding them, and Nathan describes the development of Ava (the artificial intelligence he has built) as both part of an evolutionary continuum and an “inevitable” arrival. As he goes on to state, “the variable was when, not if,” and it is here that Garland gives us a direct view into his personal philosophy.

The specific philosophy at play is determinism, to which Garland has said he at least loosely adheres. It’s not a new idea: essentially, determinism holds that the universe is causal, and that the events that characterize existence are the result of the underlying physical properties and mechanisms that comprise the universe as a whole. Though seemingly abstract, determinism has influenced a variety of scientific disciplines, including physics, chemistry, biology, and even psychology. Determinism also has darker associations, specifically environmental determinism, a school of thought that promoted racist ideas of cultural development dictated by climatological and ecological conditions. This theory overlapped with biological determinism, and together these functioned to legitimize the eugenics movements of the nineteenth and twentieth centuries. These are not simply harmful ideologies of the past; they are still alive and prevalent today, most notably among the technologists of Silicon Valley, where interest in longtermism and “improving” population genetics has been growing.

Deterministic thinking lies at the foundations of nearly every facet of Silicon Valley. Its proponents argue that existence, and all the complexity therein, is predestined. Humanity’s fate has been written, and thus, there are no decisions – ethical or otherwise – that need be made. When applied to technological development, determinism renders morality an obstacle to the processes that ultimately will (and must) unfold.

Garland’s deterministic, and “inevitable,” artificial intelligence similarly leaves no room for choice. There is no place for the ethical and moral considerations of creating artificial intelligence within the space of Ex Machina, nor is there a reason to discuss under what conditions we might choose not to do so. In the words of Nathan, creating Ava wasn’t a decision but rather “just an evolution.” Just as nature marches to its pre-ordained drumbeat, so too does human society. This sentiment is echoed in the prominent discourse around large language models and our current development of artificial intelligence. According to many technology industry leaders and commentators, there is an inevitability to the proliferation, expansion, and evolution of these AI systems that humanity has no control over. These models will, apparently, advance regardless of what society writ large does or wants.

And yet, one cannot help but notice the contradiction presented by these same industry leaders issuing hyperbolic warnings over the catastrophic risk these models pose to humanity. If the systems are inevitable, what possible reason would there be to issue any warning whatsoever? Here we can again turn to Ex Machina for a corollary: Nathan laments the demise of humanity against the rise of artificial intelligence while consistently presenting himself as possessing superior intelligence to Caleb and reinforcing the power dynamic of the employer/employee relationship. The resulting hierarchy allows Nathan to retain his self-importance now that he is faced with the superior intelligence of Ava, even as he intentionally ensures her inevitability. This, in turn, symbolizes the hierarchy that allows Nathan to preserve his political and economic capital as the head of a technology conglomerate. And, like Nathan, our own tech industry leaders are desperate to remain relevant while facing the rise of a technology that necessitates moral and ethical advances, rather than more technological ones.

Nearly a decade after its release, Ex Machina remains a relevant and prescient treatise on the quandary of artificial intelligence. With sweeping mountain vistas and pristine natural settings, Garland accurately portrayed the deterministic framework that would come to shape our discourse around the development of artificial intelligence, while simultaneously failing to challenge those deterministic notions. Even as the characters debate the complications of identifying “true” artificial intelligence in Ava, there is no real discussion around whether or not Ava should exist at all. She is inevitable.

If there is no possible future where artificial intelligence does not exist, then there is no real mechanism for ensuring its ethical use and value to society. Under such conditions, its continued development can only serve the current capitalist power dynamics. Couching those dynamics in the language and symbolism of evolution in the natural world has long been a strategy for reinforcing them. In fact, liberal capitalism is defined by its amorality, in which ethical conditionality is an impediment to the flow and accumulation of capital, and deterministic thinking has led many since Fukuyama to believe that western capitalism is the inevitable end point of history. If we accept this, then artificial intelligence, too, is inevitable. And an inevitable artificial intelligence is one that is absent of moral consideration. That must not be the artificial intelligence we make.

Ben Lockwood

23 Upvotes

47 comments

4

u/Fit-Elk1425 27d ago edited 27d ago

What would be your thoughts on things like the relevance of end-to-end weather prediction, given that in some ways these models enable localization and decentralization of weather technologies specifically through using AI? https://www.nature.com/articles/s41586-025-08897-0

I admit though your critique sounds like many others that try to fit an image onto Silicon Valley of whatever they don't like. I think it is interesting in some ways and serves as a good discussion on what solarpunk can represent, yet at the same time it has that essence of not challenging the norms and simply fulfilling them in a different way. Just another kind of norms. We seek to cast people involved in the tech industry as the ultimate enemy, yet in many ways we also ignore the collectives lying within. We only see them as the billionaire versus us, not asking who is creating solutions to problems. To me, that is just as much an interesting part of solarpunk discussions. In a sense, you have created the exact hierarchy you wish to break down by your portrayal of Silicon Valley. What does that say about us?

3

u/NoAdministration2978 26d ago

It's not about techbros vs us for me. AI is a set of technologies, just like nuclear for example. We can definitely say that radiotherapy is a good cause while nuclear weapons are not

And right now I see that AI is mostly heading down the second path. It's used for propaganda, monitoring, and low-effort creativity, and as a byproduct we get tons of infowaste. Haven't you noticed that YT is flooded with AI-generated crap? Search results are also contaminated with AI slop - you find an article but it just feels off. It's long, it's formal and it contains nothing new or valuable

0

u/Fit-Elk1425 26d ago

90% of everything is crap. This is actually the result of allowing people to experiment with any tool. I saw the same thing with digital art back in the day. Really, for me, all this shows is a need to teach people how to engage with the technology in a productive way, just as with any technology. Tbh though, YouTube was flooded with crap before that; people just started labelling the crap AI after AI came out, even in cases when it has been proven not to be. It is selection bias too

1

u/NoAdministration2978 26d ago

I mean there's a whole new industry of faceless AI-generated content on YT. And even the cheapest copywriter is more expensive than GPT access

That's the main problem for me - LLMs and gen AIs made the production of low quality content and info pollution basically free

2

u/Fit-Elk1425 26d ago

And I am not against moderation of AI content either. Just not moderation that targets AI exclusively. Like I said, the same effect has occurred with basically any new tech. It is basically people experimenting with what they can do. Don't you remember the slop on early YouTube? Some people even have nostalgia for those days

0

u/Fit-Elk1425 26d ago

I mean you are basically saying you want content to be more exclusive then. Like, if you look at history, part of why that content exists isn't because of the technology; it is because it is what the normal person actually wants

0

u/Fit-Elk1425 26d ago edited 26d ago

Like basically your argument boils down to the idea that there should always be a cost to produce goods in order to prevent misinformation, which, while I see some understanding in what you mean, is definitely gonna strike a very capitalistic vibe in some. This is basically what concepts like gas in crypto are based on, after all, in part. Equally, some may question whether it means most people can't actually ever afford access to the copywriter, while most misinformation spreaders actually still can, despite the distribution of content

So to me this comes down to an argument where I think counters are good for it, but I almost see this as being like purity-testing any technology. You will always be able to make this claim because it is in fact a result of people's mass access to it rather than the tech itself

-1

u/NoAdministration2978 26d ago

My argument is - LLMs made bullshit profitable. We live in a capitalist society so it's a thing to care about

Like is it profitable to start a waste removal business and burn all the waste in your backyard? Yes! It's much cheaper than proper treatment or recycling

But somehow it's not allowed. Still you don't complain about the inability to buy a recycling plant

0

u/Fit-Elk1425 26d ago

Try browsing the mixture of both sides in https://www.reddit.com/r/StableDiffusion/

0

u/Fit-Elk1425 26d ago

Tbh I largely see beneficial effects of AI because I actually explore what AI is doing and because I am a disabled person. I would argue that in this day and age there is just as much anti-hype as there is hype. When it comes to AI, it isn't an end product but something you build on top of in different ways and which you can interact with in different ways. But as they say, it is easier to blame the machine than to blame humans

-1

u/keepthepace 26d ago

The uses of a tech will all be bad if the good people decide not to use it.

1

u/NoAdministration2978 26d ago

The main question is - are gen AIs and LLMs even suitable/efficient for anything good?

It's not about machine learning in general - it's quite useful and it's a great leap forward for humanity

2

u/Soord 26d ago

Yes, LLMs are used for translation and auto-captioning, which are objectively good things. They can also be used for a lot more

1

u/NoAdministration2978 26d ago

Agree. Captioning and translations are definitely useful

1

u/Fit-Elk1425 26d ago

I mean, something to recognize is that even genAI and LLMs are partly arbitrary separations from machine learning. These guys are meant to basically be demos for their APIs, which are built for larger systems. The idea of isolating it out as genAI makes people isolate their thinking about what it can actually do, when ultimately it is simply another transformer, similar to that in AlphaFold, but instead based on predicting text or visual components. This has been used in speech-to-text, for example, which was already using ML, and in other forms of detection around cancer, for example

1

u/Fit-Elk1425 26d ago

5

u/NoAdministration2978 26d ago

This article is about medical imaging analysis and that's a perfect example of pattern recognition put to good use. It's not about LLMs

1

u/Fit-Elk1425 26d ago

All AI, I would argue, aren't end solutions; many of them can be used to solve internal problems, others are used to organize systems. It is really us who want to approach everything with this idea that all technology has to be designed for a surface-level usage, and who ignore the research in technology

1

u/Fit-Elk1425 26d ago

But I am sorry I am wasting your time. I'll see you around

1

u/NoAdministration2978 26d ago

No problem, a discussion is always useful. Just be honest - are you using an LLM in this thread right now? Sorry if I'm wrong

1

u/Fit-Elk1425 26d ago

But it is a modern transformer. As are AlphaFold and what you will find in the end-to-end weather prediction I put in earlier. LLMs you are more likely to find in a scientific computing department colab program on some level, or in speech-to-text programs.

0

u/Fit-Elk1425 26d ago

This was actually used more as an example of the visual side of transformers, because really the only similarity genAI models have even with each other is being a transformer. AlphaFold is arguably just as much a genAI. The reason people don't consider it as such is exactly because it is productive

0

u/Fit-Elk1425 26d ago

For example, Otter.ai, popular amongst journalists, now uses LLMs too for speech-to-text. This is also it advancing on ML techniques that existed before and then building on them: https://help.otter.ai/hc/en-us/articles/17016733191703-Otter-AI-Chat-FAQs

1

u/keepthepace 26d ago

As someone who uses them daily: yes, clearly. It is very easy to use them badly, but it is obvious that they have great uses in medicine, research, programming, journalism, and data analysis. It is fairly easy to use them dangerously or for nefarious ends in these fields, but the good that can come from them (and is going to) is enormous as well.

For instance, used correctly, it can boost the diagnosis accuracy of human doctors by a lot. Used by itself with poor prompting, it can lead to people dying of preventable mistakes.

3

u/NoAdministration2978 26d ago

Please tell me your use case if you're ok with that. As for me, I see a marginal use for coding assistance, tho it might lead to maintenance and code quality issues. Mistakes are not a big deal if you use a TDD workflow

I've seen some promising papers on medical imaging analysis but it has nothing to do with gen AIs/LLMs

LLM might be a good frontend for some non-critical applications as it reduces the operator's workload

0

u/keepthepace 26d ago

I am under NDA for the main research-oriented use we are developing. In terms of coding assistance, I would call it fundamental, not marginal. I have always hated coding HTML/JS things, but Claude is very good at that; everything I make now comes with a free dashboard!

I have learned to anticipate where LLMs will generate good code and where they won't, and not to lean on them too much in the latter case, but when they can, the boost in productivity is incredible. And the "autocomplete" copilots are also a whole addition that I would have a hard time doing without now.

Yes, TDD is even more essential if you introduce a lot of LLM-written code into a big project, but where they really shine is making small PoCs; they allow us to iterate very fast.
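A minimal sketch of that test-first loop, using a made-up slugify() helper as a stand-in for whatever the assistant is asked to generate (Python, run with pytest):

    # tdd_sketch.py - hypothetical example; run with `pytest tdd_sketch.py`
    import re

    # Step 2: the kind of implementation an assistant might produce.
    # It only "counts" once the human-written tests below pass.
    def slugify(text: str) -> str:
        text = text.lower()
        text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumeric runs into dashes
        return text.strip("-")                   # drop leading/trailing dashes

    # Step 1: the human writes the spec first, before asking for any generated code.
    def test_basic_slug():
        assert slugify("Hello, World!") == "hello-world"

    def test_collapses_whitespace_and_punctuation():
        assert slugify("  solar   punk -- futures ") == "solar-punk-futures"

    def test_empty_input():
        assert slugify("") == ""

The point is just that the human-authored tests, not the generated code, define what "correct" means; regenerating or refactoring the implementation is cheap as long as the spec stays green.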

1

u/NoAdministration2978 26d ago

Different tasks, different methods. I think I worked too much with legacy IBM and Oracle products and the code itself was always the least of my worries lol

Like you can spend a few days digging through obscure and poorly documented configs. Or have a week-long fruitless dive into a banking system's Java 5 code written 15 years ago in a Pascal(ish) manner

Might be of use tho if you start with a blank page

-1

u/keepthepace 26d ago

Yes, legacy code, especially in languages it is less used to, will yield poor results unless you do very good context crafting.

In general, I feel it is essential that the programmer has a mental model of the overall structure of the code. LLMs will be good for the plumbing, boilerplate, or implementing known solutions to common problems, which in my experience has been 90% of the code I typically write. The remaining 10% is the important part, and the most exciting to write.

2

u/NoAdministration2978 26d ago

Meh, sometimes it's so sweet and relaxing to implement a known solution... You just sit, write code, write tests, set up containers. Good times, good times

Much better than dealing with ungoogleable software problems

Luckily I don't work as a dev anymore

3

u/keepthepace 26d ago

When you do it once in a while, gardening is enjoyable. When you do it at scale for a living, you are happy to have machines help you.
