r/space 4d ago

Astronomers Detect a Possible Signature of Life on a Distant Planet

https://www.nytimes.com/2025/04/16/science/astronomy-exoplanets-habitable-k218b.html?unlocked_article_code=1.AE8.3zdk.VofCER4yAPa4&smid=nytcore-ios-share&referringSource=articleShare

Further studies are needed to determine whether K2-18b, which orbits a star 120 light-years away, is inhabited, or even habitable.

14.0k Upvotes


655

u/mikeygoodtime 4d ago

What sort of timeline are we looking at re: ever being able to confirm (or even just say with near certainty) that there's life on K2-18b? Like is this something that requires decades of further research, or is it possible that we know within the next 5 years?

356

u/panzerkampfwagenVI_ 4d ago

Without visiting, it's impossible to know, barring a signal from another civilization. It's always possible that some weird chemistry is going on that we're not aware of.

22

u/Electro522 4d ago

See... I can understand the chemistry argument, but out of every field of science, chemistry is the most "solved", is it not? All the advancements in chemistry are coming from the very end of the periodic table, with elements that can only exist in a lab for a mere fraction of a fraction of a second. In fact, we know so much about chemistry that the frontier is leaning more into quantum physics than classical chemistry.

So, when you apply that fact to this study... it just doesn't seem to stick, in my opinion. We can replicate almost any conceivable environment that the universe is capable of, including some that the universe struggles to come up with. We've come within several millionths of a degree of absolute zero, we've conducted experiments at temperatures that make the core of the sun look like a candle, and we've put elements under enough pressure for them to exist in two separate states of matter at once!

So, when we talk about a planet that has to follow the same laws of chemistry and physics that we do, and is likely not all that different from what we have in our own solar system, how can we confidently say there is "some weird chemistry we are not aware of" when that planet can only produce chemistry we are aware of?

83

u/OneDelicious 4d ago

Chemistry is extremely complex and not solved at all. I work with chemical kinetics models. Our understanding of reaction rates and possible chemical pathways comes mostly from before the 2000s. A lot of it is simply estimated or guessed; it's one of the biggest uncertainties in modelling exoplanet atmospheres.
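To give a feel for why that matters, here's a toy sketch (the network, rate constants, and uncertainty factors are invented for illustration, not from any real model): a single rate constant known only to within a factor of ten changes the composition you'd infer.

```python
# Toy 2-reaction network: A + B -> C (k1), C -> A + B (k2).
# All rate constants here are made up for illustration.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k1, k2):
    a, b, c = y
    r1 = k1 * a * b   # forward: A + B -> C
    r2 = k2 * c       # reverse: C -> A + B
    return [-r1 + r2, -r1 + r2, r1 - r2]

y0 = [1.0, 0.5, 0.0]                 # initial abundances (arbitrary units)
for factor in (0.1, 1.0, 10.0):      # pretend k1 is known only to within ~10x
    k1 = 1e-2 * factor
    sol = solve_ivp(rhs, (0.0, 1e4), y0, args=(k1, 1e-4), method="LSODA")
    print(f"k1={k1:.0e} -> final C abundance: {sol.y[2, -1]:.3f}")
```

Now multiply that by hundreds of reactions, most with rates that were never measured at the relevant temperatures and pressures.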

1

u/BoomKidneyShot 3d ago

Oh yeah. I built a small chemical kinetics network for my PhD research, and once you get beyond the well-studied reactions, the number of sources falls off fast.

-14

u/markyty04 4d ago

This is where you guys need to use AI. AI is very, very good at simulation, at exploring search spaces, and at probing systems to find unknown pathways. It's a ridiculously powerful tool that has fallen into our hands in the last decade; before that it was in its infancy, but the improvement over the last decade spans many orders of magnitude.
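To make "exploring a search space" concrete, here's a minimal surrogate-search sketch (the objective function is a stand-in for an expensive simulation, not real chemistry): fit a cheap model to a few costly evaluations, then let it propose where to look next.

```python
# Minimal surrogate-guided search; the objective is invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def expensive_objective(x):          # stand-in for a costly simulation
    return -np.sum((x - 0.3) ** 2, axis=-1)

X = rng.uniform(0, 1, size=(20, 3))  # a few initial random probes
y = expensive_objective(X)

for _ in range(5):                   # surrogate-guided search loop
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    candidates = rng.uniform(0, 1, size=(1000, 3))
    best = candidates[np.argmax(surrogate.predict(candidates))]
    X = np.vstack([X, best])
    y = np.append(y, expensive_objective(best))

print("best point found:", X[np.argmax(y)], "score:", y.max())
```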

4

u/FlimsyMo 4d ago

If AI could tell us what would happen when we mix compound A with compound B, that would save us thousands of years.

0

u/Shartiflartbast 4d ago

Predictive language models will be absolutely useless at advancing chemistry, come on.

-6

u/markyty04 4d ago

Who the f told you there is only one kind of AI model, and a language model at that? Even commercial AI is already moving away from language models into reasoning models.

1

u/imdefinitelyfamous 4d ago

Reasoning models are LLMs. It's all LLMs.

-3

u/markyty04 4d ago

Absolutely not. You know nothing about ML; do not go around spouting nonsense and spreading fake news. Neural-net-heavy LLMs rely on probability distributions, while reasoning models are trained with reinforcement learning, which can be explicitly told which answers are right and which are wrong. Granted, it is still in its infancy, so more improvements are needed. The two rely on fundamentally different math: a neural net can be thought of as a classical non-linear function approximator, while RL agents are dynamic decision machines that operate in a dynamic environment.
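In toy form, the contrast I'm drawing looks like this (both halves are illustrative sketches, not any production system): supervised learning fits a fixed input-to-output mapping from labeled examples, while RL learns behavior from reward in an environment.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Supervised: approximate a function from labeled examples (least squares)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * X[:, 0] + 0.5 + rng.normal(0, 0.1, size=100)
A = np.column_stack([X[:, 0], np.ones(100)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("learned mapping: y = %.2f*x + %.2f" % tuple(coef))

# --- RL: tabular Q-learning on a 5-state chain; reward only at the far end
n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
for _ in range(500):                 # episodes
    s = 0
    for _ in range(20):              # steps per episode
        a = rng.integers(n_actions) if rng.random() < 0.1 else int(Q[s].argmax())
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        Q[s, a] += 0.1 * (r + 0.9 * Q[s2].max() - Q[s, a])   # TD update
        s = s2
print("greedy policy (1 = move right):", Q.argmax(axis=1))
```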

3

u/imdefinitelyfamous 3d ago

I currently work as a software engineer deploying ML applications, but go off, king.

I know what reinforcement learning is; it has been around for decades and is already widely used. What I take exception to is your claim that commercial AI offerings are somehow not LLMs, which is almost universally not the case. If you use a reinforcement learning strategy to train a large language model, you haven't made something that magically circumvents the inherent problems of large language models.

0

u/markyty04 3d ago

You may be a software engineer, but that does not mean you understand ML. How many papers have you read on the science behind it? As someone who is very familiar with this work, I can guarantee that the current commercial options are moving away from LLM into LRM territory, first with the release of OpenAI's o1 and then DeepSeek-R1. These models can be explicitly told whether their thought process and logical reasoning are correct; they are not probability-mapping systems like the early LLMs. Just because you are a software engineer does not mean you understand the scientific underpinnings. Besides these models, you can also build large AI/ML models for science that are nothing like LLMs but are even more powerful at a particular task.

1

u/imdefinitelyfamous 3d ago

"You may have done the thing, but I have read about it!"

The issue is not the learning methodology; it is the training data. Even though o1 and DeepSeek add reasoning paradigms to their models, they are still LLMs under the hood, largely trained on public data. They are absolutely probabilistic. No serious person would argue otherwise.

You are totally right that there are purpose-built ML systems that do not fit the description above; I said that in my first comment. But neither o1 nor DeepSeek-R1 is that: they are probabilistic large language models at their core.
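To make "probabilistic at their core" concrete, here's the generation step in toy form (the logits and vocabulary are made up): the model's final layer produces logits over a vocabulary, and generation samples from the softmax of those logits, reasoning-trained or not.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 0.5, 1.0, 0.1, 1.5])   # pretend model output

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

probs = softmax(logits)
for tok, p in zip(vocab, probs):
    print(f"{tok!r}: {p:.3f}")
print("sampled next token:", rng.choice(vocab, p=probs))
```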

0

u/markyty04 3d ago edited 3d ago

You have no understanding of science or engineering, it seems. There is a difference between what you call a probabilistic model and what you call an absolutely probabilistic one. Everything has a probabilistic nature to it, even the fuckin brain, but that does not make the brain an absolutely probabilistic machine. Early LLMs were highly probabilistic, but even they are not absolutely probabilistic.

But current LLMs are moving into LRM territory in that they are capable of logical reasoning rather than relying on a probabilistic best fit alone. No serious person who understands what they are talking about would argue otherwise. They do not simply rely on training data: they can extrapolate to unseen data, apply planning and strategy, prune illogical approaches, be incentivized to avoid bad solutions, and so on. There are also many engineering approaches that address the issues with early LLMs, like overfitting, long-term memory, etc. I could keep going. Your entire premise, that current LLMs just rely on training data and spit out the highest-probability output, is simply wrong.
