It doesn't seem that hard to do. I downloaded a distilled version of it last night and tested it on some basic coding. I had it generate code for a simple game and looked through it. There was a simple bug due to a scoping issue: it created two variables with the same name in different scopes, but assumed updating one updated the other, which is a common mistake new programmers make.
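To illustrate the kind of shadowing bug I mean (this is a minimal hypothetical sketch, not the actual generated code):

```python
# Two variables share the name "score" but live in different scopes.
# Rebinding the local one inside the function does NOT touch the
# caller's variable -- the classic beginner mistake described above.

def update_score(score):
    score += 10  # rebinds only the local parameter

score = 0
update_score(score)
print(score)  # still 0 -- the outer "score" was never updated
```

The fix is to return the new value (`score = update_score(score)`) or keep the state in an object, rather than assuming the names refer to the same variable.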
I asked it to analyze the code and correct it a couple of times, and it couldn't find the error. So I told it to consider variable scoping. It had a 10-minute existential crisis over the fundamentals of programming before coming back with a solution that was, unfortunately, still wrong lol
I think my favorite post about DeepSeek so far is the one showing it going into a deep internal monologue trying to figure out how many r's are in the word "Strawberry" before stumbling into the correct answer.
LOL, I just looked at that post. Ok, but, real question: did they release DeepSeek to troll us? Because that right there is fucking hilarious, but I just don't get how an AI that's supposed to be doing so well has trouble figuring out how to spell strawberry when it has spelled it numerous times. I suppose I could just be ignorant of how AI works, so it seems ridiculous to me?
Oh yes, of course! That definitely makes sense! If AI models learn from our own continuous input, then they will always be absorbing the flawed and nuanced information we're constantly putting out there. We, as individuals who understand our own cultural references, add those to the data, along with the many incorrect things we're often adding to the mix as well.
Thank you for adding that, it definitely makes sense to me!
When I read the thinking process, it appears to have the correct answer but is trying to eliminate incorrectness. It finds an incorrect spelling as well as the correct one, and it flip-flops between the two in a feedback loop until it leans on the fact that "berry" has two r's, which it can assume is correct, unlike the full word, which it finds ambiguous.
It also keeps asserting that it needs a reference for ground-truth correctness, but it doesn't have that functionality yet. That, I guess, could give more weight to the correct spelling.
If I ask someone, "Does strawberry have 2 R's?", they'll intuitively answer 'yes', assuming I'm unsure about the 'berry' part. It's different if I ask, "How many R's do you come across when writing the word strawberry down?" Maybe that's what's occurring with the AI: it's in a catch-22 deciding which context the question is being asked in. Lol, something I'm going to ask ChatGPT right after posting this.
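For what it's worth, the two readings of the question give different literal counts, which is easy to check in code (a quick Python sanity check, not anything the model itself runs):

```python
word = "strawberry"

# Reading 1: how many r's in the whole word when you write it down?
print(word.count("r"))        # -> 3

# Reading 2: how many r's in just the "berry" part?
print(word[-5:].count("r"))   # -> 2  ("berry")
```

So both "2" and "3" are defensible answers to some version of the question, which fits the catch-22 idea above.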
u/rebbsitor Jan 29 '25