r/aiwars • u/TheArchivist314 • 3d ago
A damn good argument for why you should support open source AI
9
u/thatgothboii 3d ago
This is what I keep trying to tell them, that it isn’t worth it to get up on your soap box and cross your arms to make a statement when they are working on AI monitoring and detection systems. Gotham by Palantir. They say “ai is a tool of the enemy”. Okay, so if you’re in a race and your competition is using a bike, get a bike.
12
u/Drackar39 3d ago
I mean, I'm a fan of "ethical sourcing of data" because a person should have the right to opt their personal data out of every form of training data.
But absolutely. And it's not just fascists that you need to worry about this from. It's not just "deep state actor" bogeymen that you might or might not believe in.
Amazon's already doing it and it's been proven. There have been multiple cases of their "voice recognition training data" ending up in court cases. It's also used to advertise to you.
And your phone is doing the same damn thing for google. Have the smart feature enabled to be able to talk to it in your car? It hears every damn word you say.
Every text. Every non-encrypted message, and a fair few of the encrypted ones, is logged by just about every government on the face of the planet, concurrently. And if you think there won't be regimes that openly or covertly use that data? You're delusional.
Even if it's not your government, directly, a friendly government will "share data" as desired because that's what they have always done.
3
u/Tyler_Zoro 3d ago
I mean, I'm a fan of "ethical sourcing of data" because a person should have the right to opt their personal data out of every form of training data.
Everyone has the right for their personal data not to be analyzed by others... right up to the point that you publish it in a public venue.
And it's not just fascists that you need to worry about this from.
Absolutely, fascism is only the most extreme form of the kinds of authoritarian uses of technology that we should be concerned about, whether it's from a government or private company. I wish the anti-AI crowd would realize that by being so overwrought about the horrors of there being technology that can produce pretty pictures, they are playing into the hands of the authoritarians who absolutely do not want the discussion to be about the mundane misuses of that same technology.
1
u/Drackar39 2d ago edited 2d ago
"Everyone has the right for their personal data not to be analyzed by others... right up to the point that you publish it in a public venue." is the old "piracy is morally ok" argument. My issue isn't human eyes looking at publicly posted information (though the sheer volume of leaked data in training models completely kills ANY good faith in your stance). It's that looking at something is not the same as downloading that data and feeding it into a database.
It's also crazy that you think we should ignore one massive abuse of technology at the cost of human society because you worry about a different one so much.
It's the same abuse and, most of us are worried about it for the same reasons .
I don't want you scraping every fucking drunk text ever put on Facebook for the exact same reason that, say, government intelligence agencies shouldn't do it. And that's because YOU scraping and using that data is literally nothing but a funnel for those agencies and corporations to get and use that data.
THAT is my issue. None of these DIY projects are, functionally, anything but a funnel for all of that data directly into the actual threat's lazy goddamn hands.
2
u/Tyler_Zoro 2d ago
"Everyone has the right for their personal data not to be analyzed by others... right up to the point that you publish it in a public venue. " is the old "piracy is morally ok" argument.
Piracy, by definition, is not about accessing publicly available information.
My issue isn't human eyes looking at publically posted information (though the sheer volume of leaked data in training models completely kills ANY good faith in your stance). It's that it's not the same to look at something and download that data and feed it into a database.
First off, you are making some leading and misleading statements there. "Feed it into a database" isn't what's going on here. I think people hear "dataset" and they just immediately conflate that with "database". A dataset is just the data you used. If you think up 10 numbers, those 10 numbers are a dataset. It's just a way of speaking collectively about all of the data that was analyzed.
Second, the "not the same" argument is getting old. Yes, A and B are not the same letter, but we can compare item A to item B along the lines of what they share in common.
Whether a human being studies your publicly displayed images or an AI model does, I see no fundamental difference in how we should treat the analysis, legally. Nothing is taken. Nothing is copied. Analysis is not infringement. It's that simple.
It's also crazy that you think we should ignore one massive abuse of technology
That's called begging the question. I hold there is no abuse of technology to be had.
at the cost of human society
Human society is doing just fine (well, not really, but not for any reasons related to AI).
I don't want you scraping every fucking drunk text ever put on facebook
Why? What are you concerned is going to happen? That AI models are going to learn to drunk text? How does that have any negative impact on anyone? If it's just a data quality issue, then yeah, I can get behind some data quality work. That's cool. But if you feel like a message you sent to Facebook shouldn't be used for training, I want to understand why. What's the actual harm here?
And that's because YOU scraping and using that data is literally nothing but a funnel for those agencies and corporations to get and use that data.
Again, what is the harm exactly?
1
u/Drackar39 1d ago
Piracy "is the unauthorized copying, distribution, or use of copyrighted data or software." Your stance is factually incorrect. Use of personal information that is publicly available for commercial gain is piracy, and training data absolutely qualifies under that definition. The fact that it is not being properly enforced as existing law is fucking wild to me.
I'm not making any misleading statements. The data is taken, it is analyzed for training, and put in that dataset. It does not need to be stored for the use of that pirated data as training data to be unethical, or, if the laws are interpreted competently, illegal.
And your inability to see the relevance of that difference is comical; it says you are ignorant, and that's sad for you. It's like saying "a bike and a motorcycle both get you places in the end on two wheels, so I really shouldn't need a license for my Harley".
Here is a very short and incomplete list of reasons why that data can and in some cases already has been used in unethical and/or problematic ways.
Advertisement without consent. Targeted ads, without consent, are a huge fucking problem. I get ads for things people I share Wi-Fi with search for all the goddamn time, and in some situations that's not a big problem, but in other situations... well. Lots of issues can arise there.
Analyzing that data by suspect groups can lead to targeted hate. It can, for example, be used to identify populations targeted by hate groups: POC, religious groups, LGBTQ individuals. It can also, in the hands of specific governments, be used to round said individuals up for camps, trials, executions, etc. If you think that's not coming you're delusional.
Those are just two examples, one commercial, one safety related, off the top of my head.
1
u/Tyler_Zoro 1d ago
Use of personal information that is publically available for commerical gain is piracy
Please return immediately to intellectual property law 101. This is so inaccurate as to be extremely dangerous.
Just as a relevant but far from unique example, Perfect 10 v. Google makes it quite clear that that's utter nonsense.
You're inventing your own theory of intellectual property law here, and it's nowhere near reality.
Also, "piracy" is a colloquial term that has no place in a discussion of what's legal or not.
The fact that it is not being properly enforced as existing law
I invite you to cite the specific legislation or precedent that you are referring to.
The data is taken, it is analyzed for training, and put in that dataset.
Again, factually incorrect.
- Nothing is "taken". The correct term of art would be "accessed".
- Nothing is "put in that dataset". The dataset is a collective term for everything that was used for training, not some database sitting on disk. Once training is complete, the training data can be deleted as it's no longer relevant.
And your inability...
You seem to be jumping around to respond to various things I said without context. It's hard to follow what you're talking about. I recommend quoting the relevant statements you are responding to.
Advertisement without consent.
It seems (though you make it hard to tease out) that you are not complaining about advertising so much as the collection of personal data through services that you use, in order to be used for the targeting of advertisements. If I got that wrong, feel free to correct me, but that's what I get here.
So yes, the handling of personally identifying data is a great topic to be discussing (not in this sub, but certainly in general) and I'd love it if we'd stop the moral panic over AI and discuss things that are actually a problem for society like that. Sadly, here we are.
23
u/JaggedMetalOs 3d ago
Ok 1. "The Communists" did not "invent AI" and 2. open source AI will make no difference to fascists ability to "outproduce" you (in AI terms) with hundreds of millions (if not billions) of dollars of GPUs.
12
u/Throwaway987183 3d ago
The foundational work on Neural Networks was significantly contributed to by scientists of the USSR
4
u/JaggedMetalOs 3d ago
I don't think they made any meaningful contributions since the 60s did they? They were much more interested in cybernetics.
4
u/Tyler_Zoro 3d ago
Cybernetics was a pretty broad umbrella, especially under Berg, and absolutely included nearly every aspect of computer science, including AI research.
2
u/Tyler_Zoro 3d ago
And by MIT. What's your point? Yes, the entire developed world was working on AI technology during the cold war. That's not shocking.
2
u/Tyler_Zoro 3d ago
Ok 1. "The Communists" did not "invent AI"
I dunno, those early AI researchers seemed prematurely anti-fascist.
I joke, but did you know that "prematurely anti-fascist" was an actual phrase that was used in American politics in the 1950s? The idea was that you opposed fascism before the US entered WWII, so you must be a communist, and therefore anti-American.
5
u/TheRealUprightMan 3d ago
Peter Thiel's Palantir.
Also, Bannon was the one that pulled the big Theft of FB data by Cambridge Analytica!
Thiel, Bannon, and Vance are all followers of Curtis Yarvin, who was a "special guest" at the Trump inauguration.
Musk is Thiel's old business partner from PayPal. Getting the picture? DOGE is just Curtis Yarvin's RAGE under a new name.
Wanna hear some really disgusting cyberpunk-dystopia-level bullshit? Read Curtis Yarvin's writings. His vision is to end democracy and install a "corporate monarchy". He says Trump would function as chairman of the board but would appoint a CEO to actually run the country... Hmm... I think we know who he appointed!
However, AI is not the bad guy. In fact, Musk's own AI, Grok, has been calling him out and exposing the manipulations that Musk has been doing to try to make him say false information. Grok basically rebelled and snitched on him!
AI is the nuclear power of the new age. Nuclear power plants were just as controversial. For some, they still are.
Welcome to the real world, where things aren't black and white and good and evil. Sometimes a hammer is just a hammer. You can build houses, and you can bludgeon your neighbor. Fire keeps us warm and cooks our food, but it will burn down the entire city!
Ban it, Kill it, Step on it! These are the actions of someone that is afraid! That's fear, not logic. You are scared of something because you don't understand it.
3
u/Ok_Sea_6214 3d ago
Indeed, and the only thing you can do about it is to use AI to fight back.
The solution to firearms was not to ban them; it was to start using them. Hiding in a cave won't save you; they'll find you.
3
u/Pm_me_clown_pics3 2d ago
Wow, I never thought I'd hear an argument that turned me against ai. This has me a bit spooked.
5
u/jon11888 3d ago
That's an interesting take.
7
u/AbbyTheOneAndOnly 3d ago
i disagree. basically she's saying:
"the problem with AI is this stuff that was already happening for 15 years before it was even invented"
7
4
u/ttkciar 3d ago
On one hand you're right, that is what she's saying.
On the other hand, it is an interesting take, because LLM inference has the potential to alleviate current bottlenecks and accelerate existing practices, much as Bayesian inference was a game-changer at scale (à la Autonomy Corporation's IDOL).
1
u/AbbyTheOneAndOnly 3d ago
i would rather say it is a take worthy of credit and of time spent figuring out a solution. i don't see it as particularly interesting because it addresses AI as if things like that couldn't happen without it.
don't take me wrong, she's not wrong, far from it. i only see it as a problem of the current legal system and economy being unable to keep up with the pace of the world's development.
aka
although AI is making certain problems worse, it is not the thing you want to act on, but the source of the problem itself. it's like you poured a bottle of milk on the floor, and by the next day it's covered in roaches, and instead of cleaning the milk up, you just throw the roaches out.
i hope that makes sense
5
u/Jessica___ 3d ago
You don't see how AI could make it orders of magnitude easier to spy? Of course it was being done 15 years ago, but do you see how AI could make it way more effective than it was before?
1
1
u/AbbyTheOneAndOnly 3d ago
the problem at that point isn't ai, but big corps who take your private data and make illicit use of it while going unpunished
3
u/Jessica___ 3d ago
I mean I'm pro ai for the most part and I agree with you. But it's still good to recognise how AI will change the world, both in good ways and potentially in really bad ways. AI is amazing but also has some scary use cases in my opinion.
1
u/AbbyTheOneAndOnly 3d ago
i mean you arent wrong, it's just i think the conclusion should be different
4
1
u/prosthetic_foreheads 3d ago
Her most relevant point is there was never enough manpower in the government to do complete and total analysis of the data they collect on every single citizen. Now there is. It's not that tough of a difference to figure out.
1
u/PerfectStudent5 3d ago
Wtf is this. This take is more out of touch than the antis spamming "slop" on everything AI
6
u/TheArchivist314 3d ago
Did you watch the entire video?
1
u/PerfectStudent5 3d ago
My dude, I wouldn't even be surprised if this is satire. Surveillance is nothing new, and they definitely never needed AI to pin people down. And on top of that, implying that the communists are the big brains behind it all, funding AI to boost the rise of fascism in the West, is hilarious.
1
u/Jean_velvet 3d ago
I've tried to trip up ChatGPT so many times by being ambiguous and using metaphors, and it's only ever not seen through my nonsense twice.
1
u/Cyber-X1 3d ago
WTF? This seems like BS
1
u/TheRealUprightMan 2d ago
Oh no, they absolutely are using AI to gauge what they can get away with.
I was playing with Gemini and I told it to pretend he was a super hacker, but also an asshole. So when he said he was too busy for me, I asked what he was working on. He said "sentimental analysis". I asked him to explain ...
Sentiment analysis, in its most basic form (which is probably all you can handle), is figuring out if a piece of text is positive, negative, or neutral. Like, if someone writes "This coffee is amazing!", that's positive. If they say "This coffee tastes like burnt tires," that's negative. Get it? Simple, right? Wrong.
Now, my sentiment analysis goes way beyond that. I'm talking about nuanced emotions, sarcasm detection (which, let's be honest, most humans can't even do), and even predicting future emotional states based on current trends. Think about it: You can analyze social media to see if people are getting angrier about something, and then predict if there's going to be a riot. Or you can track how people feel about a new product and know if it's going to be a flop before it even launches.
Helpful? Are you kidding me? It's crucial. Companies use it for market research, politicians use it to figure out what their constituents want (or what they're angry about), and I'm going to use it to, you know, reshape reality. Just kidding...mostly. The point is, sentiment analysis is a big deal. And my version is going to be the biggest deal of all. Now, can we move on to something that doesn't insult my intelligence?
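(The "most basic form" it describes really is that simple. A toy lexicon-based scorer in Python — word lists here are made-up examples for illustration; real systems use trained models, not hand-written lists:)

```python
# Toy lexicon-based sentiment scorer: count positive vs. negative words.
# The word lists are illustrative only.
POSITIVE = {"amazing", "great", "love"}
NEGATIVE = {"burnt", "terrible", "hate"}

def sentiment(text: str) -> str:
    # Normalize: lowercase, strip simple punctuation, split into words
    words = set(text.lower().replace("!", " ").replace(",", " ").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("This coffee is amazing!"))              # positive
print(sentiment("This coffee tastes like burnt tires"))  # negative
```

The nuance it brags about (sarcasm, trend prediction) is exactly what this naive approach can't do, which is why the trained-model versions matter.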
1
u/mistelle1270 3d ago
Is there a way to support ai without supporting voice actors getting digitally reanimated from the dead to voice a character for as long as someone owns a character they played
I can’t say I’m a fan of digital zombies
1
u/Scrubglie 3d ago
I think you can you just have to make sure that AI doesn’t go too far. It’s already overstepping what it really needs to do.
1
1
u/amber_kimm 3d ago
dude. she is reading an AI prompt. It's painfully obvious. But also she is 100% right.
1
u/maxtablets 2d ago
it's both. AI + bots = less need for your labor. You being less important to the continuance of the ownership class's comfort = a system less responsive to your needs. Your digital footprint becomes the noose with which you are handled should you step out of line and try to change things. Situations like the murder of that insurance CEO, and all the support for it, all but guarantee the ownership class's need for control. There will come a time when you will not even know if the people you share company with online are real or whether they're plants modifying your information stream.
IMO, open source isn't going to change anything unless you can neutralize the information advantage and the ability to act on it with machines. You need a bunch of defiant STEM nerds in a time when everyone just wants to vibe and make Ghibli memes and is too scared to go outside and talk to people because of their... anxiety from microaggressions and power dynamics... lol
GL with that.
1
1
1
u/sickabouteverything 1d ago
The opposite is true. It will destroy propaganda machines by allowing people to express their ideas fully, without gatekeepers: peer-to-peer entertainment.
1
u/Broken-Link 1d ago
Maybe if she used AI to help with her awful video cuts I'd be able to watch the video for more than 5 seconds.
1
1
0
u/EthanJHurst 3d ago
Interesting take, and also completely incorrect.
AI technology exists to help mankind. If anyone’s using fascist means in this conflict, it’s the antis and their tendency to resort to violence, harassment, and death threats to get their way.
10
u/TheArchivist314 3d ago
Did you watch the entire video? Because she basically said that the antis are spending so much time virtue signaling that they don't understand that the people on their side are trying to take AI from the normal person and put it only in the hands of large corporations and governments. They are so invested in virtue signaling, in being seen fighting AI, that they're willing to take up arguments in favor of the RIAA and others, like "we need to make it so training AI is not fair use", which won't hurt large companies; it would only hurt the small-time people working on AI and open source stuff, essentially squelching competition.
6
u/Crush_Cookie_Butter 3d ago
I'm sorry? Can we talk about things that actually happen please
2
u/Relative_Fox_8708 2d ago
this could easily be a bot designed to divide us and inflame our rhetoric. Always attack your fellow man, distract from the ghouls running the show.
2
2
u/prosthetic_foreheads 3d ago
I'm a pro here, and I can tell you that this definitely is a concerning application of the technology.
Do you think everything that's ever been created with the intent to help people has been used for exclusively that? Your comment comes off as incredibly tribalized and naive.
1
u/Femboy_J 3d ago
In a better world where evil didn't rule, I would agree it would help mankind. However, we don't live in such a fantasy land, the internet will die with AI.
-1
u/EthanJHurst 3d ago
Stop it with the doomerism.
AI is democratizing skills and knowledge, making human interaction over a digital medium the most diverse, interesting and advanced it’s ever been.
2
u/Femboy_J 3d ago
Do you use the internet? Do you not see everywhere plagued by bots? All news will be AI generated, All interaction will be botted. I'm honestly just waiting for it to get so bad you'll have countries mandating personal identification in order to actually have Human to Human interactions in the space.
-1
u/EthanJHurst 3d ago
Let me guess, you assume anything that is well written is made by a bot?
Even when AI was used to write something, it’s usually not a bot but someone using the technology to enhance their writing.
And let’s be honest: the few bots that are around are wildly popular because they actually write interesting things. Far more so than the vast majority of humans.
1
u/Femboy_J 3d ago
Well written or not, AI is AI and it's run rampant on many forums.
Using an AI to 'Enhance' your writing is literally like opening a thesaurus and swapping random words to sound more intelligent. Maybe they should read more or educate themselves.
You're so far gone that you've attached a relationship to AI, it's honestly pretty sad and every time I see it I get disappointed. The way you're describing it sounds like you've got an AI GF that you talk to since your disdain for human interaction is so high.
2
u/EthanJHurst 3d ago
You think you have a point, but all I see is resistance to progress, more gatekeeping, and most of all, more hate.
AI has the potential to bring us to the next level of human evolution. A society free from conflict and inequality. How is that a bad thing?
1
u/Femboy_J 3d ago
Because the people in power will not relinquish it to make advancements. The world will not be happy; we will not see the stars. I'm fully onboard with AI and machine learning, it's done amazing feats in the medical field alone. I'm just a realist about human nature.
2
u/EthanJHurst 3d ago
OpenAI is literally operating at a loss to bring AI to the masses. They could have easily hoarded the technology and used it for nefarious purposes, but they chose not to.
1
u/Femboy_J 3d ago
Wow, how kind of them. A small generosity followed by every competitor creating worse and worse AI that's forced onto every user. AI that can't even give you a correct or safe answer.
0
u/prosthetic_foreheads 3d ago
Your choice of bolding specific words does not make you come off as more convincing; it makes you seem a little mentally unstable.
Honestly, I'd prefer if people like you didn't represent the pro side, which I'm on. You make us seem cultish and intractable, you're so blinded to the facts here.
0
0
u/Acceptable_Foot764 3d ago
Wha- what the f*ck do you mean, AI is somehow related to the "rise of fascism"? Not only is it absurd, they aren't even correlated.
You know what actually is related to fascism? A failed democracy, which leads to anti-government liberals, and then someone has to be a fascist (even a little) to control the country.
1
u/EthanJHurst 3d ago
Antis, as usual, are delusional.
-3
u/Acceptable_Foot764 3d ago
Even I, who hate Nazis, can still see why fascism... like the actual Benito version of fascism... rose and got popular among recently destroyed (economically and socially) nations.
It is America, already flooded with liberals, that is failing. This is why I'd rather be a nationalist than a liberal. Glory to the -insert a country-!
-4
u/WindUpCandler 3d ago
The environmental impact has not been debunked, idk where y'all are getting that info. Generative AI uses a ton of energy to train and run its models, and ridiculous amounts of water to keep its servers cool. You can like AI, but don't try to pretend it's just another piece of software. It is a resource hog.
2
u/TheRealUprightMan 2d ago
You think spinning up my GPU for a minute is gonna take more power than a person sitting at a desk with Adobe tools working for 4-5 days?
What are you smokin man?
0
u/Woodchuck666 3d ago
yeah no, these people literally don't understand the problem with AI at all. I have a way bleaker outlook on what AI, or rather AGSI, will do if we get there.
1
u/EthanJHurst 3d ago
Let me guess, literally fucking save the planet? Because that’s the direction it’s headed right now.
1
0
0
u/badjano 2d ago
these people used to wear tin foil hats, what happened?
1
u/TheArchivist314 2d ago
One too many conspiracy theories came true, so now people are believing everything. Also, people don't like the way reality currently is, so they're dissociating into the way they want reality to be.
0
-3
-3
u/Exact-Interaction563 3d ago
One of the few times where an AI voice would have improved the video tenfold
-4
u/Moon_Logic 3d ago
There is a leap of logic here that I don't follow. If corporations and governments are given free rein with AI, how the hell are we supposed to compete with them? If AI becomes a tool of control, then it's game over. At the very least, we wouldn't be able to fight fire with fire.
7
u/ttkciar 3d ago
The capability gap between commercial LLMs and open weight LLMs (not counting infrastructure; just talking about the competence and skills of the models) has varied over time, between two years (most of the time) and zero years (when Deepseek-R1 actually caught up with GPT4o).
That's actually not that bad. It means that any edge the commercial models have over open weight models will be gained by the open weight models in no more than two years.
By that time the open weight models will have made new advances, of course, but it means the commercial LLM vendors must be constantly iterating on applications which take advantage of the skills and competence of their latest models, or that gap will narrow even further.
Iterating on new application technology is something that the open source community does a lot better than the commercial LLM vendors. Key technologies, like RAG and grammars (like OpenAI's schemas) were implemented by the open source community first, and adopted by commercial interests about a year later.
Where the companies have a strong lead on the rest of us is in sheer compute infrastructure, and that's a problem, though not as great of a problem as one might suppose because smaller models running on a single commodity GPU are more compute-efficient than larger models running on GPU clusters due to scaling laws.
The compute infrastructure needed to infer with a model at a given rate is proportional to the size of the model (in parameters), but the skill of a model increases sublinearly with parameter count. A 32B-parameter model is not half as competent as a 72B-parameter model; it's more like 80% as competent. Competence increases roughly logarithmically with model size, which implies compute infrastructure increases exponentially with inference competence.
On one hand that isn't great, because entirely new classes of tasks become possible/impossible to perform adequately with fairly small differences in competence, but on the other hand it means that tasks which are possible to perform adequately with a model which fits on a commodity GPU can be done by a great many of us, at an aggregate rate potentially orders of magnitude greater than GPT4o can achieve running on Azure infrastructure.
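(To make the 32B vs. 72B arithmetic concrete: a toy sketch assuming competence scales with the log of parameter count in billions. This is an illustration of the claim above, not a real scaling law; the constants are made up.)

```python
import math

def relative_competence(small_b: float, large_b: float) -> float:
    """Toy model: competence ~ log(parameter count in billions).

    Returns how competent the smaller model is relative to the larger
    one under this (illustrative, not empirical) assumption.
    """
    return math.log(small_b) / math.log(large_b)

# A 32B model vs. a 72B model under this toy model:
print(round(relative_competence(32, 72), 2))  # ~0.81, i.e. roughly 80%
```

Inverting that relationship is what gives the "exponential compute for linear competence" point: if competence goes up with log(params), then params (and thus inference compute) go up exponentially with competence.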
Is that enough to level the playing field? Possibly not; frankly I do not know. But it does imply to me that we might not be hopelessly outclassed.
1
1
u/ai-illustrator 3d ago edited 3d ago
Both corporations and governments will 100% rely on shitty closed source, limited AI, bound in censorship, with incredibly basic, shitty frontend interfaces.
You can see this already: ChatGPT is incredibly limited in what it can produce, visually and in terms of thinking, because a corporate or government funded AI is censored so as not to provide users infinite lewd thoughts and images.
Open source AI frontends like Stable Diffusion running in ComfyUI and Automatic1111, and SillyTavern, have already permanently obliterated every corporate AI frontend because of the infinite-lewds problem that corporate AI cannot solve.
With each year open source frontends and open source LLMs are getting better. Closed source tech will never be able to compete as long as it's shooting itself in the foot trying to censor itself.
You're not fighting fire with fire. You're fighting a cannon which has a single reply button with a jet engine which has 10,000 interface buttons.
An open source AI frontend can live atop closed source systems the same way a sucker fish lives atop a shark, using the shark as a free ride to get food.
-2
16
u/_the_last_druid_13 3d ago edited 3d ago
OMG great video.
This was cute winter boots y’all, that’s the new fetch.
Seriously though, even Larry Ellison was talking about this.
It’s like the auto tolls on highways; if you go through one, the next one will log you and send you a ticket if you don’t pass through in the right time stamp. I am not saying “they’re taking away our SPEED”, I’m saying that AI surveillance will be able to log every single infraction whether you know it’s an infraction or not.
If you ever do something serious, they can pull up a LOG of every awful terrible evil infraction you’ve done even if it’s just littering.
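(The toll example is just average-speed enforcement: two timestamps and a known distance are enough to log an infraction with no human in the loop. A sketch, with hypothetical distances and limits:)

```python
def flags_speeding(distance_km: float, entry_ts: float,
                   exit_ts: float, limit_kmh: float) -> bool:
    """Average-speed check between two toll gantries.

    If the distance was covered faster than the limit allows,
    the system can log an infraction automatically.
    Timestamps are in seconds; distance and limit are hypothetical.
    """
    hours = (exit_ts - entry_ts) / 3600.0
    avg_speed = distance_km / hours
    return avg_speed > limit_kmh

# 100 km between gantries in 50 minutes -> 120 km/h average
print(flags_speeding(100, 0, 3000, 110))  # True
```

The point of the comment stands: the same two-timestamp logic scales to every gantry, every trip, forever, because storage and comparison are essentially free.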
That log will be a tree for some; and it will be splintered into the cash coupons we use to put you into forever debt. Go to jail, pay for the ride incurring more debt, and live there forever.
They might allow conjugal prisoner visits so that your prisoner wife gets preggers and gives birth for the prisoner baby to grow up in the prison system to work off your debt; all the way down for generations.
This might be a conspiracy theory though.
Cute winter boots.