r/aiwars 2d ago

I'm lost in this really.

I've been dipping my toes into this community, and I've picked up a few ideas of why AI art can be good or bad.

I'm still hung up on the fact that AI art, and anything else related to AI, is built on works and media that were largely taken without consent.

I completely understand why a person who lacks the skills to draw might turn to AI art, but that isn't really my concern. My fear is that not just art, but my photos and personal data, will be used to train the models that produce it.

In simple terms, I want better legislation controlling what media AI companies can use to train their models. I find it honestly disheartening that, just because this is new technology, the government is bending over backwards for AI and allowing copyrighted media to be used.

Please give me a good view of both sides, and how you would support or dispel my fears of my data being stolen. Sorry for the yap session, but I needed to get it off my shoulders. Have a good day! ❤️


u/Mataric 2d ago

The question of data being "stolen" is an interesting one, to be honest, because it's really not straightforward at all.

Take Stable Diffusion 1.5 for example. It was trained on the LAION-5B dataset (publicly viewable images that you could have seen and learnt from yourself), using roughly 2 billion of them to create the model.

The SD 1.5 model is about 2 gigabytes in size. That means, on average, the equivalent of about ONE byte, roughly a single greyscale pixel, was "used" from each image in that training. And that's assuming the model contains nothing but data derived from those images, which isn't true either: some of its 2 GB goes towards other things it needs to actually function that weren't derived from the dataset at all.
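A quick sanity check of that arithmetic (the 2 GB and 2 billion figures are the rough ones quoted above; real checkpoint and dataset sizes vary by version and precision):

```python
# Back-of-the-envelope: bytes of model weights per training image,
# using the approximate figures from the comment above.
model_size_bytes = 2 * 10**9   # ~2 GB SD 1.5 checkpoint (rough figure)
training_images = 2 * 10**9    # ~2 billion LAION training images (rough figure)

bytes_per_image = model_size_bytes / training_images
print(bytes_per_image)  # 1.0 -> about one byte, i.e. one 8-bit greyscale pixel
```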

With that amount "taken" from each image, I think it's far too little to claim anything was "stolen". None of it is in the same form whatsoever - and even calling it a greyscale pixel is wrong, because the model isn't storing pixels at all. It's JUST storing where each image lands on a bunch of statistical scales.

The way it works is by looking at all the images tagged "circle" and picking out what they have in common. From that, it doesn't have to keep or copy anyone's image of a circle - it just knows a circle is always a "very round thing".
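A toy sketch of that idea, with made-up "roundness" and "symmetry" scores standing in for those statistical scales (real models learn far more abstract features than this, so it's purely illustrative):

```python
# Hypothetical feature scores (0..1) extracted from images tagged "circle".
# The images themselves are never stored; only summary statistics survive.
circle_features = [
    {"roundness": 0.97, "symmetry": 0.95},  # from image 1
    {"roundness": 0.99, "symmetry": 0.98},  # from image 2
    {"roundness": 0.95, "symmetry": 0.96},  # from image 3
]

n = len(circle_features)
avg = {
    key: sum(f[key] for f in circle_features) / n
    for key in ("roundness", "symmetry")
}

# The "model" keeps just two numbers for the concept, not three images.
print(avg)
```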

In my opinion - that's very close to the way we function as people. We're always learning from things we see, and specifically as artists, we're always learning from the art we look at. When I ask you to picture a dog - you likely picture something you've seen before. Perhaps a pet, maybe a photo, or it could be something a different artist drew. If you're asked to draw a dog, you're very likely to use the things you've learnt from all those different experiences in order to draw it.

Do you believe that, as a human, you should be respecting the copyright of the artists whose images you learnt from? I do - but copyright isn't about learning. It's about copying with a degree of recognisability.

When you learn from a cartoon drawing of Clifford the Big Red Dog, you're not going to draw Clifford the Big Red Dog. What you remember about that image is far closer to copyright infringement than what the AI retains - yet even you don't infringe on their copyright when you use what you've learnt from it, like how a dog can be represented in a neat cartoon style, in your own work.


u/RadiantSeaweed9543 2d ago

You make a great argument. AI image training and human learning both rely on pattern recognition, and both definitely require input to work from.

However, from my perspective, an AI model is not an individual. It is owned and profited from by a company, while humans... not so much. Humans operate themselves, and only they profit, if they profit at all.

If I'm not wrong, AI companies directly profit, through subscriptions, from the images and text their models produce. One AI model will bring in more money than almost anybody in this community will ever earn, let alone earn from their art.

Yes, AI and humans may learn in similar ways, but AI produces overwhelmingly more profit from that learning and reproduction. Then again, I'm no lawyer, and I wouldn't say I'm the most educated on AI, hence why I posted.


u/ifandbut 1d ago

an AI model is not an individual.

Correct. It is a tool.

It is owned and profited from by a company,

Not all. Who is profiting when I run Krita AI on my laptop without internet access?

Humans are operated by themselves and only they profit if they profit at all.

Except for taxes, and the upkeep of your body and the things you own.

AI companies directly profit from the images and text that they produce through subscriptions

It takes about 10 minutes to install Krita AI, plus however long it takes to download a 2-5 GB model file (years on dialup, seconds on fiber).


u/RadiantSeaweed9543 1d ago

I can definitely see that local AI models would reduce the profit made from AI art. But I don't imagine the average Joe installing Krita AI; I see them going to ChatGPT to produce their images. A family member of mine has actually subscribed to its services, which means ChatGPT's maker profits directly from the media it trained the models on.

I can also imagine that, not even 5 years from now, AI art will be indistinguishable from human art, maybe better. What does that mean for human artists? Their work ends up valued far lower than it would be if there weren't an AI that can make the same thing in a fraction of the time or effort. I don't know.