
Used all the way. I haven’t looked at prices recently but I have gotten 8TB SAS drives for $40 each. Hard to beat that.
That’s not really relevant to the discussion. The number of users doesn’t matter. The point is that people will still create things even if there’s no money in doing it.
Jellyfin is another example of something I use every day that is developed completely for free. There is no difference whether 100 people or 100 million people use it. It exists because the people who built it want it to exist.
If we didn’t have copyright then people wouldn’t be able to justify putting effort into creating content because they wouldn’t be guaranteed financial compensation for the time and effort they put in.
The irony of saying this on Lemmy. Lemmy is a piece of software developed and distributed for free to people who host it for free. If somebody truly wants to make something, they will create it even without a profit incentive.
In what way? I share my server with 8 friends/family and it does everything I need it to.
Or any proof of stake coin like Ethereum, which doesn’t require any mining at all. The electricity argument is extremely out of date for most coins besides Bitcoin itself.
As far as I know GPU mining is pretty much completely dead because after Ethereum switched the yields on everything else tanked.
There’s a reason it’s won game of the year so many years in a row.
If you want to do less math you can just drop some zeroes and say it's the same as making $70k while losing $2.50.
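Dropping the same number of zeroes from both figures works because scaling both by the same factor leaves the ratio between them unchanged. A quick sanity check (the factor of 1,000 here is purely illustrative; the original figures aren't in this comment):

```python
# Scaling both numbers by the same hypothetical factor leaves the ratio
# unchanged, so "$70k while losing $2.50" is proportionally identical
# to the same comparison with the zeroes put back.
factor = 1_000
income, loss = 70_000, 2.50

assert (income * factor) / (loss * factor) == income / loss
print(income / loss)  # the income-to-loss ratio is 28,000x either way
```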
Like the comment I replied to already explained, this information is necessary to make informed development decisions. If you don’t know who is using what feature you might be wasting resources on something barely anyone uses while neglecting something everyone needs.
You also need some of that data for security purposes. You can’t implement rate limiting or prevent abuse if you can’t log and track how your servers are being interacted with.
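To sketch why request logging underpins rate limiting: the limiter has to count something, and that something is a per-client record of recent requests. A minimal, hypothetical in-memory version (a real service would typically back this with Redis or similar, and the names here are my own):

```python
import time
from collections import defaultdict, deque

# Minimal sliding-window rate limiter. Without tracking who sent each
# request and when, there is nothing to count against the limit.
WINDOW_SECONDS = 60
MAX_REQUESTS = 100
request_log = defaultdict(deque)  # client id -> timestamps of recent requests

def allow_request(client_id, now=None):
    now = time.monotonic() if now is None else now
    log = request_log[client_id]
    # Drop entries that have aged out of the window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    if len(log) >= MAX_REQUESTS:
        return False  # over the limit, reject
    log.append(now)
    return True
```

The point being: the timestamps and client identifiers this keeps are exactly the kind of interaction data the comment above is talking about.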
Yeah as someone who has worked in web development for over 20 years everything in here is completely standard. Almost every major website in existence collects this kind of analytical data.
The hardware survey doesn’t ask every single user, it just gets a sample. So it probably just happened to hit a few more Windows 7 people this month.
This happens to me constantly. Just the other day I asked some friends for something and they sent the literal exact opposite of that thing. Pretend I asked for blue with red stripes; they gave me green with yellow polka dots. And it wasn't just one person, it was three separate people who all decided that made sense for some reason.
I was extremely specific too, even more than usual because I know people constantly misinterpret me. I made extra sure to not use any language with vague meanings and it still happened anyway. It’s like we live in alternate realities where words have completely different meanings.
It makes me not want to talk to people at all.
Again, even an exact copy is not stealing. It’s copyright infringement. Theft is a different crime.
But paraphrasing is not copyright infringement either. It’s no different than Wikipedia having a synopsis for every single episode of a TV series. Telling someone about what a work contains for informational purposes is perfectly fine.
Sorry, I misinterpreted what you meant. You said "any AI models" so I thought you meant the model itself should somehow know where the data came from. Obviously the companies training the models can catalog their data sources.
But besides that, if you work on AI you should know better than anyone that removing training data is counter to the goal of fixing overfitting. You need more data to make the model more generalized. All you’d be doing is making it more likely to reproduce existing material because it has less to work off of. That’s worse for everyone.
What you’re asking for is literally impossible.
A neural network is basically nothing more than a set of weights. If one word makes a weight go up by 0.0001 and then another word makes it go down by 0.0001, and you do that billions of times for billions of weights, how do you determine what in the data created those weights? Every single thing that’s in the training data had some kind of effect on everything else.
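That point can be illustrated with a toy sketch (hypothetical numbers, nothing like a real LLM): after enough tiny updates, the final weight is just an accumulated total, and many completely different training histories produce the identical value.

```python
import random

# Toy illustration: one "weight" nudged up or down by 0.0001 per update,
# each update standing in for the effect of one piece of training data.
# (Integer units of 1e-4 keep the arithmetic exact.)
random.seed(0)
updates = [random.choice([1, -1]) for _ in range(100_000)]
weight = sum(updates) * 0.0001

# The final weight cannot be decomposed back into the 100,000 individual
# updates that produced it: a completely different ordering of the same
# nudges yields the identical weight.
reordered = sorted(updates)
assert sum(reordered) * 0.0001 == weight
print(weight)
```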
It’s like combining billions of buckets of water together in a pool and then taking out 1 cup from that and trying to figure out which buckets contributed to that cup. It doesn’t make any sense.
If the model isn’t overfitted it’s also not even copying. By their nature LLMs are transformative which is the whole point of fair use.
For me on Arch, Flatpaks are kinda useless. I can maybe see the appeal for other distros but Arch already has up-to-date versions of everything and anything that’s missing from the main repos is in the AUR.
I also don't like how it's a separate package manager, they take up more space, and to run things from the CLI it's `flatpak run com.website.Something` instead of just `something`. It's super cumbersome compared to using normal packages.
Same here. Switched to Arch in 2015 so I am also coming up on the 9 year mark. I have had very few issues, and the ones I have had were usually my fault for doing something stupid. I used Windows, OS X, and Ubuntu previously and compared to those Arch is a dream. Hence why I’ve stuck with it for so long now.
> but after fresh install

See, there's your problem. If you never re-install, this is no longer a factor. Sure I had to do those things, but I had to do them exactly once like 8 years ago…
That’s pretty neat!