

There can be theoretical audit or blame issues: since you’re not “paying”, how does the company pass the buck (via SLA contracts) if something fucks up with LE?
Ironically, the shortening of cert lifetimes is what pushed me to automated systems and away from the traditional paid trust providers.
I used to roll a 1-year cert for my CDN, manually buying renewals and going through the process of signing and uploading the new ones. It wasn’t particularly onerous, but then they moved to (I think) either a 3- or 6-month maximum signing period, which was the point where I just automated it with Let’s Encrypt.
I’m in general not a fan of how we do root of trust on the web; I’d much prefer DANE had caught on, where you can pin a cert at the DNS level, secured with DNSSEC and trusted through IANA and the root zone.
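For the unfamiliar, DANE pinning boils down to publishing a TLSA record in your DNSSEC-signed zone. Here’s a minimal sketch of deriving the common “3 1 1” form (DANE-EE, SPKI selector, SHA-256) in Python, assuming the `cryptography` package; the cert path and domain are placeholders:

```python
# Sketch: derive the payload of a DANE TLSA "3 1 1" record,
# i.e. the SHA-256 of the cert's DER-encoded SubjectPublicKeyInfo.
# Assumes the `cryptography` package; "server.pem" is a placeholder.
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives import serialization

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# Hash the public key info rather than the whole cert, so the
# record survives re-issuance as long as the key stays the same.
spki = cert.public_key().public_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
digest = hashlib.sha256(spki).hexdigest()

# Published (and DNSSEC-signed) as something like:
#   _443._tcp.example.com. IN TLSA 3 1 1 <digest>
print(f"_443._tcp.example.com. IN TLSA 3 1 1 {digest}")
```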
IP law needs overhauling, but these are the last people (aside from Disney et al.) I’d trust to draft the new laws.
The US manages to store 1.5B pounds of cheese it doesn’t do anything with; I think China can handle constructing some warehouses to hold what it digs up from the ground.
I don’t think it’s hyperbole to say a significant percentage of Git activity happens on GitHub (and other “forges”) – which are themselves a far cry from efficient.
My ultimate takeaway on the topic is that we’re stuck with Git’s very counterintuitive porcelain and merely satisfactory plumbing, regardless of performance/efficiency. But if Mercurial had won out, we’d still have its better interface (and, IMO, workflow), and any performance problems could have been addressed by a rewrite in C (or the Rust one that is so very slowly happening).
If only. This is “modern” PhysX; we’d need the source to the original Ageia PhysX 2.X branch to fix it properly.
The amount of stupid AI scraping behavior I see even on my small websites is ridiculous: they’ll endlessly pound identical pages as fast as possible over an entire week, apparently not even checking if the contents changed (the conditional-request sketch below is all that would take). Probably some vibe-coded shit that barely functions.
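For contrast, here’s roughly what a polite re-crawl looks like using HTTP conditional requests, so unchanged pages cost close to nothing. A sketch in Python assuming the `requests` package; the URL is a placeholder:

```python
# Sketch: a polite re-crawl with HTTP conditional requests.
# Assumes the `requests` package; the URL is a placeholder.
import requests

url = "https://example.com/some-page"

# First fetch: remember the validators the server hands back.
first = requests.get(url, timeout=10)
etag = first.headers.get("ETag")
last_modified = first.headers.get("Last-Modified")

# Later re-crawl: echo them back so the server can answer 304
# instead of re-sending the whole body.
headers = {}
if etag:
    headers["If-None-Match"] = etag
if last_modified:
    headers["If-Modified-Since"] = last_modified

second = requests.get(url, headers=headers, timeout=10)
if second.status_code == 304:
    print("Unchanged since last crawl; nothing re-downloaded.")
else:
    print(f"Changed: fetched {len(second.content)} bytes.")
```

And if a server doesn’t emit validators at all, even hashing the body locally and backing off on identical content would beat hammering the same pages for a week.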
Yeah, electric motors are what I notice the most, be it on washers/dryers, garbage disposals (which range across 1/3, 1/2, 3/4, and 1 HP), and more.
Probably a mix of Z systems; that stuff goes back 20-odd years, and even then, older code can still run on new Z systems, which is something IBM brags about.
Mainframes aren’t old, they’re just niche technology, and that includes enterprise Java software.
Uh, Java is specifically supported by IBM on the Power and Z ISAs, and they have both their own distribution and guides for writing Java programs for mainframes in particular.
This shouldn’t be a surprise, because after COBOL, Java is the most enterprise language that has ever enterprised.
Reddit is becoming such a shithole anyway; the site barely functions in a mobile browser now, and half the time it has API errors or fails to load.
If they were a small or free service I wouldn’t have much issue, but they do charge, so I don’t think it’s too much to ask that they at least attempt to scrape the wider web.
Building their own database seems the prudent thing long-term, and I don’t doubt they could shore up coverage beyond Bing’s. They don’t have to replace the other indexes wholesale, just supplement them.
They have smallweb and news indexing, but other than that, AFAICT, they rely completely on other providers. Which is a shame: Google allows submitting sites for indexing and notifies you if it can’t index them.
Running a scraper doesn’t need to cover everything, since they have access to other indexes, but they really should be developing that capability instead of relying on Bing and other providers to deliver good results, or results at all.
Small web always returns 0 results for anything that isn’t extremely broad, unfortunately.
I’ve been using Kagi for the last year+.
Personally, I wish they’d tone down the AI stuff that ruined Google, but at least you can turn most of it off.
Their results are okay, a little better than Bing’s, but obviously they’re limited by their existing index providers. I wish they’d run their own spiders and crawl for their own data, since I think Bing misses a lot of coverage of obscure websites.
In general I find the weighting of modern indexes to be subpar, though the SEO industry has made that a hard problem to tackle. I wish more small websites and forums were ranked higher, and AI slop significantly downranked.
Also not a huge fan of the company and a lot of its ardent customers, who heavily protested a suicide-prevention popup that appeared if you searched for how to kill yourself.
I really wish Matrix had been more successful, but it has some pretty core problems that prevented it from gaining more traction.
It fell into the same trap as XMPP, though perhaps even worse, with a focus more on its protocol and specification than on a single unified product vision. The reference server implementation, Synapse, is slow and written in a language (Python) that isn’t optimal for the purpose, while alternative server implementations have been left incomplete and unsupported. It took a long time for them to figure out voice and video and get it working well, and the “user flow” still isn’t at Discord levels.
I’ve rooted for Matrix for a long time, but as a former XMPP evangelist, to me the writing on the wall says it isn’t suited for success either. I’d love to be wrong, but I don’t see a way through.
No thanks; people hop from centralized platform to centralized platform thinking things will be different.
Consolidated power, especially when fully captured by market forces, will always enshittify; the only way out is federation.
I finally made the move: I set up an account here and am weaning off my Reddit usage.
It’s getting so bad on there: so many bots, trolls, and paid agitators, plus the uptick in fascist apologists. Smaller communities with higher bars to entry produce better conversation, in my experience.
I think it’s “the algorithm”: people basically just want to be force-fed “content”. Look how successful TikTok is, largely because it has an algorithm that very quickly narrows down user habits and provides endless distraction.
Mastodon and other fediverse alternatives have, by comparison, very simple feeds and ways to surface content; they simply don’t “hook” people the same way, and that’s the competition.
On one hand, we should probably do away with “the algorithm”, for reasons not enumerated here for brevity; on the other, maybe the fediverse should build something to accommodate this demand, because otherwise the non-fedi sites will.