

The “you will all submit to RTO and like it” machine has finally found a way to package this message in a way that wide-eyed internet activists will support. Congratulations to them, I guess.
Obviously, yes, but as a user at that level of knowledge, you either don’t know about that or don’t feel comfortable enough to deal with it.
Think of this like one of those Steam reviews: 4,000 hours played, do not recommend.
Debian – I just wasn’t ready for it. Got told “oh, you’re using Mint? That’s nice, but you should try out Debian, it’s the Real Deal™” – but the reason I was using Mint back then in the first place was that it was my first step out of the Windows ecosystem; I was scared shitless and didn’t understand anything. What do you mean I don’t get a huge pretty start menu?! How am I supposed to find stuff then?!
The prime problem is that every social space eventually becomes a circlejerk. Bots and astroturfing exacerbate the problem, but it exists perfectly well on its own – in the early 2000s I had the misfortune of running across plenty of gigantic, years-long circlejerks where definitely no bots or nefarious foreign manipulators were involved (I’m talking console wars, Harry Potter ship wars, stupid shit like that). People form circlejerks the same way salts form crystals. It’s just in their nature.
The thing with circlejerks isn’t that there’s overwhelming agreement on some subject. You’ll get dunked on in most any social media space for claiming that the Earth is flat or that Putin is a swell guy; that in itself is obviously not a problem. What makes a circlejerk is that takes get cheered and upvoted not in proportion to how well they are anchored in reality, but in proportion to how useful they are for galvanizing allies and disrupting enemies. Whoever shouts “glory to the cause” in the most compelling way gets all the oxygen. At that point the amount of brain rot is only going to increase. No matter how righteous the cause, there inevitably comes a point where you can go on the Righteous Cause Forum, post “2+2=5, therefore all glory to the cause” and get 400 upvotes.
Everyone talks a big game about how much they like truth, reason and moral consistency, but in the end, when it’s just them and the upvote button – “do I stop and honestly examine this argument that gives me warm fuzzy feelings?”, “is it really fair to dunk on Hated Group X by applying a standard I would never apply to anyone else?” – the true colors show. It’s depressing, and it turns most of social media into information silos where totalizing ideologies go to get validated; if you feel alienated by this, then clearly that space isn’t for you.
The only outcome I can imagine is the brigade closing this write-up as a duplicate and dragging off the author kicking and screaming, never to be seen again, like what happens to the vtuber protagonist in The Waldo Moment. The idea has grown too powerful for even him to contain it anymore.
I do exactly this kind of thing for my day job. In short: reading a syntactic description of an algorithm written in assembly language is not the equivalent of understanding what you’ve just read, which in turn is not the equivalent of having a concise and comprehensible logical representation of what you’ve just read, which in turn is not the equivalent of understanding the principles according to which the logical system thus described will behave when given various kinds of input.
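To make that distinction concrete, here’s a toy sketch (invented for illustration, not anything from an actual job): the same made-up routine at three levels – the raw listing you read, the mechanical transliteration, and the concise logical characterization that actually lets you predict behavior.

```python
# Level 1: the syntactic reading – what the disassembler shows you
# (a made-up x86-ish routine; input in edi, result in eax):
#
#       xor  eax, eax
#   loop:
#       test edi, edi
#       jz   done
#       add  eax, edi
#       shr  edi, 1
#       jmp  loop
#   done:
#       ret

def level2_transliteration(edi: int) -> int:
    """Level 2: a faithful register-by-register translation.
    Correct, but it tells you nothing about *what* is computed."""
    eax = 0
    while edi != 0:
        eax += edi
        edi >>= 1          # assumes a non-negative input
    return eax

def level3_characterization(n: int) -> int:
    """Level 3: the concise logical representation – the loop sums
    n + n//2 + n//4 + ..., which collapses to the closed form
    2n minus the number of set bits in n. Only at this level can
    you predict behavior (roughly 2n) without running the loop."""
    return 2 * n - bin(n).count("1")

# Sanity check that the levels agree:
assert all(level2_transliteration(n) == level3_characterization(n)
           for n in range(1000))
```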
This is an issue that has plagued the machine learning field since long before this latest generative AI craze. Decision trees you can understand, SVMs and Naive Bayes too, but the moment you get into automatic feature extraction and RBF kernels and stuff like that, it becomes difficult to understand how the verdicts issued by the model relate to the real world. Having said that, I’m pretty sure GPTs are even more inscrutable and have made the problem worse.
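If you haven’t felt this gap firsthand, here’s a minimal scikit-learn sketch (toy data, real API) of the difference between a model you can read and one you can’t:

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y = iris.data, iris.target

# The decision tree's verdict is a chain of human-readable threshold
# rules you can print and argue about:
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=iris.feature_names))
# |--- petal width (cm) <= 0.80
# |   |--- class: 0 ...

# The RBF-kernel SVM's verdict is a weighted sum of Gaussian bumps
# centered on support vectors – accurate, but there's no rule to read:
svm = SVC(kernel="rbf").fit(X, y)
print(svm.n_support_)   # just counts of support vectors per class
```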
no ethical people without explainable people
Jules Verne wasn’t a technical expert either, but here we are somehow. Don’t underestimate a keen and observant imagination.
Yes, definitely. It instigated a lot of turmoil and a gamut of spicy takes regarding the fundamental question of whether password managers as a model “work”. On the one hand, some people laughed at the idea of putting your passwords in the cloud and touted post-it notes as a more secure alternative. On the other hand, people extolled the virtues of the cryptographic model at the base of password managers, claiming that even if tomorrow the entire LastPass executive org went rogue, your passwords would still be safe.
As far as I understand, the truth is more nuanced. Consider that this breach took place 9 months ago, but you’re only reading about cracked passwords now. It seems like the model did what it was supposed to do, and the people behind the breach had to patiently brute-force victim master passwords. This means they got to the least secure passwords first: if you picked “19 deranged geese obliterating a succulent dutch honey jar at high noon” or whatever, you’re probably safe. But it doesn’t strike me as too wise to get complacent on account of this, either. Suppose next time the attackers get enough access to “tweak” the LastPass Chrome extension to exfiltrate passwords. Now what?
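If you’re curious why the geese sentence holds up, some back-of-the-envelope math – the guessing rate and wordlist size below are assumptions for illustration, not LastPass specifics, and a deliberately slow KDF like PBKDF2 only makes the attacker’s life worse:

```python
# Rough brute-force arithmetic; all figures are assumed for
# illustration, not measured from the actual breach.

GUESSES_PER_SEC = 1e10        # assumed offline attacker with GPU rigs;
                              # a slow KDF cuts this by orders of magnitude
SECONDS_PER_YEAR = 31_557_600

def avg_years_to_crack(keyspace: float) -> float:
    """Expected time to hit the right password (half the keyspace)."""
    return keyspace / 2 / GUESSES_PER_SEC / SECONDS_PER_YEAR

short = 95 ** 8               # 8 printable-ASCII characters
phrase = 7776 ** 7            # 7 random words off a Diceware-style list

print(f"8-char password:   {avg_years_to_crack(short):9.4f} years")
print(f"7-word passphrase: {avg_years_to_crack(phrase):.2e} years")
# -> the short password falls in days; the passphrase takes billions
#    of years, and the geese sentence is longer still
```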
The thing is, we’re stuck between a rock and a hard place with passwords. We already know it’s impractical to ask users to remember 50 different secure passwords. So assuming we solve this with a password vault, there’s no optimal place to keep it. In the cloud you get incidents like this. Outside the cloud, one day you’re going to lose your thumb drive, your machine, your whatever. “So keep a backup” – but who among your normie relatives is honestly going to do this, and do you really trust a backup you haven’t used in 5 years to work in the moment of truth? I don’t know if there is any proper solution in the immediately visible solution space, and if there is, I don’t know if anyone has the financial incentive to implement it, sell it, buy it. People say the future is in passwordless authentication, FIDO2 and the like, but try googling how to actually use one of these for your 5 most-used accounts; you’re not going to come out of the experience very thrilled.
If you’re the company CEO and you’ve spent years shouting a marketing pitch of “scooters! Scooters! Scooters instead of walking! Scooters! They’re the future!” then yes, it’s a bad look if you walk, never mind if you issue a company-wide walking mandate.
Reading this comment section is so strange. Skepticism about generative AI seems to have become some kind of professional sport on the internet.
Consensus in our group is that generative AI is a great tool. Maybe not perfect, but the comparison to the metaverse is absurd: no one asked for the metaverse or needed it for anything, whereas GPT has literally bailed us out of a difficult situation several times. For example, a proof of concept needed to be written in a programming language that no one in the group had enough experience with. Without GPT, this could have easily cost someone a week. With GPT assistance – proof of concept ready in less than a day.
Generative AI does suffer from a host of problems. Hallucinations, jailbreaks, injections, reality 101 failures – believe me, I’ve encountered all of these intimately, as I’ve had to utilize GPT for some of my day job tasks, often against its own better judgment and despite its own woefully lacking capacity to deal with the task. What I think is interesting is a candid discussion: why do these issues persist? What have we tried? What techniques can we try next? Are these issues intractable in some profound sense, and do they constitute a hard ceiling for where generative AI can go? Is there an “impossibility theorem for putting AI on autopilot”? Or are these limitations just artifacts we can engineer away and route around?
It seems like instead of having this discussion, it’s become in vogue to wave around the issues triumphantly and implicitly declare the field successfully dunked on, and the discussion over. That’s, to be blunt, reductive. Smartphones had issues, the early internet had issues. Sure, “they also laughed at Bozo the Clown” and all that, but without a serious discussion of the landscape right now – of how far away we are from mitigating these issues and why – a lot of this “ha ha suck it AI” discourse strikes me as deeply performative. Like, suppose a year from now OpenAI solves hallucinations. The issue is just gone. Do all the cool kids who sneered at the invented legal precedents, crafted their image as knowing better than the OpenAI dweebs, elegantly implied that hallucinations are proof the entire field is a stupid, useless dead end – do they lose any face? I think they don’t. And I think that’s why this sneering has become such a lucrative online professional sport.
In the future your browser will be able to remotely attest that you have no viable security solution to block the infection and no working backups as a condition for being served these malicious ads, increasing the ad value since they can now be more precisely targeted.
Well, fine, and I can’t fault newly published material having a “no AI” clause in its terms of service. But that doesn’t mean we get to dream this clause into being retroactively for all the works ChatGPT was trained on. Even the most reasonable law in the world can’t be enforced on someone who broke it 6 months before it was legislated.
Fortunately the “horses out of the barn” effect here is maybe not so bad. Imagine the FOMO and user frustration when ToS & legislation catch up and ChatGPT suddenly has no access to the latest books, music, news, research, everything. Just stuff from before authors knew to include the “hands off” clause – basically like the knowledge cutoff, but forever. It’s untenable; OpenAI will be forced to cave and pay up.
With this whole “you pay for a screen package, love is sharing a password” business they just dug a PR hole for themselves. If they’d offered price X per screen from the start, then introduced a “same household discount” or whatever, we wouldn’t have all this outrage. But execs can only see as far as next quarter, and here we are.
“Chessify” on Android worked for me (it also has the advantage that you just take a picture, instead of setting up the position by hand). Unfortunately, 1 minute later the game gave me a chicken that I had to keep fed with worm emojis, so I created a stockpile of worms for the chicken and it died of overfeeding. I rage-quit the game on the spot.
I don’t disagree, but don’t pretend you haven’t effectively set up the equal and opposite thing here. No mods will ban anyone, but other than that every comment section is an implicit competition for the best pro-Palestinian talking point, even when decency demands otherwise. We don’t talk about Oct 7, and if we do it was friendly fire, and if it wasn’t it was a natural consequence of Israeli policy in Gaza, and that is the real issue. Yeah, fine, we admit the attack was not a hundred percent morally sound if you insist so much, but we don’t assign a moral weight to it or linger on it, because hey, when you make innocents suffer you sow the wind and eventually reap the whirlwind; oh sure, Hamas’ response was ugly, but what can you do, you know, be a bastard and it comes around. Now it is our moral duty to call loud and clear for a ceasefire – the cycle of violence must stop.
I know what you’re thinking: that’s not fair! That’s not my opinion! Yeah, the circlejerk doesn’t care about your private opinion. You know better than to contradict any of the above around here in writing, and that’s enough. I’m sure a lot of people privately think “oh… tbh that last IDF strike was unconscionable” before posting on /r/worldnews only the part of their opinion they know the crowd will like better.