• 0 Posts
  • 28 Comments
Joined 2 years ago
Cake day: June 13th, 2023

  • Generally, if someone’s being a total asshole so severely that they have to be yeeted with several thousand other unaware bystanders, I expect to see a bunch of examples within the first… 2, maybe 3, links.

    Until someone can point me to a concise list of examples (actual data), I find it more disturbing that an admin on another server can yeet my account because they make noise on a Discord server. I mean, yes, federating is a feature, but why even offer the ability to enroll users? Maybe for a group of friends, or something, but just rando users is nothing but a liability to everyone involved.



  • I got around 5 links deep for each of the links in the admin’s post, and fuck if I know. There was an argumentative user, but they were articulate and thoughtful. Not dropping slurs or wasting space with nonsense, but still bordering on “edgy”. The person pushing the defederation appeared to be bullying them and on a power trip.

    It was embarrassing. That’s all I took away. (My opinion can change if someone digs through the shitpiles of nothingness to pull up some actual naughty posts, but that’s not going to be me.)


  • I almost thought I had written your comment and completely forgotten about it. No, I just almost made the exact same comment and want that hour of my life back.

    If there was some over the top racist rant, I sure didn’t see it. And the admin pushing for the defederation sounds so bizarre. Bizarre is the best word I could come up with because “petty” makes me think it was like high school politics. This is closer to a grade school sandbox argument.

    The worst I saw was “defedfags” and it was used in a way that was meant to highlight how they never said anything offensive. Like saying, “If you thought what I said before was offensive, let’s see how you respond to something intended to be negative.”

    The crazy thing is that the decision is being made because the admin just liked a post. It’s not even because of the post content - which contains nothing controversial and appeared maybe 8 times in my Lemmy/kbin feed yesterday.

    Editing to add that this is the article: https://kbin.social/search?q=wakeup+call


  • At first glance, I’d assumed JXL was another attempt at JPEG 2000 by a few bitter devs, so I ignored it.

    Yeah, my examples/description were more intended to be conceptual for folks that may not have dealt with the nitty gritty. Just mental exercises. I’ve only done a small bit of image analysis, so I have a general understanding of what’s possible, but I’m sure there are folks here (like you) that can waaay outclass me on the details.

    These intermediate-to-deep dives are very interesting. Not usually my cup of tea, but this does seem big. Thanks for the info.


  • (fair warning - I go a little overboard on the examples. Sorry for the length.)

    No idea on the details, but apparently it’s more efficient for multithreaded reading/writing.

    I guess that you could have a few threads reading the file data into memory at once: while one CPU core reads the first 50% of the file, a second can be reading the second 50% (I’m sure it’s not actually like that, but as a general example). Image compression usually works by some form of averaging over an area, so figuring out how to chop the area up, such that those patches can load cleanly without data from the adjoining patches, is probably tricky.
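    (If it helps, here’s a rough Python sketch of that “two cores, two halves” idea. Purely illustrative - real decoders don’t literally split the raw file like this, and the path is just whatever file you point it at.)

    ```python
    import os
    import threading

    def read_range(path, offset, length, parts, idx):
        # Read one byte range of the file into its slot.
        with open(path, "rb") as f:
            f.seek(offset)
            parts[idx] = f.read(length)

    def parallel_read(path):
        # Split the file in half and read both halves on separate threads.
        size = os.path.getsize(path)
        half = size // 2
        parts = [None, None]
        threads = [
            threading.Thread(target=read_range, args=(path, 0, half, parts, 0)),
            threading.Thread(target=read_range, args=(path, half, size - half, parts, 1)),
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return b"".join(parts)
    ```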

    I found this semi-visual explanation with a quick google. The image in 3.4 is kinda what I’m talking about. In the end you need equally sized pixels, but during compression, you’re kinda stretching out the values and/or mapping of values to pixels.

    Not an actual example, but highlights some of the problems when trying to do simultaneous operations…

    Instead of pixels 1, 2, 3, 4 being colors 1.1, 1.2, 1.3, 1.4, you apply a function that assigns the colors 1.1, 1.25, 1.25, 1.4. You now only need to store the values 1.1, 1.25, 1.4 (along with location) - a 25% reduction in color data. If you wanted to cut that sequence in half for 2 CPUs with separate memory blocks to read at once, you lose some of that optimization: now CPU1 and CPU2 both need color 1.25, so it gets duplicated. Not a big deal in this example, but these bundles of values can span many pixels and intersect with other bundles (like color channels - blue might be most efficiently read in 3-pixel-wide chunks, green in 2-pixel-wide chunks, and red in 10-pixel-wide chunks). Now where do you chop those pixels up for the two CPUs? Well, we could use our “average the 2 middle values in 4-pixel blocks” approach, but we’d be leaving a lot of performance on the table with empty or useless values. So, we can treat each of those basic color values as independent layers.
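    (Here’s that 4-pixel example as a toy Python snippet, just to make the numbers concrete. Not any real codec, obviously.)

    ```python
    def compress_block(pixels):
        # Average the two middle values of a 4-pixel block:
        # [1.1, 1.2, 1.3, 1.4] -> stored values [1.1, 1.25, 1.4]
        mid = (pixels[1] + pixels[2]) / 2
        return [pixels[0], mid, pixels[3]]

    block = [1.1, 1.2, 1.3, 1.4]
    stored = compress_block(block)                    # 3 values instead of 4: 25% less color data

    # Split the same block between two workers and the shared middle value
    # has to be duplicated, giving back part of the savings.
    worker1 = [block[0], (block[1] + block[2]) / 2]   # [1.1, 1.25]
    worker2 = [(block[1] + block[2]) / 2, block[3]]   # [1.25, 1.4]
    ```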

    But, now that we don’t care how they line up, how do we display a partially downloaded image? The easiest way is to not show anything until the full image is loaded. Nothing nothing nothing Tada!

    Or we can say we’ll wait at the end of every horizontal line for the values to fill in, display that line, then start processing the next. This is the old “picture slowly loading in one line at a time” cliche. Makes sense from a human interpretation perspective.

    But, what if we take 2D chunks and progressively fill in sub-chunks? If every pixel is a different color, it doesn’t help, but what about a landscape photo?

    First values in the file: top half is blue, bottom green. 2 operations and you can display that. The next values divide each of those halves in half again. If it’s a perfect blue sky (ignoring the horizon line), you’re done and the user can see the result immediately. The bottom half will have its values refined as more data is read, and after a few cycles the user will be able to see that there’s a (currently pixelated) stream right up the middle and some brownish plants on the right, etc. That’s the “image loads in blurry and then comes into focus” cliche.

    All that is to say, if we can do that 2D chunk method for an 8k image, maybe we don’t need to wait until the full 8k resolution is loaded if we need smaller images for a set. Maybe we can stop reading the file once we have a 1024x1024 pixel grid. We can have 1 high-res image of a stoplight, but treat it as any resolution less than the native high res, thanks to the progressive loading.
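    (A little numpy sketch of that coarse-to-fine idea. It’s just a toy mean-of-blocks pyramid with made-up “sky over grass” values - nothing like the actual JPEG XL transform.)

    ```python
    import numpy as np

    def block_average(image, blocks):
        # Split the image into a blocks x blocks grid and replace each cell
        # with its mean color: a stand-in for one progressive refinement pass.
        h, w = image.shape[:2]
        out = np.zeros((blocks, blocks) + image.shape[2:])
        for i in range(blocks):
            for j in range(blocks):
                cell = image[i * h // blocks:(i + 1) * h // blocks,
                             j * w // blocks:(j + 1) * w // blocks]
                out[i, j] = cell.mean(axis=(0, 1))
        return out

    # “Sky over grass”: top half blue-ish, bottom half green-ish.
    img = np.zeros((512, 512, 3))
    img[:256] = [0.2, 0.4, 0.9]
    img[256:] = [0.2, 0.7, 0.3]

    # Coarse-to-fine previews: the 2x2 pass already shows sky on top and
    # grass below, and you can stop at whatever grid is “good enough”
    # instead of processing the full resolution.
    for blocks in (2, 4, 8, 16):
        preview = block_average(img, blocks)
    ```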

    So, like I said, this is a general example of the types of conditions and compromises. In reality, almost no one deals with the files on this level. A few smart folks write libraries to handle the basic functions and everyone else just calls those libraries in their paint, or whatever, program.

    Oh, that was long. Um, sorry? haha. Hope that made sense!


  • Oh, I’ve just been toying around with Stable Diffusion and some general ML tidbits. I was just thinking from a practical point of view. From what I read, it sounds like the files are smaller at the same quality, require the same or less processor load (maybe), are tuned for parallel I/O, can be encoded and decoded faster (with less difference in performance between the two), and support progressive loading. I’m kinda waiting for the catch, but haven’t seen any major downsides, besides less optimal performance for very low resolution images.

    I don’t know how they ingest the image data, but I would assume they’d be constantly building sets, rather than keeping lots of subsets, if just for the space savings of de-duplication.

    (I kinda ramble below, but you’ll get the idea.)

    Mixing and matching the speed/efficiency and storage improvement could mean a whole bunch of improvements. I/O is always an annoyance in any large set analysis. With JPEG XL, there’s less storage needed (duh), more images in RAM at once, faster transfer to and from disc, fewer cycles wasted on waiting for I/O in general, the ability to store more intermediate datasets and more descriptive models, easier to archive the raw photo sets (which might be a big deal with all the legal issues popping up), etc. You want to cram a lot of data into memory, since the GPU will be performing lots of operations in parallel. Accessing the I/O bus must be one of the larger time sinks and CPU load becomes a concern just for moving data around.

    I also wonder if the support for progressive loading might be useful for more efficient, low resolution variants of high resolution models. Just store one set of high res images and load them in progressive steps to make smaller data sets. Like, say you have a bunch of 8k images, but you only want to make a website banner based on the model from those 8k res images. I wonder if it’s possible to use the progressive loading support to halt reading in the images at 1k. Lower resolution = less model data = smaller datasets to store or transfer. Basically skipping the downsampling.
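    (No clue whether the ML tooling exposes this for JPEG XL yet, but plain old JPEG already has a rough version of the trick: Pillow’s draft() asks the decoder for a 1/2, 1/4, or 1/8 scale decode instead of the full-resolution one. Something like this, with a made-up filename:)

    ```python
    from PIL import Image

    def load_lowres(path, target=(1024, 1024)):
        # draft() hints the JPEG decoder to decode at a reduced scale,
        # so an 8k source comes in much smaller without a full decode
        # followed by a separate downsampling pass.
        im = Image.open(path)
        im.draft("RGB", target)   # JPEG only; shown as an analogue of the JXL idea
        return im.convert("RGB")

    thumb = load_lowres("some_8k_photo.jpg")   # hypothetical file
    ```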

    Any time I see a big feature jump, like better file size, I assume the trade off in another feature negates at least half the benefit. It’s pretty rare, from what I’ve seen, to have improvements on all fronts.





  • Probably based on the Cap’n Crunch whistle pay phone hack.

    Someone correct me if I’ve missed a few bits, but here’s the story…

    First, a little history.

    Payphones were common. If you’re younger, you’ve probably seen them in movies. To operate them, you picked up the handset, listened for the dial tone (to make sure no one had yanked the cord loose), inserted the amount shown by the coin slot, and then dialed. You had a limited amount of time before an automatic message would ask you to add more money. If you dialed a long distance number, a message would play telling you how much more you needed to insert.

    There were no digital controls to this - no modern networking. The primitive “computers” were more like equipment you’d see in a science class. So, to deal with the transaction details, the coin slot mechanism would detect the type of coin inserted, mute the microphone on the handset, and transmit a series of tones. Just voltage spikes. The muting prevented the background noise from interfering with the signal detection. Drop a quarter in the slot and you’d hear the background noise suddenly disappear followed by some tapping sounds (this was just bleed through).

    It’s also relevant to know that cereals used to include a cheap little toy inside. At one point, Cap’n Crunch cereal included a whistle that produced a 2600Hz tone.

    The story goes that someone* figured out that the tones sent by the payphones were at 2600Hz - same as the whistle. You could pick up a payphone handset, puff into the whistle a certain number of times, and it would be detected as control signals (inserting money).

    That’s right! Free phone calls to anywhere. I’m hazy on the specifics, but I’m pretty sure there were other tricks you could do, like directly calling restricted technician numbers, too. The reason the 2600Hz tone was special was something like it being used as a general control signal that didn’t trigger billing.

    It knocked the idea of phone hacking, or “phreaking”, from a little-known quirk to an entire movement. Some of the stuff was wild, and if you’re interested, look up the different “boxes” that people distributed blueprints for. Eventually, the phone companies caught on and started making it harder to get at the wires, along with building more sophisticated coin receptacles.

    If you’ve ever seen the magazine 2600 back in the 90s and early 00s, that’s the origin of the name.

    All that is to say, if you knew nothing about technology and watched a guy whistle into a phone to get special access, you’d probably be freaked out. Who knows what that maniac could do with a flute!

    • I could have sworn it was Mitnick, but might have been someone else.



  • I don’t WANT to agree, but I kinda do.

    We’re here because Reddit was shit on top of shit, led by a gaping anus. We all accept that Meta is the same.
    We didn’t want Reddit profiting from our work. Meta will do the same, only more competently.

    Defederation is useless at scale. They can continually spin up new instances that act as spies and bridges to Meta’s area.
    Once enough Meta bridge nodes are woven into the Fedi, they’ll use a backchannel to mask the exchange/activity.

    Someone plz tell me I’m wrong, but this is how I think things work in the background…

    • Bob creates a Lemmy node - @Zucc1.ughfuckoff. It has 3 users and basically shops around until someone in lemmy.world’s sphere allows federation. Zucc1 looks like any random, small instance.
    • Once federated, Zucc1 syncs to its connected Lemmy instances - for now there is no Meta connection.
    • Zucc1 can then federate with a bunch of other instances, including Zucc2.
    • This repeats for a few weeks, infiltrating the Fedi. This could be happening now.
    • A new set of Lemmy nodes spin up and federate only with a portion of the spy instances. The spy instances don’t respect the federation rules, distributing portions of the Fedi sync back to the Meta-connected nodes, masking the source and destination.
    • Once signed posts are received by the spy nodes, usernames are swapped using a table synced by the spy and bridge instances (see the sketch after this list). @User1@T4server.threads becomes @User7@Zucc4.ughfuckoff.
      • The Threads user sees their message from @someone@lemmy.world (which can also be swapped if they worry Threads users care about any of this stuff).
      • The Lemmy user sees the message from @User@Zucc4.ughfuckoff.
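    (Purely hypothetical sketch of that swap-table step, using the made-up names from the list above - just to show how trivial the rewrite would be:)

    ```python
    # Hypothetical: the swap table the spy/bridge instances would keep in sync.
    SWAP_TABLE = {
        "@User1@T4server.threads": "@User7@Zucc4.ughfuckoff",
    }

    def mask_author(activity):
        # Rewrite the author field before relaying, so the Lemmy side only
        # ever sees the bridge identity, never the Threads one.
        author = activity["author"]
        activity["author"] = SWAP_TABLE.get(author, author)
        return activity

    post = {"author": "@User1@T4server.threads", "content": "hello fediverse"}
    relayed = mask_author(post)   # author is now "@User7@Zucc4.ughfuckoff"
    ```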

    Probably easy to combat when it’s one instance here and there. If it’s constant and automated, federating would have to be paused until the spies are weeded out and there’s a better detection strategy. If they get a big enough network going, they could all dip out at once, change identity, and refederate back in as the Fedi network flips out because of all the sync mismatches. Just more new nodes joining in. They have the source code, so they can act differently from other instances as long as it doesn’t cause problems.

    Is this a realistic scenario or am I way off base? I feel like it has to be one of the two.



  • My point is that this argument makes as much sense as what I wrote, so it’s encouraging that you think it’s ridiculous.

    “Versus” is a valueless delineation separating two subjects. There are two groups: the people of the Fediverse and the people not in the Fediverse. Neither one is good or bad, and in fact, many are a part of BOTH. That self-awareness cancels any perceived negativity. We’re all probably some level of “normie,” and I’ve never heard someone use that word without immediate laughter by all parties. Sure, maybe in the early 00s by grade school punks, but I don’t think anyone does or should care.

    The point you’re actually making, without articulating it well, is the lack of terminology for federated groups. No one wants to say, “I’m a member of a select set of federated Lemmy and Kbin instances within the larger Fediverse.” You want an affirmative set of terms so that the delineation can be made; you want to say, “The X have this, and the not(X) have that.” From there you can get to value judgements, based on the expression of X, and I’ll recognize your concerns. The ridiculousness of those terms not existing makes it VERY hard to claim intentional negativity/harm, because it simultaneously draws attention to the fact that group X in this case doesn’t have their shit together enough to come up with a nickname or shorthand.

    “You’re better than us? What are you?”
    “Well, you see, I’m a part of a federated network of…”
    (Looks up - everyone left)

    So, until someone comes up with some non-super-cringe terms for this wonderful mess, the discussion is a waste of everyone’s time. In the meantime, I suggest taking it on a case-by-case basis. If someone is offended, tell them that’s not intended because we don’t have OUR shit together, ask them what they prefer, and use that term around them.


  • I 100% agree that word is cringe and I’m totally into the fediverse for the long haul, but we have to address the pachyderm in the room: The word “Fediverse” is just as cringe.

    I, … I’m sorry. I can read it in a document, but the second a human being types it, I can’t take it seriously. I don’t care if folks want to shorten it to something like the FI (Federated Instances). Yes, there are other uses of the word “federate”, but it immediately sounds like a federal intraweb domain or a group of Star Trek policy makers.

    “Fediverse” is “netizen 2.0.”
    “Fediverse” is “cruising on the information superhighway Pro.”
    Please tell me I’m not alone in thinking this.


  • No no no, it’s stereotyping and prejudice when OTHER people do it to US. WE should tell THEM that THEY are US, and by saying this to OURSELVES we have said it to THEM, so that WE know that THEY know, but now THEY are a THEM again.

    YOU don’t get it. WE get it. YOU should all be like US where there is no YOU and US, there is only the WE that is YOU and US, but there is no YOU and US, there is only the WE that is YOU and US, but there is no YOU and US, there is only the WE that is YOU and US, but there is no YOU and US, there is only the WE that is YOU and US.

    Simple. See? You don’t? But, YOU must because there is no…


  • So you’re saying there are people who DO use “normies” and people that DON’T use “normies”. These are not two groups of people. Shit, I just joined this thread, so that makes ME one of YOU, and there’s OTHERS that aren’t here. Are WE the elitists? Or are THEY the “normies”? YOU said there’s no US or THEM, so EVERYONE is talking in this thread. ANYONE not in this thread must not exist because I know I exist, so YOU thread posters must exist, but wait, that makes ME an US and YOU a THEM.

    (I’m not trying to be snarky, but this argument is exactly as nonsensical.)


  • Yeah, I assume I’ll be wrestling with package managers regardless (I remember YaST having its own thing that didn’t always play nice with others), and supposedly Ubuntu was looking to move away from snaps, so that’s another major factor that will be changing soon.

    But Arch? I dunno, man. Younger me used to update shit daily and read changelogs, but current me lets stuff go a few months. I’m not sure that attitude or my level of comfort are quite at Arch levels. I’ll give it another look, though. Or maybe I’ll just go FreeBSD to spite everyone and embrace my masochism!