

If it’s a single, generated, “initial” commit that I actually want to keep (say, for example, I used the forge to generate a license file), then I would often rebase on top of it. Quick, and it doesn’t get rid of anything.
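Concretely, something like this (assuming the forge put its generated commit on `origin/main`, which is just a guess at the branch name):

```sh
git fetch origin
# replay my local commits on top of the forge-generated initial commit
git rebase origin/main
```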
Thank you!!!
Do you happen to have a good, informative link? Or perhaps a company name I could look up?
This sounds incredibly cool.
I think that’s the point of it. I also don’t think they’ve encountered every possible bug with it yet, so it might take a while before all the kinks get ironed out.
The big “win” that these uids already seem to bring is uniqueness of IDs for resources - the problems I’ve had in previous versions of Godot were of the form where an asset’s locally generated id doesn’t match up across the various scenes that use it.
The uids’ re-generation might be faulty, as they’re new to 4.4.
In theory the problem can still arise without using uids - it certainly has for me in the past, working on a Godot 4.2 project on two separate machines. Godot imported the same things twice, once independently on each machine, and so generated different ids for them. From what I can tell, the biggest mistake on my part was opening the project in the Godot editor before pulling the newest commits.
The best approach I’ve found so far has been to be very conscious about when a new scene is created, and to similarly be very mindful when merging git branches.
If a scene was added but not its .uid file (or a resource but not its .uid or .import), then whoever pulls the code and then opens their 4.4 editor will generate new uids on their end. This generation probably updates the scattered .tscn files that include and/or use the newly added scene. A rough sketch of the moving pieces is below.
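For reference, this is roughly what those files look like from memory (the uid string and names here are invented):

```
# player.gd.uid - generated next to the script, contains a single line:
uid://c4n8f0qlhxkme

# Any .tscn using the script references it by that uid as well as its path:
[ext_resource type="Script" uid="uid://c4n8f0qlhxkme" path="res://player.gd" id="1_p4yer"]
```

If the .uid file isn’t committed, each machine invents its own value for that one line, and the uid="…" references diverge from machine to machine.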
Sadly I don’t have an exact method or workflow to recommend beyond trying to do git pulls/fetches before opening the editor, especially when new scenes, nodes, or resources have been added to the project.
The whiplash from the two titles in your second pic really hit me hard
To extend your metaphor: be the squirrel in the digital forest. Compulsively bury acorns for others to find in time of need. Forget about most of the burial locations so that new trees are always sprouting and spreading. Do not get attached to a single trunk; you are made to dance across the canopy.
A second good read is her follow-up/response post: Re: Re: Bluesky and Decentralization
Thank you both (@NinjaFox@lemmy.blahaj.zone, @ChaoticNeutralCzech@feddit.org) for taking the time to make this post not just more accessible but somewhat more bit-/link-rot-resilient by duplicating the image’s info as a text comment.
We don’t talk about it as much as authoritarian censorship, IP & copyright-related takedowns, and their ilk, but image macros/memes often have regrettably short lifetimes as publicly accessible data in my experience. It might be for any number of reasons, including:
or (more probably) a combination of all three and more.
In any case as silly as image memes are, they’re also an important vector for keeping culture and communities alive (at least here on the fediverse). In 5-10 years, this transcription has a much higher chance of still hanging around in some instance’s backups than the image it is transcribing.
P.S.: sure, knowyourmeme is a thing, but they’re still only 1 website and I’m not sure there’s much recent fediverse stuff there yet. The Mastodon page was last updated in 2017 and conflates the software project with the mastodon.social instance (likely through a poor reading of its first source, an article from The Verge that’s decent but was written in 2017).
P.P.S.: ideally, OP (@cantankerous_cashew@lemmy.world) could add this transcription directly to the post’s alt text, but I don’t know if they use a client that makes that easy for them…
It’s a bit sad, but not that surprising, that if this is true then Microsoft is clearly not tasking their most experienced engineers with the control panel (you know, that part of the OS whose function is to allow you to tweak all the rest of the OS?).
I think downvote anonymity is the bigger part of the problem, not downvotes in general. Unless I’m misunderstanding, what you’re proposing amounts to “if you want to downvote in a community, you’ll need to make an account on its instance”. This would be a nice option to have, but it should also remain an option.
In your +50/-90 example, showing at least the instance provenance of votes allows for more (sub)cases. If I can see that 55 of the downvotes come from the instance hosting the community, that’s potentially a very different situation than if only 5 do. Or if 70 of the downvotes come from a pair of instances that aren’t the community host. The current anonymity of these downvotes flattens these nuances into the same “-40”, which I agree isn’t great when it can lead to deletion - but I’d argue that’s also an entirely separate problem that might be better addressed from a different angle.

I find that disabling downvotes from other instances entirely flattens things just as much if not more, just not in the same manner. Instead of wondering how representative a big upvote or downvote count is, I’m now wondering how representative a big upvote count is, period. That might seem like 50% less wondering, but with no downvotes at all it might also mean about 50% fewer votes.
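To make that flattening concrete, here’s a toy sketch in Python (the instance names and splits are invented; both cases match the +50/-90 totals above):

```python
# Two hypothetical per-instance (upvotes, downvotes) breakdowns
# that both flatten to the same overall score of -40.
case_a = {"host.example": (10, 55), "other.example": (40, 35)}
case_b = {"host.example": (45, 5), "other.example": (5, 85)}

def flat_score(votes):
    """The single number clients show today: sum of (up - down)."""
    return sum(up - down for up, down in votes.values())

for name, case in [("A", case_a), ("B", case_b)]:
    per_instance = {inst: up - down for inst, (up, down) in case.items()}
    print(name, flat_score(case), per_instance)

# A -40 {'host.example': -45, 'other.example': 5}
# B -40 {'host.example': 40, 'other.example': -80}
```

Same “-40” either way, but the per-instance signs tell two very different stories.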
I’m also not convinced that silencing negative outside contributions won’t just shift the echo-chamber-forming toward a form of toxic positivity and/or reddit-style reposts and joke comments.
Revealing which instances downvotes come from doesn’t prevent opinion downvotes, but it allows dulling their bite. The same is true for opinion upvotes.
From my understanding, votes are already more or less public on lemmy, between its implementation and what federation needs to function properly. At the very least, each instance knows how many votes it’s getting from the other instances. We should embrace the nuances federation brings to the problem instead of throwing them away entirely.
So much thought has been put into “how do we convey the different instances’ character and their relations to each other to new (potential) users in a way that doesn’t a) overload them and/or b) scare them away with content that rubs them the wrong way” in communities and posts like these, when potentially we just need to render more visible the data that is already present on the instance servers.
I’ll acknowledge up-front that the “just” in the previous sentence is carrying a lot of weight; data viz is not easy on the best of days and votes have so little screen real-estate to work with. On top of that, any UI feature that can make what I’m suggesting palatable and accessible to non-power users would also need to be replicated across most popular clients. They’re written in a motley assortment of programming languages and ecosystems, and range from targeting browsers to native smartphone OSes, so the development efforts would be difficult to share and carry over from one client to the next. Still, they’re called votes: there’s a lot of prior art in polling software and news coverage of elections from the past few years that should be publicly accessible (at least in terms of screenshots, stills, and videos of the UI, if not a working version of it to play around with).
On top of this, I don’t know how much effort this would require from the backend devs of lemmy (and kbin/mbin - I forget which is the survivor - and piefed, and any other threadiverse instance software I’m currently unaware of). I wouldn’t expect keeping track of vote provenance to prove immensely difficult, but it could cause some sort of combinatorial explosion in the overhead required by the different sorting algorithms proposed (I’m ignorant of how much they cache vs. how often they’re run, for lemmy for example).
I can’t foretell whether this would “solve” opinion downvotes on its own, but I do think it would contribute to the conditions necessary for people to drift away from its more toxic forms. It could also become another option for viewing feeds, on top of “subscribed”/“local”/“all” + the different vote rankings.
From what I understand, its origin in street racing was because Japanese drivers (specifically? it might have been Asian drivers more generally) were souping up cars to look pretty while still not running great. I’m hazy on the details and my google-fu is failing me - I wish I had a more precise answer, but overall I recall being bummed out at how even the origins of the term weren’t as clean as I had hoped.
The issue isn’t just local. “This is predicted to cascade into plunging property values in communities where insurance becomes impossible to find or prohibitively expensive - a collapse in property values with the potential to trigger a full-scale financial crisis similar to what occurred in 2008,” the report stressed.
I know this isn’t the main point of this threadpost, but I think this is another way in which allowing housing to be a store of value and an investment, instead of a basic right (i.e. instead of decommodifying it), sets us up for failure as a society. Not only does it incentivize hoarding and gentrification while the number of homeless people continues to grow, it completely tanks our ability to relocate - which is a crucial component of our ability to adapt to the changing physical world around us.
Think of all the expensive L.A. houses that just burned. All that value wasted, “up in smoke”. How much of those homes’ value came from supply and demand, and how much from their owners deciding to invest in their resale value? How much money, how much human time and effort, could have been invested elsewhere over the years - notably into the parts of a community that can more reliably survive displacement, like tools and skills? I don’t want to argue that “surviving displacement” should become an everyday focus; rather the opposite: decommodifying housing could relax the existing incentives to invest in a house’s market value. When your ability to live in a home goes from “mostly only guaranteed by how much you can sell your current home for” to “basically guaranteed (according to society’s current capabilities)”, people will more often decide to invest their money, time, and effort into literally anything other than increasing their house’s resale value. In my opinion, this would mechanically lead to a society that loses less to forest fires and many other climate “disasters”.
I have heard that Japan almost has a culture of disposable-yet-non-fungible homes: a house is built to last its builders’/owners’ lifetime at most, and when the plot of land is sold, the new owner will tear down the existing house to build their own. I don’t know enough to say how - or if - this ties into the archipelago’s relative overabundance of tsunamis, earthquakes, and other natural disasters, but from the outside it seems like many parts of the USA could benefit from moving closer to this Japanese relationship with homes.
Why do we even need a server? Why can’t I pull this directly off the disk drive? That way, if the computer is healthy enough that it can run our application at all, we don’t have dependencies that can fail and cause us to fail. And I looked around and there were no SQL database engines that would do that, and one of the guys I was working with says, “Richard, why don’t you just write one?” “Okay, I’ll give it a try.” I didn’t do that right away, but later on, there was a funding hiatus. This was back in 2000, and if I recall correctly, Newt Gingrich and Bill Clinton were having a fight of some sort, so all government contracts got shut down, so I was out of work for a few months, and I thought, “Well, I’ll just write that database engine now.”
Gee, thanks Newt Gingrich and Bill Clinton?! Government shutdown leads to actual production of value for everyone instead of just making a better military vessel.
Ooooh, that’s a good first test / “sanity check”!
May I ask what you are using as a summarizer? I’ve played around with locally running models from huggingface, but never did any tuning nor straight-up training “from scratch”. My (paltry) experience with the HF models is that they’re incapable of staying confined to the given context.
I’m not sure if this is how @hersh@literature.cafe is using it, but I could totally see myself using an LLM to check my own understanding along these lines: read a chapter, ask the LLM to summarize that chapter, then compare its summary against what I took away from the text.
Ironically, this exercise works better if the LLM “hallucinates”; noticing a hallucination in its summary is a decent metric for my own understanding of the chapter.
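For anyone wanting to try this exercise locally, here’s a minimal sketch using Hugging Face’s transformers (the model and file name are placeholder choices, not what anyone in this thread necessarily uses):

```python
# Minimal local summarizer sketch; facebook/bart-large-cnn is one common
# summarization checkpoint, picked here purely as an example.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

with open("chapter.txt") as f:  # hypothetical file holding one chapter's text
    chapter = f.read()

# Most summarization models have a small input window, so a real setup
# would chunk the chapter; this crude slice just keeps the sketch short.
result = summarizer(chapter[:3000], max_length=150, min_length=40, do_sample=False)
print(result[0]["summary_text"])
```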
My reading of the article is also that the anode is bonding with the protons (aka hydrogen nuclei) as part of the redox process to generate current.
I was going to try it out, and then the website asked me for my email :(
I don’t want a feed aggregator that has its own account, I want one that just lets me use my existing network/feed-specific accounts.
I imagine (/hope) that the email-for-signup is only while the software is in alpha/beta/unreleased, to help them get user feedback.
It just takes a little effort to filter through and reach the right people’s content. Otherwise, I don’t think completely withdrawing would be very beneficial in my industry and in the era I live in.
I have been thinking about this a lot. Wrestling with how much consumption I can allow myself to sustain, and how much I can allow myself to abstain from.
As more and more of the world around me is interfaced with through machines and/or the internet, I can’t just “take a break from computers” for a few days to give my brain a break from that environment anymore. From knowledge to culture, so much is being shared and transferred digitally today. I agree with the author that we can’t just ignore what’s going on in the digital spaces that we frequent, but many of these spaces are built to get you to consume. Just as one must go into the hotbox to meet the heaviest weed smokers, one shouldn’t stay in the hotbox taking notes for too long at once because of the dense ambient smoke. Besides, how do you find the stuff worth paying attention to without wading through the slop and bait? The web has become an adversarial ecosystem, so we must adapt our behavior and expectations to continue benefiting from its best while staying as safe as possible from its worst.
Some are talking about “dark forest”, and while I agree I think a more apt metaphor is that of small rural villages vs urban megalopolises. The internet started out so small that everyone knew where everyone else lived, and everyone depended on everyone else too much to ever think of aggressively exploiting anyone. Nowadays the safe gated communities speak in hushed tones of the less savory neighborhoods where you can lose your wallet in a moment of inattention, while they spend their days in the supermarkets and hyper-malls owned by their landlords.
The setup for Wall-E might take place decades or centuries from now, but it feels like it’s already happened to the web. And that movie doesn’t even show how the humans manage to rebuild Earth and their society; it just implies that they succeed, through the ending-credits murals.
From what I understand, you could unironically do this with a file system using BTRFS. You’d maybe need a `udev` rule to automate tracking when the “Power Ctrl+Z” gets plugged in.
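Roughly, the rule plus a snapshot hook could look like this (the device serial, paths, and subvolume layout are pure guesses, and it assumes /home is a BTRFS subvolume):

```
# /etc/udev/rules.d/99-power-ctrl-z.rules (hypothetical device serial)
ACTION=="add", SUBSYSTEM=="block", ENV{ID_SERIAL}=="PowerCtrlZ_0001", RUN+="/usr/local/bin/ctrlz-snapshot.sh"
```

```sh
#!/bin/sh
# /usr/local/bin/ctrlz-snapshot.sh - take a read-only BTRFS snapshot to roll back to
btrfs subvolume snapshot -r /home "/home/.snapshots/ctrlz-$(date +%Y%m%d-%H%M%S)"
```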