
I’d allow it, but then put some hidden white text in my resume to manipulate the AI into giving me a higher score.
Guns brought in through the border, toxic waste infecting the population, and now Trump wants to start a trade war with Mexico and other countries…
Frankly I wouldn’t be surprised if the US was responsible for like half of Mexico’s problems.
Cope. The idea always sucked and made no sense. (Also I just hate Zuck and hope he gets Luigi’d 🙏)
America will get there.
Add another right-wing extremist to the ever-growing pile:
https://www.start.umd.edu/data-tools/profiles-individual-radicalization-united-states-pirus
Knowing Nvidia’s exorbitant pricing, I think I’ll keep Intel’s Arc B580 in my wishlist.
You might just want to use Kaggle tbh
Eh, I’ll continue enjoying my yiff there. Once most of the artists & content creators I follow move to Bluesky I probably won’t have any reason to stick to Twitter though.
I heard that he’s an ethereal being from another dimension that has already faded away from our plane of existence, so the police are wasting their time looking for him and should close the case.
They did test those block towers to see if they were resistant to earthquakes, and they were still standing after a test comparable to the strongest earthquake recorded in California. Though I agree that, compared to the other options available, it does look way less safe and efficient.
Frankly at this point it’s a moral necessity
QnQ pwease down’t ask me abouwt Tiananmen Squawe, that’s vewy mean…
It’s a known problem - though of course, because these companies are trying to push AI into everything and oversell it to build hype and please investors, they usually try to avoid recognizing its limitations.
Frankly I think that now they should focus on making these models smaller and more efficient instead of just throwing more compute at the problem, and actually train them to completion so they’ll generalize properly and be more useful.
led by British media personality Andrew Doyle
The course’s reading list includes Doyle’s own books
So this is just state-sanctioned propaganda from some random guy
Maybe they should pay the crazy dude in the street corner who’s shouting about the end of the world to give a course as well
I wonder if Doyle is even going to give an actual definition of the term ‘woke’
Honestly I’m not surprised considering California politicians seem to have a knack for making money disappear into thin air with magic (corruption): https://www.independent.co.uk/news/world/americas/us-politics/gavin-newsom-homeless-money-budget-b2544347.html
Despite what the law might say, there’s no evidence whatsoever that letting trans people use their preferred bathroom causes any “injury or harm” to cis people.
On the contrary, there is evidence that restricting bathroom access is harmful to trans people - and cis people too, like Jay, a cis woman who was harassed in a bathroom after being mistaken for a trans person.
Even if they use the “right” restroom trans people are in danger of being harassed all the same if they pass too well: https://www.advocate.com/news/2022/7/12/trans-man-brutally-assaulted-using-womens-restroom-campground
Damned if you do, and damned if you don’t. The point of laws like this isn’t to protect cis women and girls, it’s just to cause as much suffering as possible, because that’s all Republicans care about.
I would like to propose some changes to that title:
Microsoft CEO’s pay rises 63% to $79m,
despite[because of] devastating year for layoffs: 2,550 jobs lost[employees were fired by their greedy CEO] in 2024 [because he wanted more money]
Conservatives have already said that they want to inspect children’s genitals, so it’s only a matter of time until they start saying that they want to regularly inspect women’s genitals as well to “protect unborn children” (read: control women and fulfill their sick fetish)
Not quite, since the whole thing with image generators is that they’re able to combine different concepts to create new images. That’s why DALL-E 2 was able to create images of an astronaut riding a horse on the moon, even though it never saw such images, and probably never even saw astronauts and horses in the same image. So in theory these models can combine the concepts of porn and children even if they never actually saw any CSAM during training, though I’m not gonna thoroughly test this possibility myself.
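For what it’s worth, composing concepts like that is just how you’d normally prompt one of these models. A minimal sketch, assuming the Hugging Face diffusers and torch packages and a publicly hosted Stable Diffusion checkpoint (the repo id here is just an example):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (example repo id; any SD 1.x
# checkpoint on the Hub would work the same way).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The prompt combines concepts that almost certainly never appear together
# in a single training image; the model composes them at generation time.
image = pipe("a photo of an astronaut riding a horse on the moon").images[0]
image.save("astronaut_horse_moon.png")
```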
Still, as the article says, since Stable Diffusion is publicly available, someone can train it on CSAM images on their own computer specifically to make the model better at generating them. Based on my limited understanding of the lawsuits that Stability AI is currently dealing with (1, 2), whether they can be sued for how users employ their models will depend on how exactly these cases play out, and, if the plaintiffs do win, whether their arguments can be applied outside of copyright law to cover harmful content generated with SD.
Well, they don’t own the LAION dataset, which is what their image generators are trained on. And to sue either LAION or the companies that use their datasets, you’d probably have to clear the very high bar of proving that they have CSAM images downloaded, know that those images are there, and have not removed them. It’s similar to how social media companies can’t be held liable for users posting CSAM to their websites if they can show that they’re actually trying to remove these images. Some things will slip through the cracks, but if you show that you’re actually trying to deal with the problem, you won’t get sued.
LAION actually doesn’t even provide the images themselves, only links to images on the internet, and they do a lot of screening to remove potentially illegal content. As they mention in this article, there was a report showing that 3,226 suspected CSAM images were linked in the dataset, of which 1,008 were confirmed by the Canadian Centre for Child Protection to be known instances of CSAM, with the rest being potential matches based on further analyses by the report’s authors. As they point out, there are valid arguments that this 3.2K figure could be either an overestimate or an underestimate of the true number of CSAM images in the dataset.
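To make the “links, not images” point concrete, here’s a minimal sketch of what working with a LAION metadata shard looks like, assuming pandas and a locally downloaded parquet file; the column names and NSFW labels follow the LAION-400M release and may differ in other versions, and the filename is just a placeholder:

```python
import pandas as pd

# One metadata shard: rows of URLs, captions and screening metadata.
# The dataset itself contains no image files at all.
shard = pd.read_parquet("laion_shard_00000.parquet")

print(shard[["URL", "TEXT", "NSFW"]].head())

# Keep only rows the screening classifier marked as unlikely to be NSFW;
# anyone training a model still has to download the actual images from the
# remaining URLs themselves (e.g. with a tool like img2dataset).
clean = shard[shard["NSFW"] == "UNLIKELY"]
print(f"kept {len(clean)} of {len(shard)} rows after the NSFW filter")
```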
The question then is whether any image generators were trained on these CSAM images before they were taken down from the internet, or whether there is unidentified CSAM in the datasets these models are being trained on. The truth is that we’ll likely never know for sure unless the aforementioned trials reveal some email where someone at Stability AI admitted that they didn’t filter potentially unsafe images, knew about CSAM in the data and refused to remove it, though for obvious reasons that’s unlikely to happen. Still, since the LAION dataset has billions of images, even if they are as thorough as possible in filtering CSAM, chances are that at least something slipped through the cracks, so I wouldn’t bet my money on them being able to infallibly remove 100% of it. Whether some of these AI models were then trained on such images depends on how they filtered potentially harmful content, or whether they filtered adult content in general.