• 3 Posts
  • 25 Comments
Joined 2 years ago
Cake day: June 3rd, 2023

  • I wanted to like Mastodon but couldn’t. The only reason I used microblogging services like Twitter was to shitpost about Vampire: The Masquerade. Said game includes lots of death, blood, and other topics that make some folks uncomfortable. On Twitter, the atmosphere was very “don’t like, don’t read”, but Mastodon has an intense culture about using content warnings on anything that might make someone marginally uncomfortable. I’m cool with that, but I can’t do it on my shitposting or it sort of ruins the joke. Bluesky doesn’t have that atmosphere.
  • That’s why Sansar not allowing it really surprised me. SL has a decent grip on moderating and working with sexual content, something most companies don’t have. The fear of sexual content and how to manage it shouldn’t be something Linden Lab grapples with. I guess they wanted to test whether their product was appealing without it and ultimately the answer was no.
  • It’s not going to replace actual dedicated writers, but it’s definitely going to hinder people learning to write and make up a large portion of the text online. It may also make it harder for actual writers to be found in all the noise. I heard a little while back about a sci-fi magazine that had to close its submissions because it was getting too many AI-written stories, and sorting the real from the fake was becoming too difficult for them.

    As for who’s going to train the AI, that’s part of what I’m arguing here - future LLMs are going to wind up being trained on AI-generated text because there will be so much of it online that screening it out becomes near impossible. Reddit mods already have challenges screening ChatGPT bots out of their comments. When a future LLM scrapes the web for written words, it’ll come back with lots of garbage AI text, which will taint its learning pool. AIs will learn from AIs and become worse for it.