Cryptography nerd

Fediverse accounts:
Natanael@slrpnk.net (main)
Natanael@infosec.pub
Natanael@lemmy.zip

Lemmy moderation account: @TrustedThirdParty@infosec.pub - !crypto@infosec.pub

@Natanael_L@mastodon.social

Bluesky: natanael.bsky.social

  • 0 Posts
  • 148 Comments
Joined 6 months ago
Cake day: January 18th, 2025



  • The only viable competition to LIDAR is structured light (see Leap Motion; there are equivalent sensors for cars), which uses an IR source with patterned light and multiple high frame rate cameras to calculate depth from the reflections. In theory light field photography with special lenses is possible too, but it's far more computationally heavy for real-time use IIRC

    There are some safety issues with LIDAR at close range (it’s a laser! it can damage cameras, etc.), which is basically the main reason not to use it. But Tesla is dumb enough to try to replace it with cameras alone, without even using proper multi-camera techniques to calculate depth
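    For context on those multi-camera techniques: the core relation is plain stereo triangulation, where depth falls out of the pixel disparity between two calibrated cameras. A minimal illustrative sketch follows (the numbers and names are made up, not any particular vendor's pipeline):

```python
# Illustrative stereo-depth relation: depth = focal_length * baseline / disparity.
# Real automotive stereo pipelines add rectification, dense matching, and filtering.
def stereo_depth_m(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters of a point seen by two rectified, horizontally offset cameras."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity: point is at infinity or mismatched")
    return focal_length_px * baseline_m / disparity_px

# Example: ~1000 px focal length, 30 cm baseline, 20 px disparity -> 15 m range
print(stereo_depth_m(1000.0, 0.30, 20.0))
```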



  • This case didn’t cover the copyright status of outputs. The ruling so far is just about the process of training itself.

    IMHO the generative ML companies should be required to build a process for tracking the influence of distinct training samples on outputs, and to inform users of the potential licensing status

    Division of liability / licensing responsibility should depend on who contributes what to the prompt / generation. The less it takes for the user to trigger the model to generate an output clearly derived from a protected work, the more liability lies on the model operator. If the user couldn’t have known, they shouldn’t be liable. If the user deliberately used jailbreaks, etc., the user is clearly liable.

    You get a weird edge case when users unknowingly copy prompts containing jailbreaks, though

    https://infosec.pub/comment/16682120
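    To make the "influence tracking" idea above concrete, here's a hypothetical sketch of the crudest possible version: compare an output's embedding against training sample embeddings and surface the licenses of the closest matches. Every name here is made up, and real attribution methods (e.g. influence functions) are far heavier than this.

```python
# Hypothetical sketch only: nearest-neighbor embedding similarity as a crude stand-in
# for per-sample influence tracking. Real data attribution is much more involved.
import numpy as np

def closest_training_licenses(output_emb: np.ndarray,
                              training_embs: np.ndarray,
                              licenses: list[str],
                              top_k: int = 3) -> list[tuple[str, float]]:
    """Return (license, cosine similarity) for the training samples nearest the output."""
    norms = np.linalg.norm(training_embs, axis=1) * np.linalg.norm(output_emb)
    sims = training_embs @ output_emb / np.clip(norms, 1e-9, None)
    top = np.argsort(sims)[::-1][:top_k]
    return [(licenses[i], float(sims[i])) for i in top]

# Example with fake data: 4 training samples with known licenses
rng = np.random.default_rng(0)
embs = rng.normal(size=(4, 8))
print(closest_training_licenses(embs[2] + 0.05 * rng.normal(size=8), embs,
                                ["CC-BY", "All rights reserved", "MIT", "CC0"]))
```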