I’m a robotics researcher. My interests include cybersecurity, repeatable and reproducible research, open source robotics, and Rust programming.
Have you had any luck with projectors for coding? I’ve only ever used them for large mob-programming sessions, like during hackathons. I feel like the low contrast of projectors makes them hard to use with dark mode, not to mention the physical space requirements. :P
Still kind of sad that the transflective display technology demoed in the $100 laptop project from a decade or so ago never took off.
Personally, I’ve been happy using an LG TV for a single monitor setup. I have had to switch to KDE Plasma v6 for better font rendering given its unusual OLED subpixel layout, as well as for native HDR support. But it’s been nice to have a large physical font while still at default DPI. Although, I wouldn’t mind upgrading to 8K later when they get affordable, as the smallest 4K TVs at 42" happen to push the physical DPI down towards that of a mere 1440p panel.
Tagging an image is simply associating a string value with an image pushed to a container registry, as a human readable identifier. Unlike an image ID or image digest SHA, an image tag is only loosely associated, and can be remapped later to another image in the same registry repo, e.g. `latest`. Untagging is simply removing the tag from the registry, but not necessarily the associated image itself.
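As a minimal sketch of what that looks like at the registry level, here’s how a tag can be re-pointed over the OCI distribution HTTP API. The registry URL, repo name, and lack of auth here are all placeholder assumptions for illustration:

```python
# Minimal sketch: re-pointing a tag over the OCI distribution API.
# REGISTRY and REPO are hypothetical, and auth is omitted for brevity.
import requests

REGISTRY = "https://registry.example.com"  # hypothetical registry
REPO = "myteam/myimage"                    # hypothetical repo

# Fetch the manifest currently referenced by the "v1.2.3" tag.
resp = requests.get(
    f"{REGISTRY}/v2/{REPO}/manifests/v1.2.3",
    headers={"Accept": "application/vnd.oci.image.manifest.v1+json"},
)
resp.raise_for_status()

# Tagging is just a second name for the same content-addressed manifest:
# PUT the same manifest body under the new tag; no blobs are re-uploaded.
requests.put(
    f"{REGISTRY}/v2/{REPO}/manifests/latest",
    headers={"Content-Type": resp.headers["Content-Type"]},
    data=resp.content,
).raise_for_status()
```

Untagging support varies by registry: some accept a DELETE on the tag itself, while others only allow deleting the manifest by digest, which removes every tag pointing at it.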
Ah man, I’m with a project that already uses a poly repo setup and am starting an integration repo using submodules to coordinate the dev environment and unify it with CI/CD. Submodules have been great for introspection and versioning, rather than relying on some opaque configuration file to check out all the different repos at build time. I can click the submodule links on GitHub and be redirected right to the referenced commit, while many IDEs can also associate the respective git tag for each submodule when opening from the superproject.
I was kind of bummed to hear that worktrees don’t have full support for submodules. I haven’t used worktrees with this superproject yet, but what did you find about the incompatibility? Are certain porcelain commands just not supported, or do certain behaviors not work as expected? Have you tried the global git config (`submodule.recurse=true`) to enable recursing over submodules by default?
I fell for it. It took me a minute of game time to figure out what was up and double check today’s date.
Does the live ISO created by this process include the dependencies or kernel modules upon live boot? E.g. could I use this to create an ISO image that includes, or pre-bakes, any custom or necessary drivers for Nvidia GPUs or finicky Wi-Fi cards when booted as just a live USB? That could really help when you’d otherwise have a chicken-and-egg problem after a hard drive failure and no live USB to safe boot with working networking or display output.
I’m going to try and set one up for the rest of my project team. Looks like a neat way to simplify install setup.
I’m using a recent 42" LG OLED TV as a large, affordable PC monitor in order to get 4K@120Hz + HDR@10bit, which is great for gaming or content creation that can appreciate the screen real estate. Anything similarly sized, or even slightly smaller, in the proper PC monitor market costs way more for the same screen area and feature parity.
Unfortunately, such TVs rarely include anything other than HDMI for digital video input, despite the growing trend of connecting gaming PCs in the living room, like with fiber optic HDMI cables. I actually went with a GPU with more than one HDMI output so I could display to both TVs in the house simultaneously.
Also, having an API as well as a remote to control my monitor is kind of nice. Enough folks are using LG TVs as monitors in this midsize range that there are even open source projects to entirely mimic conventional display behaviors:
I also kind of like using the TV as a simple KVM with fewer cables. For example with audio, I can independently control volume and mux output to either speakers or multiple Bluetooth devices from the TV, without having to fiddle around with re-pairing Bluetooth peripherals to each PC or gaming console. That’s particularly nice when swapping from playing games on the PC to watching movies on a Chromecast with a friend over two pairs of headphones, while still keeping the house quiet for the family. That kind of KVM functionality and connectivity is still kind of a premium feature for modestly priced PC monitors. Of course, others find their own use cases for hacking the TV remote APIs:
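For a flavor of what those projects speak under the hood, here’s a rough sketch of the websocket-based SSAP protocol that webOS TVs expose. The TV address and client key are placeholders, it assumes the TV was already paired once, and real clients send a much fuller registration payload than shown:

```python
# Rough sketch of the SSAP websocket protocol exposed by LG webOS TVs.
# Assumes the TV was already paired once (so CLIENT_KEY is valid); real
# clients send a much fuller registration manifest than this.
import asyncio
import json

import websockets  # pip install websockets

TV_ADDR = "ws://192.168.1.50:3000"  # hypothetical TV address
CLIENT_KEY = "stored-pairing-key"   # saved from the initial pairing prompt

async def set_volume(level: int) -> None:
    async with websockets.connect(TV_ADDR) as ws:
        # Re-register using the key saved from the first pairing.
        await ws.send(json.dumps({
            "type": "register",
            "payload": {"client-key": CLIENT_KEY},
        }))
        await ws.recv()  # wait for the "registered" acknowledgement
        # Then fire ordinary requests, e.g. the volume mux mentioned above.
        await ws.send(json.dumps({
            "id": "volume-1",
            "type": "request",
            "uri": "ssap://audio/setVolume",
            "payload": {"volume": level},
        }))
        print(await ws.recv())

asyncio.run(set_volume(15))
```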
A while back, I tried looking into what it would take to modify Android to disable Bluetooth microphones for wireless headsets, allowing for call audio to be streamed via regular AAC or aptX, and for the call microphone to be captured from the phone’s internal mic. This would prevent the bit rate for call audio and microphone from being effectively halved when using the ancient HFP/HSP Bluetooth codecs, instead allowing the same call quality as when using a wired headset. This would help when multitasking with different audio sources, such as listening to music while hanging out on Discord, without the music being distorted by the lower bit rate of HFP/HSP. This would also benefit regular VoLTE, as the regular call audio quality already exceeds that of legacy Bluetooth headset profiles.
Although, I didn’t manage to tease apart the mechanics of the audio policy configuration files used by the Android Open Source Project, given the sparse documentation and vague commit history.
I’d certainly be fine with the awkwardness of holding up and speaking to my phone as if it were in speaker mode, while listening to the call over wireless headphones, in order to improve or even double the audio quality. I’ve always wondered what these audio policies fall back to when a Bluetooth device doesn’t have a headset profile, but it’s almost impossible to find high quality consumer grade Bluetooth headphones without a microphone nowadays.
For the call setting under Bluetooth audio devices, I really wish they would break out separate settings for using the audio device as a source or a sink for call audio. Sort of like how you can disable the HFP/HSP Bluetooth profiles for audio devices in Linux or Windows.
Similarly reported (in more detail) by TechCrunch:
For anybody wondering what the Mastodon security issue is - CVE-2023-36460: you can send a toot that creates a webshell on instances that process said toot. #CVE202336460 #TootRoot
Looks like they posted the process timelapse video of that artwork here:
I’ll have to check out their webcomic Pepper&Carrot. Thanks for the reference!
Image Transcription: Meme
A photo of an opened semi-trailer unloading a cargo van, with the cargo van’s rear door open revealing an even smaller blue smart car inside, with each vehicle captioned as “macOS”, “Linux VM”, and “Docker” respectively in decreasing font size. Onlookers in the foreground of the photo gawk as a worker opens each vehicle door, revealing a scene like that of Russian nesting dolls.
*I’m a human volunteer content transcriber and you could be too!*
That looks neat. Although I suspect this would succumb to the same cross post discoverability issues, where URLs pointing to the same video don’t match string for string. A better approach might be to facilitate inline embedding of HTML video players into Lemmy using browser extensions, where user scripts could preview YouTube links or rewrite them to the youtube-nocookie.com domain, allowing the Lemmy web UI to still avoid cross-origin scripts by default.
Found the full transcription of the video from the OP’s author:
Note to self: use `youtube.com` instead of `youtu.be` for better cross post detection and Lemmy integration
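A quick sketch of that normalization, in case anyone wants to script it; pure standard library, and the example URL is just a placeholder:

```python
# Sketch: rewrite youtu.be share links to their canonical youtube.com/watch
# form so duplicate/cross post detection can match URLs string for string.
from urllib.parse import parse_qsl, urlencode, urlsplit

def normalize_youtube(url: str) -> str:
    parts = urlsplit(url)
    if parts.netloc in ("youtu.be", "www.youtu.be"):
        # The short form carries the video ID in the path; fold any extra
        # query parameters (e.g. a timestamp) back into the long form.
        query = {"v": parts.path.lstrip("/"), **dict(parse_qsl(parts.query))}
        return "https://www.youtube.com/watch?" + urlencode(query)
    return url

print(normalize_youtube("https://youtu.be/dQw4w9WgXcQ?t=42"))
# https://www.youtube.com/watch?v=dQw4w9WgXcQ&t=42
```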
For programming tutorials, yep, I also prefer reading documentation instead. Although, it looks like the tutorial these folks put out doesn’t have much of anything you could copy from, like terminal commands, given it’s a recorded walkthrough of using the graphical web UI. YouTube also now allows searching the auto or manual transcription text, which is handy when creators forget to include timestamped chapters.
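That transcript text can even be grepped outside the YouTube UI; here’s a hedged sketch using the third-party youtube-transcript-api package, with the video ID and search term as placeholders:

```python
# Hedged sketch: search a video's transcript with the third-party
# youtube-transcript-api package (pip install youtube-transcript-api).
# Uses the long-standing classmethod form; newer releases also offer an
# instance-based API.
from youtube_transcript_api import YouTubeTranscriptApi

VIDEO_ID = "dQw4w9WgXcQ"  # placeholder video ID
for entry in YouTubeTranscriptApi.get_transcript(VIDEO_ID):
    if "deploy" in entry["text"].lower():
        print(f"{entry['start']:>6.0f}s  {entry['text']}")
```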
I suspect this comment was posted to spell out the meme for those unfamiliar, but I wanted to thank you for transcribing it into text for those who may be blind or visually impaired. With the loss of r/TranscribersOfReddit, I salute your contribution! Please keep at it!
https://www.theverge.com/2023/6/23/23771396/reddit-subreddit-community-transcribers-accessibility
Could you explain a little more on that? Just curious.