

Eh the general consensus is that this launch is a small early adopter phase and they have cheaper versions in the pipeline. They’re only producing <100k units, definitely not going for mass adoption yet
They terk er jerbs
It’s a bit complex, and you can find a better answer elsewhere, but at its core a model is the set of “weights” and “biases” that make up the connections between the neurons in a neural network.
A model can include other things, but essentially it gives users the ability to run an “AI” like GPT, though models aren’t limited to natural language processing.
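To make the “weights and biases” idea concrete, here’s a toy two-layer network in NumPy. The shapes and random numbers are made up purely for illustration; a real model is the same thing with billions of these values.

```python
import numpy as np

# A "model" at its core: weights (W) and biases (b) per layer.
# Toy 2-layer network with made-up sizes, just to show the structure.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(3)  # layer 1: 4 inputs -> 3 hidden
W2, b2 = rng.standard_normal((3, 2)), np.zeros(2)  # layer 2: 3 hidden -> 2 outputs

def forward(x):
    h = np.maximum(0, x @ W1 + b1)  # ReLU activation on the hidden layer
    return h @ W2 + b2              # raw output values

print(forward(np.ones(4)).shape)  # -> (2,)
```

“Downloading a model” basically means downloading files full of those W and b numbers, which is why the files are so large.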
Yes, you can download the models and run them on your computer. Generally each repository has instructions, but in broad strokes it involves downloading the model, which can be very large, and running it with an existing ML framework like PyTorch.
It’s not a place for the layman right now, but with a few hours of research you could make it happen.
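As a rough sketch of what “running it with an existing framework” looks like, here’s the Hugging Face `transformers` pattern (assumes `pip install transformers torch`, and uses the small GPT-2 checkpoint as a stand-in since the big ones need serious RAM; the function name is mine, not a library API):

```python
# Hypothetical helper: load a model from the Hugging Face hub and generate text.
# The first call downloads the weights, which can take a while and a lot of disk.
def generate_text(prompt, model_name="gpt2", max_new_tokens=20):
    from transformers import AutoModelForCausalLM, AutoTokenizer  # heavy imports, done lazily
    tok = AutoTokenizer.from_pretrained(model_name)      # fetches tokenizer files on first run
    model = AutoModelForCausalLM.from_pretrained(model_name)  # fetches the weights themselves
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens)
    return tok.decode(out[0], skip_special_tokens=True)
```

Swap in a bigger model name once you know it fits in your memory; the calling pattern stays the same.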
I personally run several models that I got through huggingface on my computer; llama2, which is roughly comparable to gpt3, is the one I use the most.
Huggingface takes a bit of getting used to but it’s the place to find models and datasets, imo it may be one of the most important websites on the internet today.
I went all out and got the 192GB, and I’ve been using it to run local machine learning models successfully. Llama2 70b runs fairly well after quantizing to 16-bit instead of the original 32-bit, which ate all 192GB plus 40GB of swap before running out of system memory. Smaller models like the llama2 7b are wicked fast.
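The memory math behind that quantization step is simple: same number of weights, half the bytes each. Back-of-the-envelope for a 70B-parameter model (weights only, ignoring activations and overhead):

```python
# Why dropping from 32-bit to 16-bit floats halves the memory footprint:
# each weight goes from 4 bytes to 2 bytes, and there are ~70 billion of them.
params = 70_000_000_000
fp32_gb = params * 4 / 1e9   # 4 bytes per float32 weight
fp16_gb = params * 2 / 1e9   # 2 bytes per float16 weight
print(fp32_gb, fp16_gb)  # -> 280.0 140.0
```

So at full 32-bit precision the weights alone blow past 192GB + swap, while 16-bit fits with room to spare.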
Performance for normal development is simply divine: I can have basically every project I ever work on open across my dual 4k monitors without any slowdown, while simultaneously compiling and running models in the background without a stutter.
My biggest complaint so far is that my thunderbolt 4 dock doesn’t support the 144hz my monitors can crank out.
I have had one system crash so far, not sure of the cause, but overall stability has been impeccable.
I’m used to x86 machines, and one flaw with the apple silicon switch in general is that some of my react native libraries were compiled in a way that makes them difficult to build without rosetta; that’s obviously not apple’s problem, nor is it specifically a studio issue.
9k was incredibly painful, but I’m happy to have a machine that outperforms most retail machines on the market on vram and machine learning without spending even more.
One thing to consider: if this turned out to be accepted, it would make it much harder to prosecute actual csam, since people could claim “ai generated” for real images