The training is a huge power sink, but so is inference (i.e. generating the images). You are absolutely spinning up a bunch of silicon that's sucking back hundreds of watts for each image that's output, on top of the impacts of training the model.
That's not the case for the newer open-source drivers from Nvidia. They're only compatible with the last few generations of cards, but they're performant, and the only feature they lack is CUDA, to my knowledge. Not talking about nouveau here.
Haskell mentioned λ 💪 λ 💪 λ 💪 λ 💪 λ
You can make custom images of this with some software called BlueBuild. I base mine off the SecureBlue project, then tweak it for my needs.
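The gist is a recipe file in a git repo that BlueBuild turns into a bootable OCI image. Here's a rough sketch of what one looks like; the image name, tag, and package are placeholders I made up, so check the BlueBuild and SecureBlue docs for the real values:

```yml
# recipe.yml — a minimal BlueBuild recipe sketch (names are illustrative)
name: my-custom-image
description: SecureBlue-based image with my personal tweaks
# placeholder: substitute an actual SecureBlue variant image here
base-image: ghcr.io/secureblue/some-variant
image-version: latest
modules:
  # layer extra packages on top of the base image
  - type: rpm-ostree
    install:
      - distrobox   # example package, swap in whatever you need
```

You push that to a repo, the build runs in CI, and then you rebase your machine onto the resulting image.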
Yeah, you'd really only say it on the theoretical side of things. I've definitely heard it in research and academia, but even then people usually point to the particulars of their work first.