      * Intel: Newer than (???)
      * AMD: Zen architecture.
      * William Schaub has [[https://blog.longearsfor.life/blog/2023/11/26/building-pytorch-for-systems-without-avx2-instructions/|this blog post]] for people who don't have AVX2 support. He adds: "I ended up doing the same for torchaudio and torchvision because it turns out that the C++ API ended up mismatched from the official packages. It's the same process except no changes needed in the cmake config."
    * Most users have more CPU-attached DRAM than GPU-attached VRAM, so more models can run via CPU inference.
    * CPU/DRAM inference is orders of magnitude slower than GPU/VRAM inference. (More info needed.)
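In practice the CPU cut-offs above come down to AVX2 support, which is what the linked blog post works around. A quick way to check on Linux, as a sketch (the messages in the two branches are illustrative, not from any tool):

```shell
# Check whether this CPU advertises AVX2 (Linux-only; reads /proc/cpuinfo)
if grep -q avx2 /proc/cpuinfo; then
  echo "AVX2 present: prebuilt PyTorch packages should work"
else
  echo "No AVX2: see the blog post above for building from source"
fi
```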
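The speed gap is mostly a memory-bandwidth story: generating each token streams essentially all of the model's weights through the processor, so tokens/sec is roughly bandwidth divided by model size. A back-of-envelope sketch with assumed (not measured) bandwidth figures:

```python
# Rough estimate: tokens/sec ~ memory bandwidth / model size in bytes.
# All numbers below are illustrative assumptions, not benchmarks.
model_bytes = 7e9 * 0.5            # 7B parameters at 4-bit quantization ~ 3.5 GB
dram_bandwidth = 50e9              # assumed dual-channel DDR4, ~50 GB/s
vram_bandwidth = 900e9             # assumed high-end GPU VRAM, ~900 GB/s

cpu_tps = dram_bandwidth / model_bytes
gpu_tps = vram_bandwidth / model_bytes
print(f"CPU: ~{cpu_tps:.0f} tok/s, GPU: ~{gpu_tps:.0f} tok/s, "
      f"ratio ~{gpu_tps / cpu_tps:.0f}x")
```

Real-world ratios vary with quantization, batch size, and hardware, but the bandwidth gap is why the same model feels so much slower on CPU.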
ai/formats-faq.1700976720.txt.gz · Last modified: 2023/11/26 05:32 by naptastic