ai:formats-faq
  * Intel: Newer than (???)
  * AMD: Zen architecture.
  * William Schaub has [[https:// ]]
  * Most users have more CPU-attached DRAM than GPU-attached VRAM, so more models can run via CPU inference.
  * CPU/DRAM inference is orders of magnitude slower than GPU/VRAM inference. (More info needed.)
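The DRAM-vs-VRAM point above comes down to simple arithmetic: a model's memory footprint is roughly its parameter count times the bytes per weight, plus some runtime overhead. A minimal sketch of that rule of thumb (the function name and the 1.2 overhead factor are illustrative assumptions, not from this page; real overhead depends on context length and runtime):

```python
def model_ram_gb(n_params_billion, bits_per_weight, overhead=1.2):
    """Rough RAM/VRAM estimate (GiB) for loading model weights.

    overhead is an assumed ~20% margin for KV cache and runtime
    buffers; actual usage varies with context length and backend.
    """
    bytes_needed = n_params_billion * 1e9 * bits_per_weight / 8 * overhead
    return bytes_needed / 2**30

# A 7B model quantized to 4 bits per weight needs roughly 4 GiB,
# which fits in typical DRAM but not in many consumer GPUs' VRAM.
print(round(model_ram_gb(7, 4), 1))  # → 3.9
```

So a machine with 32 GB of DRAM can load models that an 8 GB GPU cannot, which is why CPU inference opens up more models even though it is far slower.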
ai/formats-faq.1700976720.txt.gz · Last modified: 2023/11/26 05:32 by naptastic