ai:faq
  * [[https://
  * **NOTE**: There is currently (2023-12) controversy about how useful the leaderboard is. This has to do with model contamination. (TODO: add "
  * [[https://
**Know your goals.** It is **critical** that you know what you want your AI to do for you. Even better if you have it written down.
===What can AI do?===
  * LLMs generate text and code
  * They can integrate with...
  * Diffusers generate images
    * Upscaling
    * Fill-in and fill-out
    * Video
    * (anything else?)
  * Data format conversions
    * OCR ("optical character recognition")
    * Speech-to-text (partially a classification problem; might be better served by other tools.)
    * Text-to-speech (though this might also be better served by other tools.)
  * **lots of other stuff.**
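To make "LLMs generate text and code" concrete, here is a minimal sketch of local inference using the llama-cpp-python library. The model path is a placeholder, not a real file; substitute any GGUF model you have downloaded.

```python
# Minimal local text-generation sketch using llama-cpp-python.
# "models/your-model.gguf" is a placeholder path, not a real file.
try:
    from llama_cpp import Llama
except ImportError:  # not installed: pip install llama-cpp-python
    Llama = None


def generate(prompt: str,
             model_path: str = "models/your-model.gguf",
             max_tokens: int = 128) -> str:
    """Load a local GGUF model and return a completion for the prompt."""
    if Llama is None:
        raise RuntimeError("Install llama-cpp-python first.")
    llm = Llama(model_path=model_path, n_ctx=2048, verbose=False)
    result = llm(prompt, max_tokens=max_tokens)
    return result["choices"][0]["text"]
```

The same pattern (load a local model, feed it a prompt, read back generated text) applies whatever frontend you end up using.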
===What can't AI do?===
  * LLMs are still pretty bad at math.
  * Music generation
    * OCR for music transcription is still a hilariously impractical idea.
  * **lots of other stuff.**
===What kind of hardware do I need?===
It depends. (I know, I know... I hate that answer too, but it's the truth.)

Buying a CPU for inference is folly. The only advantage a CPU has is that it usually has more DRAM than the GPU has VRAM, so it can load larger models; the difference in inference speed is at least an order of magnitude in the GPU's favor. When choosing a GPU, the most important factor is how much VRAM it has.

  * For maximum ease and speed, buy Nvidia GPUs. They are really expensive, though.
  * For a reduced cost, more headaches, and fewer applications that currently support it, buy AMD. They'
  * Intel GPUs have the best price/VRAM ratio of the bunch, but there is almost no software support. Getting them to work is almost impossible, even for experienced system administrators.
===What do all these terms mean?===
(nap definitely needs help with this)
  * need a glossary
===How do I do the thing?===
  * Start with <
  * Links to how-to'
===How do I get help with the thing?===
  * Read <
  * Discord servers
ai/faq.1701117705.txt.gz · Last modified: 2023/11/27 20:41 by naptastic