=====Frequently Asked Questions=====
====1. WHAT MODEL DO I USE?!====
While this guide will attempt to point you in the right direction and save you some time finding a good model for you, **it is literally impossible to give a definitive answer.** There is no "best for", "best right now", or "best among".

In order to pick a model one must consider:
  - **your use case**,
  - **what resources you have**,
  - **what formats you can use**, and
  - **what tradeoffs you're willing to make**.

This grumpy pile of text is gradually turning into a guide--hopefully not too misguided--for selecting models.
===How do I know if my model is compatible with my system?===
There's no 100% guarantee, but we can reduce the chances of a wasted download.

The situation is really complicated, but this is a FAQ, so I'll keep it simple:
  - **Read the model card.** If it doesn't ...
  - If you know the model will fit completely in VRAM, the best performance comes from GPTQ models. (2023-12; I haven't ...)
  - If the model will not fit **completely** in VRAM, you **cannot** use GPTQ; use GGUF instead.
    * GGUF comes in multiple quantization formats. You only need to download one. Use Q5_K_M if you're not sure.

More details on the formats can be found at [[ai:formats-faq]].
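As a rough pre-download sanity check, the fits-in-VRAM rule of thumb can be sketched numerically. This is a back-of-envelope estimate, not an authoritative tool: the bits-per-weight figures and the overhead allowance below are my own ballpark assumptions, so always compare against the actual file size on the model card.

```python
# Rough estimate: will a quantized model fit entirely in VRAM?
# BITS_PER_WEIGHT values are approximate averages for common llama.cpp
# quant types (ballpark figures; check the real file size).
BITS_PER_WEIGHT = {
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q8_0": 8.5,
    "F16": 16.0,
}

def model_size_gib(n_params_billion: float, quant: str) -> float:
    """Approximate weight size in GiB, ignoring KV cache and buffers."""
    bits = BITS_PER_WEIGHT[quant]
    return n_params_billion * 1e9 * bits / 8 / 2**30

def fits_in_vram(n_params_billion: float, quant: str, vram_gib: float,
                 overhead_gib: float = 1.5) -> bool:
    # overhead_gib: rough allowance for context/KV cache and runtime buffers
    return model_size_gib(n_params_billion, quant) + overhead_gib <= vram_gib

# Example: a 13B model at Q5_K_M on a 12 GiB card
print(round(model_size_gib(13, "Q5_K_M"), 1))   # ~8.6 GiB of weights
print(fits_in_vram(13, "Q5_K_M", 12))           # True
print(fits_in_vram(13, "Q5_K_M", 8))            # False
```

If the answer comes out False, the GGUF route with partial GPU offload is the safer bet.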
+ | |||
+ | ===Are | ||
Sure. If you find a good one, send it to me and I'll add a link here. | Sure. If you find a good one, send it to me and I'll add a link here. | ||
- | * [[https:// | + | |
- | * [[https:// | + | * **NOTE**: There is currently (2023-12) controversy about how useful the leaderboard is. This has to do with model contamination. (TODO: add " |
+ | | ||
+ | * [[https:// | ||
===Can I try one before I download it?===
Yes. Nap does not know how. Please ask for edit permission and fill this section in. <3
===Are there any other shortcuts worth taking?===
I only know of one more: use a model that someone else in your situation is already using and already knows works well. **I'd like to collect a few //(dozen)// such reports here**, if possible. Please also include your hardware, software, and the speed you get, if possible.

  * brain: Passes almost every AGI test given over a 40-year period. 90b tensors, 100t weights, runs on a completely proprietary stack. When it's thinking hard, it generates about 14-16 tokens/second.

=====Please read these few short paragraphs before diving into The Answers.=====
===Philosophy===
I very much subscribe to the "Stone Soup" philosophy of open-source. The problem is that everyone wants to be the person bringing the stone. But stone soup only needs one stone! We need tables, utensils, meat, vegetables, seasonings, firewood, and people to tend the fire and stir the pot and cut up the ingredients...
  * (I'm not gonna switch to MediaWiki.)
And now, without further ado:
=====The Answers=====
====Getting Started====
**Know your goals.** It is **critical** that you know what you want your AI to do for you. Even better if you have it written down.
===What CAN AI do right now?===
  * LLMs generate text and code
    * They can integrate with ...
  * Diffusers generate images
    * Upscaling
    * Fill-in and fill-out
    * Video
    * (anything else?)
  * Data format conversions
    * OCR
    * Speech-to-text (partially a classification problem; might be better served with other tools.)
    * Text-to-speech (though this might be better served by other tools)
  * **lots of other stuff.**
+ | |||
===What CAN'T AI do right now?===
  * LLMs are still pretty bad with math.
  * Music generation is in its infancy.
  * OCR for music transcription is still a hilariously impractical idea.
  * **lots of other stuff.**
===What kind of hardware do I need?===
It depends. (I know, I know... I hate that answer too, but it's the truth.)

Buying a CPU for inference is folly. The only advantage a CPU has is that it usually has more DRAM than the GPU has VRAM, so it can load larger models. The difference in inference speed is at least an order of magnitude. When choosing a GPU, the most important factor is how much VRAM it has.

  * For maximum ease and speed, buy Nvidia GPUs. They are really expensive, though.
  * For a reduced cost, more headaches, and fewer applications that currently support it, buy AMD. They're ...
  * Intel GPUs have the best price/VRAM ratio of the bunch, but there is almost no support. Getting them to work is (mostly) almost impossible, even for experienced system administrators.
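The "order of magnitude" gap between CPU and GPU inference mostly comes down to memory bandwidth: single-stream generation streams roughly the whole model once per token, so tokens/second is bounded by bandwidth divided by model size. A sketch of that arithmetic, with bandwidth numbers that are my own ballpark assumptions rather than figures from this wiki:

```python
# Back-of-envelope: token generation is mostly memory-bandwidth-bound,
# so peak single-stream speed ~= bandwidth / bytes streamed per token
# (approximately the whole model per token).

def max_tokens_per_second(bandwidth_gb_s: float, model_gb: float) -> float:
    """Optimistic upper bound on single-stream generation speed."""
    return bandwidth_gb_s / model_gb

DDR4_DUAL_CHANNEL = 50.0   # GB/s, typical desktop CPU (assumed figure)
RTX_3090_GDDR6X = 936.0    # GB/s, spec-sheet memory bandwidth

model_gb = 8.6             # e.g. a 13B model at Q5_K_M

print(round(max_tokens_per_second(DDR4_DUAL_CHANNEL, model_gb), 1))  # ~5.8
print(round(max_tokens_per_second(RTX_3090_GDDR6X, model_gb), 1))    # ~108.8
```

Real-world speeds are lower than these bounds, but the ratio between the two lines is why CPU-only inference feels at least an order of magnitude slower.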
===What do all these terms mean?===
(nap definitely needs help with this)
  * need a glossary

===How do I do the thing?===
  * Start with ...
  * Links to how-to's

===How do I get help with the thing?===
  * Read ...
  * Discord servers
ai/faq.1700857293.txt.gz · Last modified: 2023/11/24 20:21 by naptastic