ai:faq · revised 2023/12/19 22:10 by naptastic (created 2023/11/13 18:19)
=====Frequently Asked Questions=====

====1. WHAT MODEL DO I USE?!====
While this guide will attempt to point you in the right direction and save you some time finding a good model, **it is literally impossible to give a definitive answer.** There is no universal "best for", "best right now", or "best among".

In order to pick a model one must consider:
  - **Compatibility** with your hardware and software.
  - **Resources**: how much memory and storage you have to work with.
  - **Tradeoffs** between speed, quality, and size.
  - **Surprises**: models don't always behave the way their cards advertise.
  - **your use case**.
This grumpy pile of text is gradually turning into a guide--hopefully not too misguided--for selecting models.
===How do I know if my model is compatible with my system?===
There's no 100% guarantee, but we can reduce the chances of a wasted download.

The situation is really complicated, but this is a FAQ, so I'll keep it simple:
  - **Read the model card.** If it doesn't tell you what you need to know, be wary.
  - If you know the model will fit completely in VRAM, the best performance comes from GPTQ models. (As of 2023-12; I haven't re-checked this recently.)
  - If the model will not fit **completely** in VRAM, you **cannot** use GPTQ; use GGUF instead.
    * GGUF comes in multiple quantization formats. You only need to download one. Use Q5_K_M if you're not sure.

More details on the formats can be found in [[ai:formats-faq]].
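As a rough sanity check before downloading, you can estimate a model's memory footprint from its parameter count and quantization. A minimal sketch, assuming approximate bits-per-weight averages for common GGUF quants and a ~10% overhead for context; the exact numbers vary by model and backend:

```python
# Rough rule of thumb: estimate a quantized model's size and check VRAM fit.
# Bits-per-weight figures are approximate averages (assumptions, not exact).
BITS_PER_WEIGHT = {
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q8_0": 8.5,
    "F16": 16.0,
}

def est_size_gb(params_billions, quant, overhead=1.10):
    """Estimated memory footprint in GB, with ~10% overhead for context."""
    weight_bytes = params_billions * 1e9 * BITS_PER_WEIGHT[quant] / 8
    return weight_bytes * overhead / 1e9

def fits_in_vram(params_billions, quant, vram_gb):
    """True if the estimate fits entirely in VRAM (full-offload territory)."""
    return est_size_gb(params_billions, quant) <= vram_gb

print(round(est_size_gb(7, "Q5_K_M"), 1))  # ~5.5 GB for a 7B model
print(fits_in_vram(7, "Q5_K_M", 8))        # True
print(fits_in_vram(13, "Q5_K_M", 8))       # False -> GGUF with partial offload
```

If the answer is "doesn't fit", that's the GGUF-instead-of-GPTQ case from the list above.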
===Are there at least comparisons?===
Sure. If you find a good one, send it to me and I'll add a link here.

  * [[https://
    * **NOTE**: There is currently (2023-12) controversy about how useful the leaderboard is. This has to do with model contamination. (TODO: add a "controversy" link.)
  * [[https://
  * [[https://
===Can I at least try one before I download it?===
Yes. Nap does not know how. Please ask for edit permission and fill this section in. <3

===Are there any other shortcuts worth taking?===
I only know of one more: use a model that someone else in your situation is already using and already knows works well. **I'd like to collect a few //(dozen)// such reports here**, if possible. Include what hardware and software you use and what speed you get.
  * brain: Passes almost every AGI test given over a 40-year period. 90b tensors, 100t weights, runs on a completely proprietary stack. When it's thinking hard, it generates about 14-16 tokens/second.
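If you send a speed report, a consistent tokens-per-second number helps. A minimal sketch of timing a generation yourself; `generate` here is a hypothetical stand-in for whatever call your backend actually exposes:

```python
import time

def tokens_per_second(generate, prompt):
    """Time one generation call and report decode speed.

    `generate` is a placeholder for your backend's API (an assumption,
    not a real library call); it must return the number of tokens produced.
    """
    start = time.perf_counter()
    n_tokens = generate(prompt)
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed
```

Report the number alongside the model name, quant level, and hardware so the comparisons mean something.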
=====Please read these few short paragraphs before diving into The Answers.=====
===Philosophy===
I very much subscribe to the "Stone Soup" philosophy of open-source. The problem is that everyone wants to be the person bringing the stone. But stone soup only needs one stone! We need tables, utensils, meat, vegetables, seasonings, firewood, and people to tend the fire and stir the pot and cut up the ingredients...
Please consider how many people have put how much time into generating and assembling this information. Yes, it's naptastic organizing the page (at least right now) but all the info is coming from other people. I do not want to scare people off from asking questions; otherwise I don't know what to put in the FAQ! But if you are going to bring questions, please also be willing to put some effort into figuring things out yourself.
Important note: **YOU CAN USE AI TO HELP WRITE STUFF!!!** It's not cheating!
===Conduct===
  * (I'm not gonna switch to MediaWiki.)
And now, without further ado:

=====The Answers=====
====Getting Started====
**Know your goals.** It is **critical** that you know what you want your AI to do for you. Even better if you have it written down.
===What CAN AI do right now?===
  * LLMs generate text and code
    * They can integrate with...
  * Diffusers generate images
    * Upscaling
    * Fill-in and fill-out
    * Video
    * (anything else?)
  * Data format conversions
  * OCR ("optical character recognition")
  * Speech-to-text (partially a classification problem; might be better served with other tools.)
  * text-to-speech (though this might be better served by other tools)
  * **lots of other stuff.**
===What CAN'T AI do right now?===
  * LLMs are still pretty bad with math.
  * Music generation is in its infancy.
  * OCR for music transcription is still a hilariously impractical idea.
  * **lots of other stuff.**
===What's the best ____?===
It depends. (I know, I know... I hate that answer too, but it's the truth.)
===What kind of hardware do I need?===
Buying a CPU for inference is folly. The only advantage a CPU has is that it usually has more DRAM than the GPU has VRAM, so it can load larger models. The difference in inference speed is at least an order of magnitude. When choosing a GPU, the most important factor is how much VRAM it has.

  * For maximum ease and speed, buy Nvidia GPUs. They are really expensive, though.
  * For reduced cost, more headaches, and fewer applications that currently support it, buy AMD. They're the middle option.
  * Intel GPUs have the best price/VRAM ratio of the bunch, but there is almost no support. Getting them to work is almost impossible, even for experienced system administrators.
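One way to see where that order-of-magnitude gap comes from: autoregressive decoding is usually memory-bandwidth-bound, so each generated token streams roughly the whole model through memory once. A back-of-the-envelope sketch; the bandwidth figures below are illustrative assumptions, not measurements:

```python
def decode_tokens_per_sec_bound(model_size_gb, mem_bandwidth_gb_s):
    """Upper bound on decode speed when generation is memory-bandwidth-bound:
    each new token requires streaming roughly the whole model once."""
    return mem_bandwidth_gb_s / model_size_gb

# Illustrative bandwidth figures (assumptions; check your own hardware):
cpu_bound = decode_tokens_per_sec_bound(5.0, 50.0)   # dual-channel DDR4-class
gpu_bound = decode_tokens_per_sec_bound(5.0, 900.0)  # high-end GPU VRAM
print(cpu_bound, gpu_bound)  # 10.0 vs 180.0 -> roughly an order of magnitude
```

Real systems fall below these bounds, but the ratio between them is why VRAM capacity and bandwidth dominate the buying decision.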
===What do all these terms mean?===
(nap definitely needs help with this)
  * need a glossary

===How do I do the thing?===
  * Start with <
  * Links to how-to's

===How do I get help with the thing?===
  * Read <
  * Discord servers
ai/faq.1699899592.txt.gz · Last modified: 2023/11/13 18:19 by naptastic