=====Frequently Asked Questions About Hosting Your Own AI=====
====1. WHAT MODEL DO I USE?!====
While this guide will attempt to point you in the right direction and save you some time finding a good model, **it is literally impossible to give a definitive answer.** There is no "best for", "best right now", "best among", or really any other kind of "best". It's "best" to let go of "best". ;-)

In order to pick a model, one must consider:
  - **Compatibility**: what formats you can use,
  - **Resources**: depending on your situation, you might be limited by GPU speed, VRAM, CPU speed, DRAM, disk space, or (less likely) bandwidth,
  - **Tradeoffs**: fast, cheap, good: choose at most two ("good" and "easy" draw from the same well),
  - **Surprises**: probably some other considerations, and **finally**,
  - **your use case**.

This grumpy pile of text is gradually turning into a guide--hopefully not too misguided--for selecting models.
  
===How do I know if my model is compatible with my system?===
There's no 100% guarantee, but we can at least reduce the chances of a wasted download.

The situation is really complicated, but this is a FAQ, so I'll keep it simple:
  - **Read the model card.** If it doesn't have one, don't download it. The model card is also the most likely place to find reasons a model might not work for you.
  - If you know the model will fit completely in VRAM, the best performance comes from GPTQ models. (2023-12; I haven't personally verified this.)
  - If the model will not fit **completely** in VRAM, you **cannot** use GPTQ; use GGUF instead. (A rough way to check fit is sketched below.)
    * GGUF comes in multiple quantization formats. You only need to download one. Use Q5_K_M if you're not sure.

More details on the formats can be found [[ai:formats-faq|here]].
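As a back-of-the-envelope sanity check before downloading: a quantized model's size is roughly parameter count × bits per weight ÷ 8, plus some runtime overhead for context and activations. Here's a minimal sketch of that arithmetic; the bits-per-weight figures and the 1.2× overhead factor are my rough assumptions, not measured values.

<code python>
# Rough check: will a quantized model fit entirely in VRAM?
# The effective bits-per-weight values below are approximations for
# common GGUF quants; the 1.2x runtime overhead (KV cache, activations)
# is an assumption and grows with context length.

BITS_PER_WEIGHT = {
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q8_0": 8.5,
}

def fits_in_vram(params_billions, quant, vram_gib, overhead=1.2):
    """Estimate whether a quantized model fits entirely in VRAM."""
    size_gib = params_billions * 1e9 * BITS_PER_WEIGHT[quant] / 8 / 2**30
    return size_gib * overhead <= vram_gib

# Example: a 13B model at Q5_K_M is ~8.6 GiB on disk
print(fits_in_vram(13, "Q5_K_M", 8))   # False: won't fit on an 8 GiB card
print(fits_in_vram(13, "Q5_K_M", 12))  # True: should fit on a 12 GiB card
</code>

If the check says no, that's your cue to use GGUF and offload some layers to the CPU rather than reaching for GPTQ.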

===Are there at least comparisons?===
Sure. If you find a good one, send it to me and I'll add a link here.

  * The [[https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard|HuggingFace leaderboard]] contains all kinds of scores you might care about.
    * **NOTE**: There is currently (2023-12) controversy about how useful the leaderboard is. This has to do with model contamination. (TODO: add "contamination" to the glossary and maybe make a page about it)
  * [[https://www.reddit.com/user/WolframRavenwolf/|u/WolframRavenwolf]] is the only Redditor I see posting in-depth comparisons of models. Their testing has a narrow focus and might not match your use case.
  * The [[https://nsfw-chatbot-rankings.web.app/#/|NSFW Chatbot Leaderboard]] exists.

===Can I try one before I download it?===
Yes. Nap does not know how. Please ask for edit permission and fill this section in. <3

===Are there any other shortcuts worth taking?===
I only know of one more: use a model that someone else in your situation is already using and already knows works well. **I'd like to collect a few //(dozen)// such reports here**, if possible. Please also include your hardware, your software, and the speed you get.

  * brain: Passes almost every AGI test given over a 40-year period. 90b tensors, 100t weights, runs on a completely proprietary stack. When it's thinking hard, it generates about 14-16 tokens/second. (It has almost been discovered [[https://www.reddit.com/r/totallynotrobots/comments/7ne308/comment/ds268zl/?utm_source=reddit&utm_medium=web2x&context=3|once]].)

=====Please read these few short paragraphs before diving into The Answers.=====
===Philosophy===
I very much subscribe to the "Stone Soup" philosophy of open-source. The problem is that everyone wants to be the person bringing the stone. But stone soup only needs one stone! We need tables, utensils, meat, vegetables, seasonings, firewood, and people to tend the fire and stir the pot and cut up the ingredients...
  
Please consider how many people have put how much time into generating and assembling this information. Yes, it's naptastic organizing the page (at least right now), but all the info is coming from other people. I do not want to scare people off from asking questions; otherwise I don't know what to put in the FAQ! But if you are going to bring questions, please also be willing to put some effort into figuring things out yourself, and report back when you have successes.
  
Important note: **YOU CAN USE AI TO HELP WRITE STUFF!!!** It's not cheating!
  
===Conduct===
    * (I'm not gonna switch to MediaWiki.)
  
And now, without further ado:
  
=====The Answers=====
====Getting Started====
**Know your goals.** It is **critical** that you know what you want your AI to do for you. Even better if you have it written down.
  
===What Things Can AI Do Right Now?===
  * LLMs generate text and code
    * They can integrate with... (fill this in plz)
  * Diffusers generate images
    * Upscaling
    * Fill-in and fill-out
    * Video
    * (anything else?)
  * Data format conversions
    * OCR ("optical character recognition", which is just a fancy way of saying "image-to-text")
    * Speech-to-text (partially a classification problem; might be better served with other tools)
    * Text-to-speech (though this might also be better served by other tools)
  * **lots of other stuff.**

===What CAN'T AI do right now?===
  * LLMs are still pretty bad at math.
  * Music generation is in its infancy.
  * OCR for music transcription is still a hilariously impractical idea.
  * **lots of other stuff.**
  
===What kind of hardware should I buy?===
It depends. (I know, I know... I hate that answer too, but it's the truth.)

Buying a CPU for inference is folly. The only advantage a CPU has is that it usually has more DRAM than the GPU has VRAM, so it can load larger models. The difference in inference speed is at least an order of magnitude in the GPU's favor. When choosing a GPU, the most important factor is how much VRAM it has.

  * For maximum ease and speed, buy Nvidia GPUs. They are really expensive, though.
  * For a reduced cost, more headaches, and fewer applications that currently support it, buy AMD. They're still pretty expensive.
  * Intel GPUs have the best price/VRAM ratio of the bunch, but there is almost no support. Getting them to work is (mostly) almost impossible, even for experienced system administrators.
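If you already have an Nvidia card and want to know how much VRAM you actually have to play with, here's a quick sketch (assuming a CUDA-enabled PyTorch install; other tools such as nvidia-smi report the same thing):

<code python>
import torch  # assumes a CUDA-enabled PyTorch build is installed

if torch.cuda.is_available():
    # Report the name and total VRAM of each visible GPU
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, "
              f"{props.total_memory / 2**30:.1f} GiB VRAM")
else:
    print("No CUDA GPU detected; you'd be doing CPU inference.")
</code>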
  
===What do all these terms mean?===
(nap definitely needs help with this)
    * need a glossary
  
===How do I do the thing?===
  * Start with <nowiki>README.MD</nowiki> for the software you want to use. Seriously.
  * Links to how-to's
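As one concrete example of "the thing": grabbing a single GGUF quantization file from HuggingFace with the huggingface_hub library. This is a sketch, not an endorsement of any particular model; the repo and file names below are made-up placeholders, so substitute the real ones from the model card you actually want.

<code python>
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Both names below are hypothetical placeholders; copy the real ones
# from the model card. Downloading just one .gguf file is the point:
# you do NOT need every quantization in the repo.
path = hf_hub_download(
    repo_id="SomeUser/SomeModel-13B-GGUF",   # placeholder repo
    filename="somemodel-13b.Q5_K_M.gguf",    # placeholder file
)
print("Saved to:", path)
</code>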
  
===How do I get help with the thing?===
    * Read <nowiki>README.MD</nowiki> for the software you want to use again. Seriously.
    * Discord servers