camenduru / text-generation-webui-colab
- Wednesday, 26 July 2023, 00:00:07
A colab gradio web UI for running Large Language Models
| Model | Info - Model Page |
|---|---|
| vicuna-13b-GPTQ-4bit-128g | https://vicuna.lmsys.org |
| vicuna-13B-1.1-GPTQ-4bit-128g | https://vicuna.lmsys.org |
| stable-vicuna-13B-GPTQ-4bit-128g | https://huggingface.co/CarperAI/stable-vicuna-13b-delta |
| gpt4-x-alpaca-13b-native-4bit-128g | https://huggingface.co/chavinlo/gpt4-x-alpaca |
| pyg-7b-GPTQ-4bit-128g | https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b |
| koala-13B-GPTQ-4bit-128g | https://bair.berkeley.edu/blog/2023/04/03/koala |
| oasst-llama13b-GPTQ-4bit-128g | https://open-assistant.io |
| wizard-lm-uncensored-7b-GPTQ-4bit-128g | https://github.com/nlpxucan/WizardLM |
| mpt-storywriter-7b-GPTQ-4bit-128g | https://www.mosaicml.com |
| wizard-lm-uncensored-13b-GPTQ-4bit-128g | https://github.com/nlpxucan/WizardLM |
| pyg-13b-GPTQ-4bit-128g | https://huggingface.co/PygmalionAI/pygmalion-13b |
| falcon-7b-instruct-GPTQ-4bit | https://falconllm.tii.ae/ |
| wizard-lm-13b-1.1-GPTQ-4bit-128g | https://github.com/nlpxucan/WizardLM |
| llama-2-7b-chat-GPTQ-4bit (4bit) | https://ai.meta.com/llama/ |
| llama-2-13b-chat-GPTQ-4bit (4bit) | https://ai.meta.com/llama/ |
| llama-2-7b-chat (16bit) | https://ai.meta.com/llama/ |
| llama-2-13b-chat (8bit) | https://ai.meta.com/llama/ |
| redmond-puffin-13b-GPTQ-4bit (4bit) | https://huggingface.co/NousResearch/Redmond-Puffin-13B |
According to the Facebook Research LLaMA license (a non-commercial bespoke license), we may not be able to use these models even with a Colab Pro account. However, Yann LeCun described the license as "GPL v3" (https://twitter.com/ylecun/status/1629189925089296386), so I am a little confused. Is it possible to use this with a non-free Colab Pro account?
https://www.youtube.com/watch?v=kgA7eKU1XuA
https://github.com/oobabooga/text-generation-webui (Thanks to @oobabooga ❤)
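Each colab above boots roughly the same recipe: clone the web UI, install its requirements, download a pre-quantized model, and launch `server.py`. A minimal sketch follows; the model repo is one example from the table, and the exact flags (`--wbits`, `--groupsize`, etc.) changed across text-generation-webui versions during 2023, so treat this as illustrative rather than a pinned recipe:

```shell
# Sketch of what the colab notebooks roughly do.
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# Download a 4-bit GPTQ model into models/ (example repo from the table):
python download-model.py TheBloke/stable-vicuna-13B-GPTQ

# Launch the Gradio chat UI with a public --share link, loading the
# model as 4-bit GPTQ with group size 128:
python server.py --share --chat \
  --model TheBloke_stable-vicuna-13B-GPTQ \
  --wbits 4 --groupsize 128
```

On a free Colab T4 (about 15 GB of VRAM), the 4-bit 13B variants fit; the 16bit and 8bit llama-2 entries in the table need the larger GPUs available on paid tiers.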
| Model | License |
|---|---|
| vicuna-13b-GPTQ-4bit-128g | From https://vicuna.lmsys.org: "The online demo is a research preview intended for non-commercial use only, subject to the model License of LLaMA, Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violations. The code is released under the Apache License 2.0." |
| gpt4-x-alpaca-13b-native-4bit-128g | https://huggingface.co/chavinlo/alpaca-native -> https://huggingface.co/chavinlo/alpaca-13b -> https://huggingface.co/chavinlo/gpt4-x-alpaca |
| llama-2 | https://ai.meta.com/llama/ (Llama 2 is available free of charge for research and commercial use.) |
Thanks to facebookresearch
Thanks to lmsys
Thanks to anon8231489123
Thanks to tatsu-lab ❤ for https://github.com/tatsu-lab/stanford_alpaca
Thanks to chavinlo
Thanks to qwopqwop200 ❤ for https://github.com/qwopqwop200/GPTQ-for-LLaMa
Thanks to tsumeone
Thanks to transformers
Thanks to gradio-app
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ
Thanks to Neko-Institute-of-Science
Thanks to gozfarb
Thanks to young-geng
Thanks to TheBloke
Thanks to dvruette
Thanks to ehartford
Thanks to mosaicml
Thanks to OccamRazor
Thanks to ausboss
Thanks to PygmalionAI
Thanks to notstoic
Thanks to WizardLM
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/WizardLM/WizardLM-13B-V1.1)
Thanks to meta-llama
Thanks to localmodels
Thanks to NousResearch ❤ for https://huggingface.co/NousResearch/Redmond-Puffin-13B