Text Generation How to Easily Work with GGUF Quantizations In KoboldCPP Everyone wants more bang for their buck when it comes to their business expenditures, and we want to ensure you have as many options as possible. Although you could certainly load full-weight fp16 models, it turns out that you may not actually need that level of precision, and it may
Text Generation Having Trouble With Your LLM Not Following Prompts? Just Ask It What It Wants No two large language models behave exactly alike, and with over 250,000 separate models and quantizations available to download, it's just not possible to get intimately acquainted with more than a handful of them. Models all have different strengths and weaknesses, and some are even used "
Text Generation What You'll Need to Run Falcon 180B In a Pod September 6th was a momentous day in large language model history, as Falcon-180B was released by the Technology Innovation Institute. To date, this is the single largest open-source LLM released to the public (edging out BLOOM-176B from 2022). For quite some time, whether it was technical concerns or simply market
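As a rough, illustrative sketch of why a model this size is so demanding (the per-weight byte counts below are generic rules of thumb, not figures from the post itself), the weight footprint alone can be estimated from the parameter count and the precision:

```python
# Back-of-the-envelope weight footprint for a 180B-parameter model.
# Excludes KV cache, activations, and framework overhead; figures are approximate.
def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate size of the model weights in gigabytes."""
    return params_billions * bits_per_weight / 8  # billions of params * bytes per param

for label, bits in [("fp16", 16), ("8-bit", 8), ("4-bit", 4)]:
    print(f"{label:>5}: ~{weights_gb(180, bits):.0f} GB of VRAM for the weights alone")
```

Even at 4-bit quantization this works out to roughly 90 GB of weights, so a multi-GPU pod is needed before any context or overhead is accounted for.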
Text Generation Lessons While Using Generative Language and Audio For Practical Use Cases Generative AI makes developers' lives much easier - but by how much? I have been learning German for the past year, and one of the things I thought would be personally useful would be to generate many conversations in German - via voice, which would be extremely useful for me to
Text Generation The Effects Of Rank, Epochs, and Learning Rate on Training Textual LoRAs Have you ever wanted to have a large language model tell you stories in the voice and style of your favorite author? Well, through training a LoRA on their work, you can totally do that! There are so many different levers to flip when training a LoRA, though, and it
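For readers who want to see what those levers look like in practice, here is a minimal, hedged sketch using the Hugging Face peft and transformers libraries; the base model and the specific values for rank, alpha, epochs, and learning rate are illustrative assumptions, not recommendations from the post.

```python
# Minimal sketch of the main LoRA training levers: rank, epochs, learning rate.
# The concrete values below are illustrative assumptions, not recommendations.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Placeholder base model; swap in whichever model you are adapting.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=16,                 # rank: size of the low-rank update matrices
    lora_alpha=32,        # scaling factor applied to the LoRA update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # which attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

# These arguments would be handed to a transformers.Trainer along with your dataset.
training_args = TrainingArguments(
    output_dir="lora-out",
    num_train_epochs=3,        # more epochs = closer fit to the training text
    learning_rate=2e-4,        # higher values learn the style faster but risk degradation
    per_device_train_batch_size=4,
)
```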
RunPod Roundup RunPod Roundup 2 - 32k Token Context LLMs and New StabilityAI Offerings Welcome to the RunPod Roundup for the week ending July 29, 2023. In this issue, we'll be discussing the newest advancements in AI models over the past week, with a focus on new offerings that you can run in a RunPod instance right this second. In this issue,
Text Generation Meta and Microsoft Release Llama 2 LLM as Open Source When the original LLaMA was released earlier in 2023 by Meta, it was only provided to the research community. With the next iteration, Meta, in collaboration with Microsoft, appears to have had a change of heart and has released it as an open-source model for anyone to use. Here's
Text Generation How To Install SillyTavern in a RunPod Instance While some might prefer the simple text-based entry of an Oobabooga interface, others might want something a little more robust. SillyTavern offers a number of additional features above and beyond most methods of interfacing with an LLM, and there's been some demand for getting it set up and
Text Generation 16k Context LLM Models Now Available On RunPod Hot on the heels of the 8192-token context SuperHOT model line, Panchovix has now released another set of models with an even higher context window, matching the 16384 token context possible in the latest version of text-generation-webui (Oobabooga). Such a large context window is going to vastly improve performance in
tldr A Deep Dive Into Creating an Effective TavernAI Character Roleplay has become a surprisingly popular use of AI over the past year, with entire services popping up devoted specifically to interacting with characters, fictional or not. It's begun to breach the realm of academic writing (such as this paper on ArXiv on LLM RP from EleutherAI from
Text Generation How To Use Very Large Language Models with RunPod - 65b (and higher) models Many LLMs (such as the classic Pygmalion 6b) are small enough that they can fit easily in almost any RunPod GPU offering. Others, such as Guanaco 65B GPTQ, are quantized, a compression method that reduces memory usage, meaning that you will be able to fit the model into
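To make the memory savings concrete, here is an illustrative fit check; the 20% overhead margin and the GPU sizes listed are assumptions for demonstration, not official sizing guidance.

```python
# Illustrative fit check: does a quantized model's weight footprint fit in a GPU's VRAM?
# The overhead margin and GPU sizes are demonstration assumptions only.
def fits(params_b: float, bits_per_weight: float, vram_gb: float, overhead: float = 1.2) -> bool:
    weights_gb = params_b * bits_per_weight / 8  # billions of params * bytes per param
    return weights_gb * overhead <= vram_gb

for vram in (24, 48, 80):
    print(f"65B @ 4-bit on a {vram} GB GPU: {'fits' if fits(65, 4, vram) else 'does not fit'}")
```

A 65B model at 4-bit works out to roughly 33 GB of weights, which is why it needs a 48 GB-class card (or larger) rather than a 24 GB one.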
Text Generation SuperHOT 8k Token Context Models Are Here For Text Generation Esteemed contributor TheBloke has done it again, and textgen enjoyers everywhere now have another avenue to further increase their AI storytelling partner's retention of what is occurring during a scene. Available on his Hugging Face page are quantizations of several well-known models, including but not limited to the following:
Text Generation KoboldAI - The Other Roleplay Front End, And Why You May Want to Use It As many past blog entries have been written on Oobabooga/text-generation-webui, we would be remiss if we failed to mention another much-loved frontend available for use on RunPod that may be of significant value to anyone interested in writing or roleplaying with an AI. KoboldAI comes
Text Generation Breaking Out Of The 2048 Token Context Limit in Oobabooga Since its inception, Oobabooga has had a hard upper context limit of 2048 tokens on how much it can consider. Because this buffer includes everything in the Chat Settings panel, including the context, greeting, and any recent entries in the chat log, it can very quickly fill up to the
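As a hedged illustration of how quickly that buffer fills, the sketch below counts the tokens used by a character definition, greeting, and chat log against the 2048-token limit; the tokenizer name and example strings are placeholders.

```python
# Illustrative token budgeting for a 2048-token context window.
# The tokenizer and example strings are placeholders for demonstration.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PygmalionAI/pygmalion-6b")

character_context = "Aria is a cheerful travelling bard who ..."    # Chat Settings: context
greeting = "Well met, traveller! What song shall I play for you?"   # Chat Settings: greeting
chat_log = [
    "You: Tell me about the northern mountains.",
    "Aria: Ah, the Frostpeaks! ...",
]

used = sum(len(tokenizer.encode(text)) for text in [character_context, greeting, *chat_log])
print(f"{used} of 2048 tokens used; {2048 - used} left for new exchanges")
```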
Text Generation How to Work With Long Term Memory In Oobabooga and Text Generation As fun as text generation is, there is regrettably a major limitation: Oobabooga can currently only comprehend 2048 tokens' worth of context, due to the quadratically growing compute required as more tokens are considered. This context consists of everything provided on the Character tab, along with as
Text Generation Pygmalion-7b from PygmalionAI has been released, and it's amazing Last month, the latest iteration of the Pygmalion model was released. Although it is not that much larger, still only a 7b model compared to the commonly used 6b version, what it does with that parameter space has improved by leaps and bounds, especially with
Stable Diffusion Use DeepFloyd To Create Actual English Text Within AI! If you've ever tried to generate text in images in packages like Stable Diffusion, you're probably familiar with the positively haunting facsimile of language it manages to produce. It looks so much like it could be a real language, yet ultimately amounts to gibberish -
Text Generation The Beginner's Guide to Textual Worldbuilding With Oobabooga and Pygmalion Pygmalion is an unfiltered chatbot AI model that you can interact with to ask questions, talk to for fun, or even roleplay with. One of the best parts about Pygmalion is that it is capable of "learning" over time in that it will refer to its available output
News Spin up a Text Generation Pod with Vicuna And Experience a GPT-4 Rival Why use Vicuna? The primary benefit of Vicuna is that it has a level of performance rivaled only by ChatGPT and Google Bard. The model has been tested across a wide variety of scenarios, including Fermi problems, roleplay, and math tasks, and an evaluation framework with GPT-4 as the judge showed that
Text Generation Setting up a ChatBot with the Oobabooga Text Generation WebUI template In this post we'll walk through setting up a pod on RunPod using a template that will run Oobabooga's Text Generation WebUI with the Pygmalion 6B chatbot model, though it will also work with a number of other language models such as GPT-J 6B, OPT, GALACTICA,
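Once the pod is running, you would typically talk to it either through the web interface or over its API. The following is a hypothetical sketch only: it assumes the webui was started with its API enabled on port 5000 and reachable through the RunPod proxy, and the endpoint path and payload shape vary between webui versions.

```python
# Hypothetical sketch: query a running text-generation-webui pod over its API.
# Assumes the API extension is enabled and port 5000 is exposed through the
# RunPod proxy; the endpoint and payload shape differ between webui versions,
# so treat this as illustrative rather than exact.
import requests

POD_URL = "https://your-pod-id-5000.proxy.runpod.net"  # placeholder pod address

payload = {
    "prompt": "You are a friendly assistant.\nUser: Hello!\nAssistant:",
    "max_new_tokens": 120,
    "temperature": 0.7,
}
resp = requests.post(f"{POD_URL}/api/v1/generate", json=payload, timeout=120)
print(resp.json()["results"][0]["text"])
```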