Supercharge Your LLMs Using SGLang For Inference: Why Speed and Efficiency Matter More Than Ever

RunPod is proud to partner with LMSys once again to put a spotlight on its inference engine, SGLang. LMSys has a storied history within the realm of language models, with prior contributions such as Chatbot Arena, which compares outputs from competing models; Vicuna, an open-source competitor to ChatGPT; and large datasets such as LMSYS-Chat-1M for studying model behavior. SGLang's inference engine is easy to set up and integrate, requiring little more than a few lines of code, and is designed to scale to workloads of all sizes.

Rethinking Inference Efficiency

With many large language model use cases, speed has often taken a backseat to inference quality. With the proliferation of LLM-as-a-service use cases, though, it's clear that token throughput is becoming just as important. Users now interact with models in a far more granular manner, where a slow response is much more evident and off-putting. Imagine how frustrated a user feels when they reach out to a company for technical support and the chatbot they have to navigate is overloaded and unresponsive. It's time to think not only about how much VRAM you're getting for your GPU spend, but also about your maximum tokens-per-second output. Sure, you could simply move to a higher GPU spec, but what if you could find a more efficient implementation on the hardware budget you already have?

The SGLang Team

SGLang was originally developed at LMSYS.org, led by Ying Sheng and Lianmin Zheng. They are the co-founders of LMSYS and have collaborated on many other projects there, including FastChat, Chatbot Arena, Vicuna, LMSYS-Chat-1M, and S-LoRA. They have also worked on other projects, such as FlexGen.

Lianmin states, "We've been using Runpod to develop SGLang and run benchmarks. With a variety of affordable GPU models available, it's easy to test our code. RunPod has significantly sped up our development process." RunPod offers dozens of different GPU specs in various generations, allowing developers access to all levels of hardware to ensure compatibility and usability over a wide variety of real-life use cases.

SGLang has been built by individuals with diverse backgrounds from all over the world. The core developers actively working on it are a passionate group that includes undergraduates from Shanghai Jiao Tong University, PhD students from Stanford, UC Berkeley, CMU, UCLA, and NTU, and engineers from tech companies like Databricks, X.ai, and ByteDance/TikTok.

How SGLang Works

The SGLang team has published a paper on arXiv explaining their methodology; the abstract is reproduced below:

Large language models (LLMs) are increasingly used for complex tasks that require multiple generation calls, advanced prompting techniques, control flow, and structured inputs/outputs. However, efficient systems are lacking for programming and executing these applications. We introduce SGLang, a system for efficient execution of complex language model programs. SGLang consists of a frontend language and a runtime. The frontend simplifies programming with primitives for generation and parallelism control. The runtime accelerates execution with novel optimizations like RadixAttention for KV cache reuse and compressed finite state machines for faster structured output decoding. Experiments show that SGLang achieves up to 6.4x higher throughput compared to state-of-the-art inference systems on various large language and multi-modal models on tasks including agent control, logical reasoning, few-shot learning benchmarks, JSON decoding, retrieval-augmented generation pipelines, and multi-turn chat. 

A quick review of the literature reveals the following highlights on just how SGLang achieves these benefits:

  1. Inference engines often "reinvent the wheel" on each generation call, even though the vast majority of the provided context window may be identical to the previous call, with only a small addition; this is especially true in long conversations. RadixAttention allows reuse of the KV cache, which means the engine is not starting from scratch on each run (a simplified illustration of the idea appears below).
  2. Better implementation of parallelism allows a more complete use of the GPU hardware, leading to fewer wasted cycles.
  3. The compressed finite state machine allows SGLang to take bigger "steps" when guiding the language model's output, especially through parts of the desired format that are fixed or highly predictable.

Essentially, SGLang makes better use of the hardware it is given than competing engines do, leading to large efficiency gains over other engines, all other things being equal.
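
To make the KV-cache reuse idea concrete, here is a deliberately simplified Python sketch. It is not SGLang's RadixAttention implementation, which operates on KV-cache tensors on the GPU using a radix tree; it only illustrates why sharing a common prompt prefix across calls saves work. All names in it are hypothetical.

# Illustrative only: a toy prefix cache showing why reusing shared prompt
# prefixes saves work. SGLang's RadixAttention does this for KV-cache
# tensors on the GPU via a radix tree; this sketch just counts tokens.

class ToyPrefixCache:
    def __init__(self):
        self.cache = {}           # prompt string we have already "processed"
        self.tokens_processed = 0

    def _longest_cached_prefix(self, prompt):
        # Walk backwards until we find a prefix we have already processed.
        for end in range(len(prompt), 0, -1):
            if prompt[:end] in self.cache:
                return prompt[:end]
        return ""

    def process(self, prompt):
        reused = self._longest_cached_prefix(prompt)
        new_part = prompt[len(reused):]
        self.tokens_processed += len(new_part.split())  # only pay for the new suffix
        self.cache[prompt] = True
        return len(reused), len(new_part)

cache = ToyPrefixCache()
conversation = "System: be helpful. User: hi. Assistant: hello."
cache.process(conversation)                              # first turn: full cost
cache.process(conversation + " User: tell me a joke.")   # later turns: only the delta
print("tokens processed so far:", cache.tokens_processed)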

Benefits of SGLang

These efficiency gains are truly where SGLang shines. The benchmarks speak for themselves, showing what the engine is capable of on an input dataset with prompt lengths uniformly distributed between 1 and 256 tokens:

  • 5,000 tokens per second (t/s) serving Llama3-8B bf16 on a single A100
  • Up to 4,000 t/s serving Llama3-70B bf16 on an 8xA100 cluster
  • Up to 10,000 t/s serving Llama3-70B fp8 on an 8xH100 cluster
  • Up to 2,500 t/s serving Llama3-405B fp8 on an 8xH100 cluster

On top of that, SGLang is completely open source under the Apache 2.0 license. This combination of speed, flexibility, and permissive licensing makes it suitable for enterprise-level applications serving a wide field of users, especially those requiring fast response times. SGLang is the fastest among popular open-source inference solutions on many benchmarks and is especially well suited to batch processing and synthetic data generation.
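
If you want a rough, client-side feel for tokens-per-second numbers on your own hardware, a small script like the one below can help. This is an illustrative sketch against the /generate endpoint covered in the setup section later in this article, not the methodology behind the official benchmarks above; the prompt list, token counts, and the meta_info fallback are assumptions you may need to adjust for your SGLang version.

# Rough client-side throughput check against a running SGLang server.
# Illustrative sketch only; not how the official benchmarks are run.
import time
import requests

SERVER = "http://localhost:30000"     # adjust to your pod or proxy address
prompts = ["Once upon a time,"] * 32  # arbitrary small batch of prompts

start = time.time()
generated_tokens = 0
for prompt in prompts:
    resp = requests.post(
        f"{SERVER}/generate",
        json={"text": prompt, "sampling_params": {"max_new_tokens": 64, "temperature": 0}},
        timeout=120,
    )
    resp.raise_for_status()
    # meta_info fields can vary by version; fall back to the requested cap if absent.
    meta = resp.json().get("meta_info", {})
    generated_tokens += meta.get("completion_tokens", 64)

elapsed = time.time() - start
print(f"~{generated_tokens / elapsed:.1f} output tokens/sec (sequential client, rough estimate)")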

These gains position SGLang as one of the engines of choice where response time is critical. Applications such as virtual assistants, real-time language translation, and even gaming have an increasing need for large language models. The game Infinite Craft is a great example: although it looks like a simple (if addictive) browser-based game, it is in fact driven entirely by numerous short-form LLM prompts, as described by creator Neal Agarwal: "I'm using the latest Llama 2 LLM from Facebook on the backend ... Every time someone tries to craft something novel, I ask Llama 2 with a prompt what the result should be." This is exactly the kind of direction, and the kind of token-throughput demand, that the general public will create for locally hosted LLMs, and these are the cases where the SGLang inference engine will excel.

Already, major organizations such as Databricks, ByteDance, UC Berkeley, and UCLA are using SGLang to serve LLMs to the public, for research, or for data pipelining. Some models served in the LMSYS Chatbot Arena are also powered by SGLang.

How to Get Started with SGLang on RunPod

You can install and run SGLang in any PyTorch-equipped pod on the platform. Just go to the My Pods page and spin up a pod of any type. Be sure to expose a port for the server to run on, which can be done on the Edit Pod screen (the default port the server uses is 30000). Installation instructions for the package itself can be found on SGLang's GitHub, but the easiest route is probably pip, as shown below:

pip install --upgrade pip
pip install "sglang[all]"

# Install FlashInfer CUDA kernels
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.3/
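
Once the install finishes, a quick sanity check in Python confirms that the package imports and that PyTorch can see your GPU. The __version__ attribute is an assumption here; if your build doesn't expose it, a clean import is still the main signal.

# Quick post-install sanity check.
import torch
import sglang

# __version__ is assumed to exist; a successful import is the main point.
print("sglang version:", getattr(sglang, "__version__", "unknown"))
print("CUDA available:", torch.cuda.is_available())
print("GPU:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none detected")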

After you install the package, you can start a server with the following command in a terminal:

python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000

If you do not already have the Meta-Llama-3-8B-Instruct model downloaded, the server will helpfully fetch it for you from Hugging Face. You can specify whichever model you would like to use by swapping out its Hugging Face path in this command.
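
The first launch can take a while because the model weights have to be downloaded. If you are scripting your setup, a small helper like the one below (an illustrative sketch, not part of SGLang) simply waits until the server's port starts accepting connections before you begin sending requests.

# Illustrative helper: block until the SGLang server's port accepts connections.
import socket
import time

def wait_for_server(host="localhost", port=30000, timeout_s=600):
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"Server is listening on {host}:{port}")
                return True
        except OSError:
            time.sleep(5)  # likely still loading the model; try again shortly
    raise TimeoutError(f"Server did not come up on {host}:{port} within {timeout_s}s")

wait_for_server()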

You can then send requests through cURL. localhost will work if you are running on the pod itself. You can also send requests through the proxy if your terminal session is somewhere other than the pod (just swap localhost out for the proxy address, e.g. https://8uxszoc3paq2fh-8888.proxy.runpod.net/). The SGLang team has also provided documentation for sampling parameters on their GitHub.

curl http://localhost:30000/generate \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Once upon a time,",
    "sampling_params": {
      "max_new_tokens": 16,
      "temperature": 0
    }
  }'
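
If you prefer Python to cURL, the same request can be made with the requests library. This mirrors the cURL call above, using the same /generate endpoint and sampling parameters.

# Python equivalent of the cURL request above.
import requests

response = requests.post(
    "http://localhost:30000/generate",  # or your RunPod proxy URL
    json={
        "text": "Once upon a time,",
        "sampling_params": {"max_new_tokens": 16, "temperature": 0},
    },
)
response.raise_for_status()
print(response.json()["text"])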

You can also connect through a Jupyter notebook once the server is running and send prompts for inference that way:

from sglang import function, system, user, assistant, gen, set_default_backend, RuntimeEndpoint

# Define a multi-turn conversation as an SGLang program. Each call to gen()
# asks the model for a completion and stores it under the given name.
@function
def multi_turn_question(s, question_1, question_2):
    s += system("You are a helpful assistant.")
    s += user(question_1)
    s += assistant(gen("answer_1", max_tokens=256))
    s += user(question_2)
    s += assistant(gen("answer_2", max_tokens=256))

# Point the SGLang frontend at the server started above.
set_default_backend(RuntimeEndpoint("http://localhost:30000"))

state = multi_turn_question.run(
    question_1="What is the capital of the United States?",
    question_2="List two local attractions.",
)

# Print the full conversation, then just the first generated answer.
for m in state.messages():
    print(m["role"], ":", m["content"])

print(state["answer_1"])

This and other examples can be found on the SGLang GitHub.
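
Recent SGLang releases also advertise an OpenAI-compatible API served by the same process. If the version you installed supports it (check the SGLang GitHub documentation for your release), a sketch along these lines should work with the official openai Python client; the "default" model name and the /v1 path are assumptions to verify against your version.

# Sketch assuming your SGLang version exposes an OpenAI-compatible /v1 API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # key is unused locally

completion = client.chat.completions.create(
    model="default",  # placeholder model name; check what your server expects
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of the United States?"},
    ],
    max_tokens=64,
)
print(completion.choices[0].message.content)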

Conclusion

SGLang isn’t just another inference engine; it’s a game-changer for anyone working with large language models. Looking to deploy large language models with blazing-fast inference speed and optimal efficiency? It's time to try out SGLang on RunPod. Using SGLang over other inference engines can lead to a marked decrease in your serverless billing thanks to the efficiency gains you'll enjoy: your requests will finish that much faster, and your users will be happier. We even have an experimental worker for SGLang that you can try out right now, with plans to roll it out to the wider platform shortly. How much do you think you might save?

SGLang is designed to be fast, easy to use, and scalable, with a focus on token throughput: a design point that will only become more relevant as locally hosted LLMs find their way into new applications. Its optimizations allow for an unprecedented level of efficiency in fielding user requests. Major organizations are already incorporating it into their processes, and it couldn't be easier to get up and running on RunPod and start serving your users, whether in a pod or in a serverless capacity.

If you'd like to discuss SGLang further, consider joining the RunPod Discord as well as the LMSys Discord, and review LMSys' body of work on their site at lmsys.org.