
Meet PowerInfer: A Fast Large Language Model (LLM) on a Single Consumer-Grade GPU that Speeds up Machine Learning Model Inference By 11 Times

https://github.com/SJTU-IPADS/PowerInfer

Generative Large Language Models (LLMs) are well known for their remarkable performance across a variety of tasks, including complex Natural Language Processing (NLP), creative writing, question answering, and code generation. Recently, LLMs have increasingly been run on accessible local systems, including home PCs with consumer-grade GPUs, for improved data privacy, customizable models, and lower inference costs. Local installations prioritize low latency over high throughput; however, LLMs are difficult to deploy on consumer-grade GPUs because of their high memory requirements.

These models, which are frequently autoregressive transformers, produce text token by token and, for each inference step, need access to the complete model with hundreds of billions of parameters. This limitation is especially noticeable in local deployments, where individual requests leave little opportunity for the batching and parallel processing that amortize memory costs on servers. Two current strategies for dealing with these memory problems are offloading, which moves parts of the model to CPU memory or disk, and model compression, which shrinks the weights themselves.

In a recent study, a team of researchers presented PowerInfer, an effective LLM inference system designed for local deployments using a single consumer-grade GPU. PowerInfer reduces the requirement for expensive PCIe (Peripheral Component Interconnect Express) data transfers by preselecting and preloading hot-activated neurons onto the GPU offline and using online predictors to identify active neurons during runtime. 

The core idea behind PowerInfer’s design is to exploit the high locality inherent in LLM inference, which is characterized by a power-law distribution in neuron activation. This distribution shows that a small fraction of hot neurons consistently activate across different inputs, whereas the majority, the cold neurons, activate only for specific inputs.
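To make the power-law intuition concrete, below is a minimal Python/NumPy sketch of how such offline profiling might look; it is not PowerInfer’s actual code, and names like profile_hot_neurons and hot_fraction are hypothetical. It counts how often each neuron fires on a calibration set and marks the most frequently firing fraction as hot.

import numpy as np

def profile_hot_neurons(activations: np.ndarray, hot_fraction: float = 0.2):
    """activations: (num_tokens, num_neurons) post-ReLU outputs collected
    from a calibration corpus. Returns (hot, cold) neuron index arrays."""
    freq = (activations > 0).mean(axis=0)   # firing frequency per neuron
    order = np.argsort(freq)[::-1]          # most frequently active first
    num_hot = int(len(order) * hot_fraction)
    return order[:num_hot], order[num_hot:]

# Toy demonstration: under a power-law firing pattern, a small hot set
# accounts for the majority of all activations.
rng = np.random.default_rng(0)
probs = 1.0 / np.arange(1, 4097) ** 0.8     # power-law firing probabilities
acts = (rng.random((1024, 4096)) < probs).astype(np.float32)
hot, cold = profile_hot_neurons(acts)
print(f"{len(hot)} hot neurons cover {acts[:, hot].sum() / acts.sum():.0%} of activations")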

The team has shared that PowerInfer is a GPU-CPU hybrid inference engine that exploits this insight. It preloads hot-activated neurons onto the GPU for instant access and assigns cold-activated neurons to the CPU for computation. By distributing the workload this way, the GPU’s memory requirements are greatly reduced, and there are far fewer data transfers between the CPU and GPU.
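The following is a conceptual Python sketch of that split, with NumPy arrays standing in for the two memory pools (in the real engine the hot partition is resident in GPU memory and the cold partition in host memory); the class and variable names are hypothetical, and it reuses the hot/cold index arrays from the profiling sketch above.

import numpy as np

class HybridFFNLayer:
    """Toy stand-in for a GPU-CPU hybrid feed-forward layer."""
    def __init__(self, weight: np.ndarray, hot: np.ndarray, cold: np.ndarray):
        # weight: (num_neurons, hidden_dim); one row per neuron.
        self.hot_idx, self.cold_idx = hot, cold
        self.w_gpu = weight[hot]    # preloaded once, stays "on the GPU"
        self.w_cpu = weight[cold]   # remains "in host memory"

    def forward(self, x: np.ndarray, active: np.ndarray) -> np.ndarray:
        """x: (hidden_dim,) input; active: boolean mask over all neurons.
        Each partition computes only its own predicted-active rows."""
        out = np.zeros(len(self.hot_idx) + len(self.cold_idx))
        hot_on = active[self.hot_idx]       # active neurons held by the GPU
        cold_on = active[self.cold_idx]     # active neurons held by the CPU
        out[self.hot_idx[hot_on]] = self.w_gpu[hot_on] @ x
        out[self.cold_idx[cold_on]] = self.w_cpu[cold_on] @ x
        return np.maximum(out, 0.0)         # ReLU; inactive rows stay zero

The key property this models is that weight rows never move at inference time: only small activation vectors, not weight matrices, need to cross the PCIe bus per token.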

PowerInfer integrates neuron-aware sparse operators and adaptive predictors to optimize performance further. Neuron-aware sparse operators interact directly with individual neurons, eliminating the need to operate on entire matrices, while adaptive predictors forecast which neurons will be active at runtime. Together, these optimizations increase computational sparsity, so compute is spent only on neurons that actually activate.
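Below is a hedged illustration of the two pieces working together: a tiny low-rank predictor guesses which neurons will fire for the current input, and a row-wise sparse operator then computes only those rows. The weights here are random placeholders (in the actual system the per-layer predictors are trained offline), and all names are hypothetical.

import numpy as np

def predict_active(x, p1, p2, threshold=0.5):
    """Low-rank MLP predictor: hidden_dim -> rank -> num_neurons."""
    logits = np.maximum(x @ p1, 0.0) @ p2
    return 1.0 / (1.0 + np.exp(-logits)) > threshold   # boolean mask

def sparse_ffn(x, weight, active):
    """Neuron-aware operator: touch only the predicted-active rows."""
    out = np.zeros(weight.shape[0])
    out[active] = weight[active] @ x
    return np.maximum(out, 0.0)

hidden, neurons, rank = 512, 2048, 32
rng = np.random.default_rng(1)
x = rng.standard_normal(hidden)
w = rng.standard_normal((neurons, hidden))
p1 = rng.standard_normal((hidden, rank)) / np.sqrt(hidden)
p2 = rng.standard_normal((rank, neurons)) / np.sqrt(rank)

mask = predict_active(x, p1, p2)
y = sparse_ffn(x, w, mask)
print(f"computed {mask.sum()} of {neurons} neuron rows")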

The team evaluated PowerInfer’s performance and reported an average token generation rate of 13.20 tokens per second, with a peak of 29.08 tokens per second. These results were achieved using a single NVIDIA RTX 4090 GPU and a variety of LLMs, including the OPT-175B model. This performance falls only 18% short of the best-in-class server-grade A100 GPU, demonstrating PowerInfer’s effectiveness on mainstream hardware.

Upon evaluation, PowerInfer has also been shown to run up to 11.69 times faster than the existing llama.cpp system while retaining model fidelity. In conclusion, PowerInfer offers a significant boost in LLM inference speed, indicating its potential as a solution for running advanced language models on desktop PCs with constrained GPU capabilities.


Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.


Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical-thinking skills, along with an ardent interest in acquiring new skills, leading groups, and managing work in an organized manner.

 


