
Intel and Weizmann Institute Speed AI with Speculative Decoding Advance
July 16, 2025 — At the International Conference on Machine Learning (ICML) in Vancouver, Canada, researchers from Intel Labs and the Weizmann Institute of Science introduced a major advance in speculative decoding. The new technique enables any small “draft” model to accelerate any large language model (LLM), regardless of vocabulary differences.
“We have solved a core inefficiency in generative AI,” said Oren Pereg, senior researcher, Natural Language Processing Group, Intel Labs. “Our research shows how to turn speculative acceleration into a universal tool. This isn’t just a theoretical improvement; these are practical tools that are already helping developers build faster and smarter applications today.”
Speculative decoding is an inference optimization technique designed to make LLMs faster and more efficient without compromising accuracy. It works by pairing a small, fast model with a larger, more accurate one, creating a “team effort” between models.
A traditional LLM generates each word step by step: it fully computes “Paris”, then “a”, then “famous”, then “city” and so on, consuming significant resources at every step. With speculative decoding, the small assistant model quickly drafts the full phrase “Paris, a famous city…”, and the large model then verifies the drafted tokens together rather than generating them one at a time. This dramatically reduces the compute cycles per output token.
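For intuition, the sketch below shows the draft-then-verify loop in simplified form. It is not the paper’s algorithm: the draft_next_token and target_next_token functions are toy stand-ins for real models, and verification is shown token by token where a real target model would score all drafted tokens in one forward pass.

```python
# Minimal, illustrative sketch of greedy speculative decoding.
# draft_next_token and target_next_token are hypothetical toy predictors,
# not the models or algorithms described in the research paper.

def draft_next_token(tokens):
    # Toy draft model: proposes tokens from a canned continuation.
    canned = ["Paris", ",", "a", "famous", "city", "in", "France", "."]
    return canned[len(tokens) % len(canned)]

def target_next_token(tokens):
    # Toy target model: agrees with the draft most of the time.
    canned = ["Paris", ",", "a", "famous", "city", "near", "the", "Seine"]
    return canned[len(tokens) % len(canned)]

def speculative_step(tokens, k=4):
    """Draft k tokens cheaply, then verify them against the target model.

    The accepted tokens are the longest prefix on which both models agree;
    the first disagreement is replaced by the target model's own token, so
    the final output matches what the target model alone would have produced.
    """
    draft = []
    for _ in range(k):
        draft.append(draft_next_token(tokens + draft))

    accepted = []
    for tok in draft:
        expected = target_next_token(tokens + accepted)
        if tok == expected:
            accepted.append(tok)       # draft guessed correctly: keep it
        else:
            accepted.append(expected)  # disagreement: take the target's token
            break
    return tokens + accepted

if __name__ == "__main__":
    tokens = []
    for _ in range(3):
        tokens = speculative_step(tokens)
    print(" ".join(tokens))
```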
This universal method from Intel and the Weizmann Institute removes the requirement for a shared vocabulary or co-trained model families, making speculative decoding practical across heterogeneous models. It delivers performance gains of up to 2.8x faster inference without loss of output quality.1 It also works across models from different developers and ecosystems, making it vendor-agnostic, and it is available as open source through integration with the Hugging Face Transformers library.
In a fragmented AI landscape, this speculative decoding breakthrough promotes openness, interoperability and cost-effective deployment from cloud to edge. Developers, enterprises and researchers can now mix and match models to suit their performance needs and hardware constraints.
“This work removes a major technical barrier to making generative AI faster and cheaper,” said Nadav Timor, Ph.D. student in the research group of Prof. David Harel at the Weizmann Institute. “Our algorithms unlock state-of-the-art speedups that were previously available only to organizations that train their own small draft models.”
The research paper introduces three new algorithms that decouple speculative decoding from vocabulary alignment. This opens the door to flexible LLM deployment, letting developers pair any small draft model with any large model to optimize inference speed and cost across platforms.
The research isn’t just theoretical. The algorithms are already integrated into the Hugging Face Transformers open source library used by millions of developers. With this integration, advanced LLM acceleration is available out of the box with no need for custom code.
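As a rough illustration, cross-vocabulary assisted generation can be invoked in recent versions of Transformers along the following lines. The model names here are placeholders chosen only to show two different vocabularies, and the exact keyword arguments (in particular tokenizer and assistant_tokenizer) depend on the library version installed; consult the Transformers documentation for the version you are using.

```python
# Hedged usage sketch: assisted ("speculative") generation with a draft model
# whose vocabulary differs from the target's, via Hugging Face Transformers.
# Model names are placeholders; substitute any checkpoints you have access to.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_name = "Qwen/Qwen2.5-1.5B-Instruct"  # larger target model (example)
draft_name = "gpt2"                          # small draft model with a different vocabulary (example)

target_tok = AutoTokenizer.from_pretrained(target_name)
draft_tok = AutoTokenizer.from_pretrained(draft_name)
target = AutoModelForCausalLM.from_pretrained(target_name)
draft = AutoModelForCausalLM.from_pretrained(draft_name)

inputs = target_tok("Paris is a famous city", return_tensors="pt")

# Passing both tokenizers lets generate() translate between the two vocabularies
# while the draft model proposes tokens and the target model verifies them.
outputs = target.generate(
    **inputs,
    assistant_model=draft,
    tokenizer=target_tok,
    assistant_tokenizer=draft_tok,
    max_new_tokens=64,
)
print(target_tok.decode(outputs[0], skip_special_tokens=True))
```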
More Context: Accelerating LLM Inference with Lossless Speculative Decoding Algorithms for Heterogeneous Vocabularies (Intel Labs and the Weizmann Institute of Science Research Paper)
Source: Intel