
Groq
Features
Big Data Career Notes: November 2019 Edition
In this monthly feature, we’ll keep you up-to-date on the latest career developments for individuals in the big data community. Whether it’s a promotion, a new company hire, or even an accolade, we’ve got the details. Read more…
A Wave of Purpose-Built AI Hardware Is Building
Google last week unveiled the third version of its Tensor Processing Unit (TPU), which is designed to accelerate deep learning workloads developed in its TensorFlow environment. But that's just the start of a groundswell. Read more…
This Just In
MOUNTAIN VIEW, Calif., Aug. 5, 2024 — Groq, a leader in fast AI inference, has secured a $640 million Series D round at a valuation of $2.8 billion. Read more…
MOUNTAIN VIEW, Calif., July 24, 2024 — Groq, a leader in fast AI inference, launched Llama 3.1 models powered by its LPU AI inference technology. Read more…
MOUNTAIN VIEW, Calif., Feb. 13, 2024 — Groq, a generative AI solutions company, is the winner in the latest large language model (LLM) benchmark by ArtificialAnalysis.ai, besting eight top cloud providers in key performance indicators including Latency vs. Read more…
LAS VEGAS, Jan. 9, 2024 — The need for speed is paramount in consumer generative AI applications, and only the Groq LPU Inference Engine generates 300 tokens per second per user on open-source large language models (LLMs), like Llama 2 70B from Meta AI. Read more…
MOUNTAIN VIEW, Calif., Oct. 26, 2023 — Groq, an artificial intelligence (AI) solutions company, announced today that it will have a booth and multiple talks at SC23, the premier industry conference for high performance computing, from November 12-17 in Denver, CO. Read more…
MOUNTAIN VIEW, Calif., Aug. 31, 2023 — Groq, an artificial intelligence (AI) solutions provider, today announced it has more than doubled its inference performance on the large language model (LLM) Llama-2 70B in just three weeks and is now running at more than 240 tokens per second (T/s) per user on its LPU system. Read more…