
Vendor » Cerebras Systems

Features

OpenXLA Delivers Flexibility for ML Apps

Machine learning developers gained new abilities to develop and run their ML programs on the framework and hardware of their choice, thanks to the OpenXLA Project, which today announced the availability of key open source Read more…

Cerebras Hits the Accelerator for Deep Learning Workloads

When it comes to large neural networks like BERT or GPT-3, organizations often must wait weeks or even months for a training task to complete if they’re using traditional CPU and GPU clusters. But with its massive Wafer-Scale Read more…

A Wave of Purpose-Built AI Hardware Is Building

Google last week unveiled the third version of its Tensor Processing Unit (TPU), which is designed to accelerate deep learning workloads developed in its TensorFlow environment. But that's just the start of a groundswell Read more…

This Just In

Cerebras and Core42 Launch Global Access to OpenAI’s gpt-oss-120B

Aug 28, 2025 |

ABU DHABI, United Arab Emirates and SUNNYVALE, Calif., Aug. 28, 2025 — Cerebras and Core42 have announced the global availability of OpenAI’s gpt-oss-120B. Through the Compass API, the Core42 AI Cloud delivers Cerebras Inference at 3,000 tokens per second to power enterprise-scale agentic AI. Read more…

Cerebras Launches Cerebras Inference Cloud Availability in AWS Marketplace

Jul 9, 2025 |

PARIS, July 9, 2025 — At the RAISE Summit in Paris, France, Cerebras Systems announced that Cerebras Inference Cloud is now available in AWS Marketplace, bringing Cerebras’ ultra-fast AI inference to enterprise customers and enabling the next era of high-performance, interactive, and intelligent agentic AI applications. Read more…

Cerebras Powers Notion AI for Work with Ultra-Fast Enterprise Search

Jul 9, 2025 |

PARIS and SUNNYVALE, Calif., July 9, 2025 — Cerebras Systems has announced that Notion, the all-in-one connected workspace, is using Cerebras’ industry-leading AI inference technology to power instant, enterprise-scale document search for its AI offering, Notion AI for Work. Read more…

Cerebras Partners with Hugging Face, DataRobot, Docker to Accelerate Agentic AI Development

Jul 8, 2025 |

PARIS, July 8, 2025 — Today at the RAISE Summit in Paris, France, Cerebras Systems announced new partnerships and integrations with Hugging Face, DataRobot, and Docker. These collaborations dramatically increase the accessibility and impact of Cerebras’ ultra-fast AI inference, enabling a new generation of performant, interactive, and intelligent agentic AI applications. Read more…

Cerebras Makes Frontier AI Accessible with Qwen3-235B at One-Tenth the Cost

Jul 8, 2025 |

PARIS, July 8, 2025 — Cerebras Systems today announced the launch of Qwen3-235B with full 131K context support on its inference cloud platform. Alibaba’s Qwen3-235B delivers model intelligence that rivals frontier models such as Claude 4 Sonnet, Gemini 2.5 Flash, and DeepSeek R1 across a range of science, coding, and general knowledge benchmarks, according to independent tests by Artificial Analysis. Read more…

Cerebras Partners with IBM to Accelerate Enterprise AI Adoption

May 9, 2025 |

May 9, 2025 — Editor’s Note: IBM and Cerebras Systems have announced a collaboration to integrate Cerebras’ AI computing hardware with IBM’s watsonx platform. The goal is to help enterprises run generative AI models at scale while meeting requirements for trust, efficiency, and integration in complex environments. Read more…

Meta Collaborates with Cerebras to Drive Fast Inference for Developers in New Llama API

May 2, 2025 |

SUNNYVALE, Calif., May 2, 2025 — Meta has teamed up with Cerebras to offer ultra-fast inference in its new Llama API, bringing together Llama, the world’s most popular family of open-source models, with the world’s fastest inference technology, delivered by Cerebras. Read more…

Cerebras Announces 6 New AI Datacenters Across North America and Europe

Mar 11, 2025 |

SUNNYVALE, Calif., March 11, 2025 — Cerebras Systems today announced the launch of six new AI inference datacenters powered by Cerebras Wafer-Scale Engines. These state-of-the-art facilities, equipped with thousands of Cerebras CS-3 systems, are expected to serve over 40 million Llama 70B tokens per second, making Cerebras the world’s #1 provider of high-speed inference and the largest domestic high-speed inference cloud. Read more…

Cerebras Partners with Hugging Face to Deliver High-Speed AI Inference

Mar 11, 2025 |

SUNNYVALE, Calif., March 11, 2025 — Cerebras and Hugging Face today announced a new partnership to bring Cerebras Inference to the Hugging Face platform. Hugging Face has integrated Cerebras into the Hugging Face Hub, bringing the world’s fastest inference to the platform’s more than five million developers. Read more…

Cerebras Powers Perplexity Sonar with Industry’s Fastest AI Inference

Feb 12, 2025 |

SUNNYVALE, Calif., Feb. 13, 2025 — Cerebras Systems has announced its pivotal role in powering Sonar, an advanced model optimized for Perplexity search. Read more…

BigDATAwire