
WEKA Launches NeuralMesh to Serve Needs of Emerging AI Workloads

WEKA today pulled the cover off its latest product, NeuralMesh, a re-imagining of its distributed file system that's designed to handle the expanding storage and serving needs, as well as the tighter latency and resiliency requirements, of today's enterprise AI deployments.
WEKA described NeuralMesh as “a fully containerized, mesh-based architecture that seamlessly connects data, storage, compute, and AI services.” It’s designed to support the data needs of large-scale AI deployments, such as AI factories and token warehouses, particularly for emerging AI agent workloads that utilize the latest reasoning techniques, the company said.
These agentic workloads have different requirements than traditional AI systems, including faster response times and an overall workflow driven by service demands rather than by data. Without changes like those WEKA has built into NeuralMesh, the company argues, traditional data architectures will burden organizations with slow and inefficient agentic AI workflows.
“This new generation of AI workload is completely different than anything we’ve seen before,” Liran Zvibel, cofounder and CEO at WEKA, said in a video posted to his company’s website. “Traditional high performance storage systems are reaching the breaking point. What used to work great in legacy HPC now creates bottlenecks. Expensive GPUs are sitting idle waiting for data or needlessly computing the same tokens over and over again.”
With NeuralMesh, WEKA is developing a new data infrastructure layer that’s service-oriented, modular, and composable, Zvibel said. “Think of it as a software-defined fabric that interconnects data, compute, and AI services across any environment with extreme precision and efficiency.”
From an architectural point of view, NeuralMesh has five components. They include Core, which provides the foundational software-defined storage environment; Accelerate, which creates direct paths between data and applications and distributes metadata across the cluster; Deploy, which ensures the system can run anywhere, from virtual machines and bare metal to clouds and on-prem systems; Observe, which provides manageability and monitoring of the system; and Enterprise Services, which provides security, access control, and data protection.
According to WEKA, NeuralMesh adopts computer clustering and data mesh concepts. It utilizes multiple parallelized paths between applications and data, and distributes data and metadata “intelligently,” the company said. It works with clusters running CPUs, GPUs, and TPUs, running on prem, in the cloud, or anywhere in between.
Data access times on NeuralMesh are measured in microseconds rather than milliseconds, the company claimed. The new offering “dynamically adapts to the variable needs of AI workflows” through the use of microservices that handle various functions, such as data access, metadata, auditing, observability, and protocol communication. These microservices run independently and are coordinated through APIs.
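The pattern described above, independent services coordinated behind APIs, can be sketched in miniature. This is purely illustrative; the service names and interfaces below are hypothetical and are not WEKA's actual APIs:

```python
# Hypothetical sketch of a service-oriented data layer: independent
# microservices (metadata, auditing, etc.) register behind a small
# routing layer and are invoked by name. Illustrative only -- these
# names and signatures are not WEKA's actual APIs.

class ServiceMesh:
    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        """Add an independently running service under a well-known name."""
        self._services[name] = handler

    def call(self, name, *args, **kwargs):
        # The mesh only routes requests; each service handles its own work.
        return self._services[name](*args, **kwargs)

mesh = ServiceMesh()
mesh.register("metadata", lambda path: {"path": path, "size_bytes": 0})
mesh.register("audit", lambda event: f"logged:{event}")

print(mesh.call("metadata", "/data/model.bin"))
print(mesh.call("audit", "read:/data/model.bin"))
```

The point of the pattern is that each service can scale, fail, and be upgraded independently, with the API layer as the only coupling between them.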
WEKA claimed NeuralMesh actually gets faster and more resilient as data and AI workloads increase. It achieves this feat in part through the data striping routines it uses to protect data: as the number of nodes in a NeuralMesh cluster grows, data is striped across more nodes, reducing the odds of data loss. As far as scalability goes, NeuralMesh can scale upward from petabytes to exabytes of storage.
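WEKA doesn't publish the details of its striping scheme here, but the general principle behind "more nodes, more resilience" can be illustrated with a toy model of declustered rebuild, where every surviving node shares the work of reconstructing a failed node's data. The numbers and the model are assumptions for illustration only:

```python
# Illustrative sketch (not WEKA's actual algorithm): with widely striped,
# declustered data placement, all surviving nodes participate in rebuilding
# a failed node, so the rebuild window shrinks as the cluster grows --
# and a shorter window means less exposure to a second failure.

def rebuild_hours(node_capacity_tb: float, nodes: int,
                  per_node_rebuild_gb_s: float = 1.0) -> float:
    """Rough rebuild time for one failed node, assuming the remaining
    nodes share the reconstruction work evenly (hypothetical model)."""
    surviving = nodes - 1
    aggregate_gb_s = surviving * per_node_rebuild_gb_s
    seconds = node_capacity_tb * 1000 / aggregate_gb_s  # TB -> GB
    return seconds / 3600

for n in (8, 64, 512):
    print(f"{n:>4} nodes: {rebuild_hours(100, n):.2f} h to rebuild 100 TB")
```

Under this model, an 8-node cluster takes roughly 60 times longer to recover a failed 100 TB node than a 512-node cluster does, which is one way a system can become more resilient as it scales.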
“Nearly every layer of the modern data center has embraced a service-oriented architecture,” WEKA’s Chief Product Officer Ajay Singh wrote in a blog. “Compute is delivered through containers and serverless functions. Networking is managed by software-defined platforms and service meshes. Observability, identity, security, and even AI inference pipelines run as modular, scalable services. Databases and caching layers are offered as fully managed, distributed systems. This is the architecture the rest of your stack already uses. It’s time for your storage to catch up.”
Related Items:
WEKA Keeps GPUs Fed with Speedy New Appliances
Legacy Data Architectures Holding GenAI Back, WEKA Report Finds
How to Capitalize on Software Defined Storage, Securely and Compliantly