

Kinetica got its start building a GPU-powered database to serve fast SQL queries and visualizations for US government and military clients. But with a pair of announcements at Nvidia’s GTC show last week, the company is showing it’s prepared for the coming wave of generative AI applications, particularly those utilizing retrieval augmented generation (RAG) techniques to tap unique data sources.
Companies today are hunting for ways to leverage the power of large language models (LLMs) with their own proprietary data. Some companies are sending their data to OpenAI’s cloud or other cloud-based AI providers, while others are building their own LLMs.
However, many more companies are adopting the RAG approach, which has surfaced as perhaps the best middle ground: it doesn't require building your own model (time-consuming and expensive) or sending your data to the cloud (problematic for privacy and security).
With RAG, relevant data is injected directly into the context window before the prompt is sent to the LLM, lending more personalization and context to the LLM's response. Along with prompt engineering, RAG has emerged as a low-risk and fruitful method for juicing GenAI returns.
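To make the mechanics concrete, here is a minimal, illustrative RAG sketch in Python. The embed() and build_prompt() functions are toy stand-ins, not Kinetica's API or any real embedding model; the sketch only shows the retrieve-then-inject pattern the article describes.

```python
# A minimal RAG sketch (illustrative only): embed documents, retrieve the
# ones closest to the question, and inject them into the prompt's context
# window before calling an LLM. embed() is a toy hashing embedding, not a
# trained model, and nothing here is Kinetica's API.

import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic embedding: hash tokens into a fixed-size vector."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the question, keep the top k."""
    q = embed(question)
    return sorted(docs, key=lambda d: -float(embed(d) @ q))[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Inject retrieved text into the context window ahead of the question."""
    context = "\n".join(retrieve(question, docs))
    return f"Use the context to answer.\nContext:\n{context}\n\nQuestion: {question}"

docs = [
    "Kinetica is a GPU-accelerated database for real-time SQL analytics.",
    "RAPIDS RAFT provides GPU primitives for vector search.",
    "The cafeteria closes at 3pm on Fridays.",
]
print(build_prompt("What does Kinetica use the GPU for?", docs))
# The resulting prompt would then be sent to the LLM of your choice.
```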

The VRAM boost in Nvidia's Blackwell GPU will help Kinetica keep the processor fed with data, Negahban said.
Kinetica is also now getting into the RAG game with its database by essentially turning it into a vector database that can store and serve vector embeddings to LLMs, as well as by performing vector similarity search to optimize the data it sends to the LLM.
According to its announcement last week, Kinetica can serve vector embeddings 5x faster than other databases, a figure it attributes to the VectorDBBench benchmark. The company says it achieves that speed by leveraging Nvidia's RAPIDS RAFT technology.
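For context, a vector similarity search boils down to a nearest-neighbor query over stored embeddings. The brute-force NumPy sketch below shows the math being computed; it is illustrative only and says nothing about Kinetica's engine, which offloads this kind of search to the GPU via RAPIDS RAFT.

```python
# Conceptual sketch of what a vector similarity search computes: given a
# query embedding, find the k stored embeddings with the highest cosine
# similarity. This CPU brute-force version illustrates the math only;
# it is not Kinetica's GPU implementation.

import numpy as np

rng = np.random.default_rng(0)
stored = rng.standard_normal((10_000, 384))           # 10k embeddings, dim 384
stored /= np.linalg.norm(stored, axis=1, keepdims=True)

def top_k(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k most similar stored vectors."""
    query = query / np.linalg.norm(query)
    sims = stored @ query                             # cosine similarities
    return np.argsort(-sims)[:k]

query = rng.standard_normal(384)
print(top_k(query))
```

Brute force costs O(n·d) per query; GPU-accelerated approximate nearest neighbor indexes such as RAFT's IVF-based variants avoid scanning every vector, which is where the claimed benchmark gains come from.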
That GPU-based speed advantage will help Kinetica customers by enabling them to scan more of their data, including real-time data that has just been added to the database, without doing a lot of extra work, said Nima Negahban, co-founder and CEO of Kinetica.
“It’s hard for an LLM or a traditional RAG stack to be able to answer a question about something that’s happening right now, unless they’ve done a lot of pre-planning for specific data types,” Negahban told Datanami at the GTC conference last week, “whereas with Kinetica, we’ll be able to help you by looking at all the relational data, generate the SQL on the fly, and ultimately what we put just back in the context for the LLM is a simple text payload that the LLM will be able to understand to use to give the answer to the question.”
This essentially gives users the capability to talk to their complete corpus of relational enterprise data, without doing any preplanning.
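A hedged sketch of that flow, with sqlite3 standing in for the relational store and a canned generate_sql() standing in for the LLM's on-the-fly text-to-SQL step (both hypothetical, not Kinetica's API):

```python
# Illustrative text-to-SQL RAG flow: an LLM turns a question into SQL on
# the fly, the database executes it, and the rows are serialized into a
# plain-text payload that goes back into the LLM's context window.
# generate_sql() is a hypothetical LLM stand-in; sqlite3 stands in for
# the relational store.

import sqlite3

def generate_sql(prompt: str) -> str:
    """Placeholder for an LLM call; returns canned SQL for this demo."""
    return "SELECT sensor_id, AVG(reading) FROM telemetry GROUP BY sensor_id"

def build_context(question: str, conn: sqlite3.Connection) -> str:
    """Generate SQL for the question, run it, and return a text payload."""
    schema = "telemetry(sensor_id TEXT, reading REAL, ts TEXT)"
    sql = generate_sql(f"Schema: {schema}\nWrite SQL answering: {question}")
    rows = conn.execute(sql).fetchall()
    payload = "\n".join(map(str, rows))               # simple text payload
    return f"Context:\n{payload}\n\nQuestion: {question}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE telemetry (sensor_id TEXT, reading REAL, ts TEXT)")
conn.executemany("INSERT INTO telemetry VALUES (?, ?, ?)",
                 [("a", 1.0, "t0"), ("a", 3.0, "t1"), ("b", 2.0, "t0")])
print(build_context("What is each sensor's average reading?", conn))
# The returned prompt is what would be handed to the LLM for the final answer.
```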
“That’s the big advantage,” he continued, “because the traditional RAG pipelines right now, that part of it still requires a good amount of work as far as you have to have the right embedding model, you have to test it, you have to make sure it’s working for your use case.”
Kinetica can also talk to other databases and function as a generative federated query engine, as well as perform the traditional vectorization of data that customers put inside of Kinetica, Negahban said. The database is designed for operational data, such as time-series, telemetry, or telco data. Thanks to support for NVIDIA NeMo Retriever microservices, the company is able to position that data in a RAG workflow.
But for Kinetica, it all comes back to the GPU. Without the GPU's extreme computational power, the company would have just another RAG offering.
“Basically you need that GPU-accelerated engine to make it all work at the end of the day, because it’s got to have the speed,” said Negahban, a 2018 Datanami Person to Watch. “And we then put all that orchestration on top of it as far as being able to have the metadata necessary, being able to connect to other databases, having all that to make it easy for the end user, so basically they can start taking advantage of all that relational enterprise data in their LLM interaction.”
Related Items:
Bank Replaces Hundreds of Spark Streaming Nodes with Kinetica
Kinetica Aims to Broaden Appeal of GPU Computing
Preventing the Next 9/11 Goal of NORAD’s New Streaming Data Warehouse