

(vladwel/Shutterstock)
Data professionals with plans to build lakehouses atop the Apache Iceberg table format have two new Iceberg services to choose from, including one from Tabular, the company founded by Iceberg’s co-creator, and another from Dremio, the query engine developer that is holding its Subsurface 2023 conference this week.
Apache Iceberg has emerged as one of the core technologies for building a data lakehouse, in which the scalability and flexibility of data lakes are merged with the data governance, predictability, and proper SQL behavior associated with traditional data warehouses.
Originally created by engineers at Netflix and Apple to deal with data consistency issues in Hadoop clusters, among other problems, Iceberg is emerging as a de facto data storage standard for open data lakehouses that work with all analytics engines, including open source offerings like Trino, Presto, Dremio, Spark, and Flink, as well as commercial offerings from Snowflake, Starburst, Google Cloud, and AWS.
Ryan Blue, who co-created Iceberg while at Netflix, founded Tabular in 2021 to build a cloud storage service around the Iceberg core. Tabular has been in a private beta for a while now, but today the company announced that it is now open for business with its Iceberg service.
According to Blue, the new Tabular service basically works as a universal table store running in AWS. “It manages Iceberg tables in a customer’s S3 bucket and allows you to connect up any of the compute engines that you want to use with that data,” he says. “It comes with the catalog you need to track what tables and metadata are there, and it comes with integrated RBAC security and access controls.”
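Connecting a compute engine to a catalog of this kind is largely a matter of engine configuration. The sketch below shows the general shape of it using open source Iceberg's REST catalog support in PySpark; the catalog name, endpoint URI, and credential format are placeholders, not Tabular's actual connection details.

```python
# Minimal sketch: pointing a Spark session at an Iceberg REST-style catalog.
# Requires the Iceberg Spark runtime jar on the classpath; the URI and
# credential below are placeholders, not Tabular's actual values.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-catalog-demo")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lakehouse", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lakehouse.type", "rest")                        # Iceberg REST catalog
    .config("spark.sql.catalog.lakehouse.uri", "https://catalog.example.com")  # placeholder endpoint
    .config("spark.sql.catalog.lakehouse.credential", "<client-id>:<secret>")  # placeholder credential
    .getOrCreate()
)

# The data files stay in the customer's S3 bucket; the catalog tracks the
# table and snapshot metadata that engines need in order to query them.
spark.sql("SELECT * FROM lakehouse.analytics.events LIMIT 10").show()
```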
In addition to bulk and streaming data load options, Tabular provides automated management tasks for maintaining the lakehouse going forward, including compaction. According to Blue, Tabular’s compaction routines can shrink the size of customers’ Parquet files by up to 50%.
“Iceberg was the foundation for all of this and now we’re just building on top of that foundation,” says Blue, a Datanami 2022 Person to Watch. “It’s a matter of being able to detect that someone wrote 1,000 small files and clean them up for them if they’re using our compaction service, rather than relying on people, data engineers in particular, who are expected to not write a thousand small files into a table, or not write pipelines that are wasteful.”
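For a sense of what that maintenance looks like in open source Iceberg, the snippet below calls Iceberg's rewrite_data_files Spark procedure to coalesce small Parquet files into larger ones, roughly the operation a managed compaction service would schedule automatically. The catalog name, table name, and target file size are only illustrative; Tabular's own tuning is not described here.

```python
# Hedged sketch of manual compaction with open source Iceberg's Spark procedure.
# A managed service would run something like this on its own; the catalog name,
# table name, and target size below are placeholders.
spark.sql("""
    CALL lakehouse.system.rewrite_data_files(
        table => 'analytics.events',
        options => map('target-file-size-bytes', '536870912')
    )
""")
```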
Tabular built its own metastore, sometimes called a catalog, which is necessary for tracking the metadata used by the various underlying compute engines. Tabular’s metastore is based on a distributed database engine, and scales better than the Apache Hive metastore, Blue says. “We’re also targeting a lot better features than what’s provided by the Hive metastore or wire-compatible Hive metastores like Glue,” he says.
Tabular’s service will also protect against the ramifications of accidentally dropping a table from the lakehouse. “It’s really easy to be in the wrong database, to drop a table, and then realize, uh oh, I’m going to break a production pipeline with what I just did!” Blue says. “How do I quickly go and restore that? Well, there is no way in Hive metastore to quickly restore a table that you’ve dropped. What we’ve done is we’ve built a way to just keep track of dropped tables and clean them up… That way, you can go and undrop a table.”
Blue, who spoke today during Dremio’s Subsurface event and timed the launch of Tabular to the event, describes Tabular as the bottom half of a data warehouse. Users get to decide for themselves what analytical engine or engines they use to populate the upper half of the warehouse, or lakehouse.
“We’re purposefully going after the storage side of the data warehouse rather than the compute side, because there’s a lot of great compute engines out there. There’s Trino, Snowflake, Spark, Dremio, Cloudera’s suite of tools. There’s a lot of things that are good at various pieces of this. We want all of those to be able to interoperate with one central repository of tables that make up your analytical data sets. We don’t want to provide any one of those. And we actually think it’s important that we separate the compute from the storage at the vendor level.”
Users can get started with the Tabular service for free and can keep using it until they hit the 1TB limit. Blue says that should give testers enough time to familiarize themselves with the service, see how it works with their data, and “fall in love” with the product. “Up to 1TB we’re managing for free,” he says. “Once you get there we have base, professional, and enterprise plans.”
Tabular is available only on AWS today. For more information see www.tabular.io and Blue’s blog post from today.
Dremio Discusses Arctic
Meanwhile, Dremio is also embracing Iceberg as a core component of its data stack, and today during the first day of its Subsurface 2023 conference, it discussed a new Iceberg-based offering dubbed Dremio Arctic.
Arctic is a data storage offering from Dremio that’s built atop Iceberg and available on AWS. The offering brings its own metadata catalog that works with an array of analytic engines, including Dremio, Spark, and Presto, along with automated routines for cleaning up, or “vacuuming,” Iceberg tables.
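As a rough open source analogue of that vacuuming, Iceberg ships Spark procedures for expiring old snapshots and removing orphaned files; the sketch below shows them with placeholder catalog and table names, and a managed catalog would presumably schedule this kind of cleanup on the user's behalf.

```python
# Hedged sketch: table "vacuuming" with open source Iceberg maintenance
# procedures. Catalog/table names and the cutoff timestamp are placeholders.
spark.sql("""
    CALL lakehouse.system.expire_snapshots(
        table => 'analytics.events',
        older_than => TIMESTAMP '2023-02-01 00:00:00'
    )
""")
spark.sql("CALL lakehouse.system.remove_orphan_files(table => 'analytics.events')")
```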
Arctic also provides fine-grained access control and data governance, according to Tomer Shiran, Dremio’s founder and chief product officer.
“You can see exactly who changed what, in what table and when, down to the level of what SQL command has changed this table in the last week,” Shiran says, “or was there a Spark job and what is the ID that changed the data. And you can see all the history of every single table in the system.”
Arctic also enables another feature that Dremio calls “data as code.” Just as Git is used to manage source code for computer programs and enable users to easily roll back to previous versions, Iceberg (via Arctic) can enable data professionals to work more easily with data.
Shiran says he’s very excited about the potential for data as code within Arctic. He says there are a variety of obvious use cases for treating data as code, including ensuring the quality of ETL pipelines by using “branching;” enabling experimentation by data scientists and analysts; delivering reproducibility for data science models; recovering from mistakes; and troubleshooting.
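Arctic builds on Dremio's open source Project Nessie catalog, and Nessie's Spark SQL extensions give a feel for the branch-and-merge workflow Shiran describes. The sketch below uses that open source syntax with placeholder branch, table, and source names; Arctic's own interface may differ.

```python
# Sketch of a "data as code" ETL workflow using Nessie's Spark SQL extensions
# (assumes a Nessie catalog and its SQL extensions are configured in Spark).
# Branch, table, and source names are placeholders.

# Work on an isolated branch; production readers keep seeing 'main'.
spark.sql("CREATE BRANCH IF NOT EXISTS etl_run IN nessie FROM main")
spark.sql("USE REFERENCE etl_run IN nessie")

# Load new data on the branch and run a quality check before exposing it.
spark.sql("INSERT INTO nessie.analytics.events SELECT * FROM staging_events")
bad_rows = spark.sql(
    "SELECT count(*) FROM nessie.analytics.events WHERE event_id IS NULL"
).first()[0]

# Only merge into 'main' (an atomic, Git-like operation) if the check passes.
if bad_rows == 0:
    spark.sql("MERGE BRANCH etl_run INTO main IN nessie")
```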
“At Dremio, in terms of our product and technology, we’ve worked very hard to make Apache Iceberg easy,” Shiran says. “You don’t really need to understand any of the technology.”
Subsurface 2023 continues on Thursday, March 2. Registration is free at www.dremio.com/subsurface/live/winter2023.
Related Items:
Open Table Formats Square Off in Lakehouse Data Smackdown
Snowflake, AWS Warm Up to Apache Iceberg
Apache Iceberg: The Hub of an Emerging Data Service Ecosystem?