
Selecting a Data Lake ETL Platform? Here Are 6 Questions to Ask

Not all data lakes are created equal. If your organization wants to adopt a data lake to simplify its IT infrastructure and store enormous quantities of data without extensive upfront transformation, then go for it.
But before you do, understand that simply dumping all your data into object storage such as AWS S3 doesn’t exactly mean you will have a working data lake.
Using that data for analytics or machine learning requires converting the raw information into organized datasets that can serve SQL queries, and that conversion is done via extract-transform-load (ETL) flows.
Data lake ETL platforms are available in a full range of options – from open-source to managed solutions to custom-built. Whichever tool you select, it’s important to differentiate data lake ETL challenges from traditional database ELT demands – and seek the platform that overcomes these obstacles.
Ask yourself which ETL solution:
1. Effectively Conducts Stateful Transformations
Traditional ETL frameworks allow for stateful operations such as joins and aggregations, enabling analysts to work with data from multiple sources; such operations are difficult to implement in a decoupled architecture.
Traditionally, stateful transformations rely on extract-load-transform (ELT): data is sent to an “intermediary” database, which supplies SQL, processing power, and already amassed historical data. After transformation, the results are loaded into the data warehouse tables.
Data lakes, which reduce cost and complexity by decoupling storage from compute, cannot depend on a database for every operation. You’ll need to look for an ETL tool that conducts stateful transformations in-memory and requires no additional database to sustain joins and aggregations.
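For illustration, here is a minimal Python sketch of the idea: the aggregation state lives in memory rather than in an intermediary database. The event fields and the hard-coded stream are hypothetical stand-ins for whatever source an ETL tool would actually read.

```python
from collections import defaultdict

# In-memory state: a running total per key, built up as events stream in.
# No intermediary database is involved; the state itself is the "table."
state = defaultdict(float)

def process(event):
    """Stateful aggregation: update the running total for this event's key."""
    state[event["user_id"]] += event["amount"]

# Hypothetical event stream standing in for a real source (Kafka, S3, etc.).
events = [
    {"user_id": "u1", "amount": 10.0},
    {"user_id": "u2", "amount": 4.5},
    {"user_id": "u1", "amount": 20.0},
]
for e in events:
    process(e)

print(dict(state))  # {'u1': 30.0, 'u2': 4.5}
```

A production tool adds checkpointing and windowing on top of this pattern, but the core point stands: the join/aggregation state is held by the ETL engine, not a database.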
2. Extracts Schema from Raw Data
Organizations customarily use data lakes as a repository for raw data in structured or semi-structured form, whereas databases are predicated on structured tables. This poses several challenges.
First, in order to query the data, can the data lake ETL tool extract a schema from the raw data (without which querying is impossible) and keep it up to date as the data and its structure change? Second, and this is an ongoing struggle, can the ETL tool effectively query nested data?
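As a rough illustration of what schema extraction involves, the sketch below walks a raw JSON record, including nested objects and arrays, and maps each field path to a type. The record is hypothetical, and a real tool would also merge schemas across many records and track changes over time.

```python
import json

def infer_schema(value, path="$"):
    """Recursively map a raw JSON value to field paths and types."""
    if isinstance(value, dict):
        schema = {}
        for key, child in value.items():
            schema.update(infer_schema(child, f"{path}.{key}"))
        return schema
    if isinstance(value, list):
        # Sample the first element to type the array (a simplification).
        if value:
            return infer_schema(value[0], f"{path}[]")
        return {f"{path}[]": "unknown"}
    return {path: type(value).__name__}

record = json.loads('{"user": {"id": 7, "tags": ["a", "b"]}, "amount": 9.99}')
print(infer_schema(record))
# {'$.user.id': 'int', '$.user.tags[]': 'str', '$.amount': 'float'}
```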
3. Improves Query Performance Via Optimized Object Storage
Have you tried to read raw data straight from a data lake? Unlike a database, whose optimized storage returns query results quickly, running the same operation against a data lake can be quite frustrating performance-wise.
To get optimal results, your ETL framework should continually store data in columnar formats and merge small files into the 200 MB-1 GB range. Unlike traditional ELT tools, which only need to write the data once to a target database, data lake ETL should be able to write multiple copies of the same data, each organized for the queries you plan to run and the optimizations your query engines need to perform well.
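A minimal sketch of the compaction step, using the pyarrow library: many small Parquet files are merged into one larger columnar file. The paths are hypothetical, and a real compactor would batch inputs so each output file lands in the 200 MB-1 GB range and would handle schema differences between files.

```python
import glob

import pyarrow as pa
import pyarrow.parquet as pq

# Collect the small files written for one partition (hypothetical local
# mirror of the object store; schemas are assumed identical across files).
small_files = sorted(glob.glob("lake/events/dt=2024-01-01/*.parquet"))

# Read and merge them into a single in-memory columnar table.
tables = [pq.read_table(f) for f in small_files]
merged = pa.concat_tables(tables)

# Write one larger, compressed columnar file for the query engine to scan.
pq.write_table(merged, "lake/compacted/events-2024-01-01.parquet",
               compression="snappy")
```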
4. Easily Integrates with the Metadata Catalog
You’ve chosen the data lake approach for its flexibility (store large quantities of data now, analyze it later) and its ability to handle a wide range of analytics use cases. Such an open architecture keeps metadata separate from the engine that queries it, so you can easily swap query engines or use several simultaneously on the same data.
The data lake ETL tool you select should reinforce this open architecture by integrating seamlessly with the metadata catalog. The metadata then remains easily queryable by various services, because it is stored in the catalog and kept in sync with every change in schema, partitions, and object locations.
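As one concrete example of catalog integration, the sketch below registers a Parquet table in the AWS Glue Data Catalog via boto3, making it discoverable by engines such as Athena, Spark, or Presto. The database, table, column, and S3 location names are hypothetical; an ETL tool would also keep partitions and schema changes in sync automatically.

```python
import boto3

glue = boto3.client("glue")

# Register the compacted Parquet data as an external table in the catalog.
# All names and the S3 location below are hypothetical placeholders.
glue.create_table(
    DatabaseName="lake_db",
    TableInput={
        "Name": "events",
        "TableType": "EXTERNAL_TABLE",
        "PartitionKeys": [{"Name": "dt", "Type": "string"}],
        "StorageDescriptor": {
            "Columns": [
                {"Name": "user_id", "Type": "string"},
                {"Name": "amount", "Type": "double"},
            ],
            "Location": "s3://my-lake/compacted/events/",
            "InputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat",
            "SerdeInfo": {
                "SerializationLibrary": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
            },
        },
    },
)
```

Because the table definition lives in the catalog rather than inside any one engine, every engine that reads the catalog sees the same schema and partitions.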
5. Replays Historical Data
Say you want to test a hypothesis against historical data. This is difficult with a traditional database, where data is stored in a mutable state; running such a query can be prohibitively expensive and creates tension between operational and analytical workloads.
It’s easy to do with a data lake. Stored raw data remains continuously available and is only transformed after extraction. A data lake therefore lets you “travel back in time,” seeing the exact state of the data as it was collected.
“Traditional” databases don’t allow for that, as the data is only stored in its transformed state.
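A minimal sketch of replay over immutable raw data, assuming a hypothetical raw/<YYYY-MM-DD>/*.json directory layout: the same date range can be re-read at any time with new transformation logic, because the raw files are never mutated.

```python
import json
from pathlib import Path

def replay(raw_dir, start_date, end_date, transform):
    """Re-run a transformation over immutable raw files for a date range.

    Assumes one directory per day named YYYY-MM-DD, each holding
    newline-delimited JSON files (a hypothetical layout).
    """
    for day_dir in sorted(Path(raw_dir).iterdir()):
        if start_date <= day_dir.name <= end_date:
            for f in day_dir.glob("*.json"):
                for line in f.read_text().splitlines():
                    yield transform(json.loads(line))

# Test a new hypothesis against January exactly as it was collected:
# results = list(replay("raw/events", "2024-01-01", "2024-01-31",
#                       lambda e: (e["user_id"], e["amount"] > 100)))
```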
6. Updates Tables Periodically
Unlike databases, which let you update and delete rows in tables, data lakes store data as partitioned files in an append-only fashion. If you want to store transactional data, implement change data capture (CDC) in the data lake, or delete particular records for GDPR compliance, you’ll have difficulty doing so.
Make sure the data lake ETL tool you choose can sidestep this obstacle. Your solution should allow upserts (inserting new records or updating existing ones) both in the storage layer and in the output tables.
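One common way to provide upserts over append-only files is copy-on-write: rewrite the affected files, keeping the newest record per key. A minimal Python sketch of that merge step, with hypothetical field names:

```python
def upsert(existing, updates, key="user_id"):
    """Merge update records into existing rows, newest record per key wins."""
    merged = {row[key]: row for row in existing}   # current table state
    for row in updates:
        merged[row[key]] = row                     # insert or overwrite
    return list(merged.values())                   # rows for the rewritten file

table = [{"user_id": "u1", "plan": "free"}, {"user_id": "u2", "plan": "pro"}]
changes = [{"user_id": "u1", "plan": "pro"}, {"user_id": "u3", "plan": "free"}]
print(upsert(table, changes))
# [{'user_id': 'u1', 'plan': 'pro'}, {'user_id': 'u2', 'plan': 'pro'},
#  {'user_id': 'u3', 'plan': 'free'}]
```

Deletes (for example, for GDPR) work the same way: rewrite the file with the targeted keys omitted.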
About the author: Ori Rafael is the CEO and co-founder of Upsolver, a provider of a self-service data lake ETL platform that bridges the gap between data lakes and data consumers. Ori has worked in IT for nearly two decades and has an MBA from Tel Aviv University.
Related Items:
Merging Batch and Stream Processing in a Post Lambda World