

Zaloni today rolled out Data Lake in a Box, a soup-to-nuts offering for getting a fully governed Hadoop cluster up and running in eight weeks or less. The offering includes Hadoop software, data management middleware, and implementation services. “Everything but the hardware,” Zaloni’s VP of marketing says.
While Hadoop clusters are powerful data storage and processing machines, they’re not easy to implement or manage. There are many configuration settings that require skill and experience to get right. And once the cluster is configured, getting data ingested in a way that it can actually be worked with is not a trivial matter.
It’s not uncommon to hear about six-month Hadoop deployments. In these situations, much of the time is spent building and implementing data management processes that ensure the data is governed, discoverable, and accessible to the end users who will (eventually) be allowed access into the cluster, or at least a part of it.
Zaloni is hoping to shortcut these extended deployments by bringing together all the software and services necessary to get a general-purpose and governed Hadoop cluster up and running in about two months.
“We’re helping companies get fully hydrated in under eight weeks,” says Zaloni vice president Kelly Schupp. “We’re reducing the time and effort it takes by up to 75%, and at the same time we’re providing the kind of visibility and governance support they’re going to need, because, as that data is getting ingested, it’s being tagged and cataloged.”
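Zaloni hasn’t published the mechanics of how Ingestion Factory does this, but the pattern Schupp describes, landing raw data while tagging and cataloging it in the same pass, can be sketched in a few lines. The snippet below is an illustration only, not Zaloni’s API: the HDFS client, the catalog endpoint, the hostnames, and the helper function are all assumptions.

```python
# Illustrative sketch only -- not Zaloni's Ingestion Factory API.
# Assumes the open-source `hdfs` package (HdfsCLI), the `requests` package,
# and a hypothetical metadata-catalog REST endpoint.
from datetime import datetime, timezone

import requests
from hdfs import InsecureClient

hdfs_client = InsecureClient("http://namenode.example.com:50070", user="etl")
CATALOG_URL = "http://catalog.example.com/api/datasets"  # hypothetical endpoint


def ingest_with_metadata(local_path, hdfs_dir, tags):
    """Land a file in HDFS and register it in a catalog in the same pass."""
    filename = local_path.rsplit("/", 1)[-1]
    hdfs_path = f"{hdfs_dir.rstrip('/')}/{filename}"

    # Step 1: move the raw file into the landing zone.
    hdfs_client.upload(hdfs_path, local_path, overwrite=True)

    # Step 2: record where the data landed, when, and how it is tagged,
    # so it stays discoverable instead of sinking into a "data swamp".
    requests.post(CATALOG_URL, json={
        "location": hdfs_path,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "tags": tags,
    }, timeout=10)


ingest_with_metadata("/staging/orders.csv", "/landing/sales", ["sales", "raw", "daily"])
```

The point of pairing the two steps is that every file entering the lake arrives with enough metadata to be found and governed later, which is the visibility Schupp refers to.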
Data Lake in a Box combines Bedrock, Zaloni’s data lake management offering, and Mica, its self-service user access offering, with its Ingestion Factory software and the user’s choice of Hadoop distribution, including plain vanilla Apache Hadoop or, for an extra fee, the Hadoop distributions from Cloudera or MapR.
It’s all about quickly creating a fully governed Hadoop cluster that will serve the needs of the business for many years, says Tony Fisher, Zaloni’s senior VP of strategy and business development.
While eight weeks is a big improvement over six months, it’s still not as quick as some offerings that promise to deliver ready-to-use Hadoop clusters in a matter of days. The key difference there is quality, Fisher says.
“There’s a big difference between creating a data lake and a data swamp,” Fisher says. “You can ingest anything into a data lake in three days. But the fact of the matter is it doesn’t have the data quality, the rigor, or the types of things you’re going to need to do productive analytics on it.”
The offering doesn’t include analytics; it’s up to the user to bring those. That’s fine, because most customers these days are developing their own analytics in Python or R using data science notebooks, or connecting BI tools such as Excel, Tableau, or Qlik to visualize and manipulate the data.
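As an illustration of that notebook-centric workflow, the sketch below shows how an analyst might pull a curated table out of the lake with Python. HiveServer2 access, the PyHive and pandas packages, and the hostnames, credentials, and table names are all assumptions, not part of Zaloni’s offering.

```python
# Illustrative sketch of the notebook-side workflow: an analyst querying a
# curated table in the lake from Python. Assumes HiveServer2 is reachable and
# the PyHive and pandas packages are installed; hostnames and table names
# are placeholders.
import pandas as pd
from pyhive import hive

conn = hive.Connection(host="hiveserver.example.com", port=10000, username="analyst")

# Pull an aggregated view of a governed table into a DataFrame for analysis.
df = pd.read_sql(
    "SELECT region, SUM(revenue) AS total_revenue "
    "FROM sales.curated_orders GROUP BY region",
    conn,
)

print(df.sort_values("total_revenue", ascending=False).head())
```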
Companies that adopt Hadoop are finding that it takes more time and effort than expected to get good results out of the platform, says Nik Rouda, an analyst with Enterprise Strategy Group.
“Operationalizing data lakes has proven much harder and taken much longer than most enterprises would want,” he states in Zaloni’s press release. “This process typically involves manually cobbling together a large number of disparate tools, and then trying to support that mess going forwards. Zaloni integrates all the essential capabilities and best practices and packages them up, delivering quality and productivity right out of the box.”
Zaloni says it’s getting traction with Bedrock and Mica, which come together in a single offering for the first time with the new Data Lake in a Box offering. The company says bookings and revenues grew by 3x from 2015 to 2016, and it’s hoping the new offering continues that momentum.
One of the Durham, North Carolina company’s customers, Emirates Integrated Telecommunications Company (also known simply as du), will be in San Jose, California this week to present at the Strata + Hadoop World show. The company will discuss its experience with Zaloni’s products. Other prominent Zaloni customers include SCL Health, CDS Global, and Pechanga Resort and Casino.
Related Items:
Dr. Elephant Steps Up to Cure Hadoop Cluster Pains
IBM Taps Zaloni to Ride Herd on Hadoop