
Meet David Flynn, a 2025 BigDATAwire Person to Watch

“The future is already here,” science fiction writer William Gibson once said. “It’s just not evenly distributed yet.” One person looking to bring data storage into the future, and to make it widely distributed, is David Flynn, CEO and founder of Hammerspace and a BigDATAwire Person to Watch for 2025.
Even before founding Hammerspace in 2018, Flynn had an eventful career in IT, including developing solid-state data storage platforms at Fusion-io and working with Linux-based HPC systems. Now, as Hammerspace gains traction, Flynn is eager to build the next generation of distributed file systems and hopefully solve some of the world’s toughest data problems.
Here’s our recent conversation with Flynn:
BigDATAwire: First, congratulations on your selection as a 2025 BigDATAwire Person to Watch! Before Hammerspace, you were the CEO and founder of Fusion-io, which SanDisk bought in 2014. Before that, you were chief architect at Linux Networx, where you designed several of the world’s largest supercomputers. How did those experiences lead you to found Hammerspace in 2018?
David Flynn: It’s a really interesting trajectory, I think, that led to the creation of Hammerspace. Early in my career, I was embedding open-source software like Linux into tiny systems such as TV set-top boxes and corporate smart terminals. Then I went on to design many of the world’s largest supercomputers in the high-performance computing industry, leveraging technologies like Linux clustering, InfiniBand, and RDMA.
Those two extremes – small embedded systems versus massive supercomputers – might not seem to have a ton in common, but they share the need to extract the absolute most performance from the hardware.
This led to the creation of Fusion-io, which pioneered the use of flash for enterprise application acceleration. Until that point, flash was generally confined to embedded systems in consumer electronics, for example the storage in devices like iPods and early cell phones. I saw an opportunity to take that innovation from the consumer electronics world and translate it to the data center, which drove a shift away from mechanical hard drives toward solid-state storage. The issue then became that to deliver the extremely fast performance solid-state drives made possible, data had to be physically distributed across sets of servers or across third-party storage systems.
Ultra-high-performance flash was instrumental in addressing this challenge of decentralized data and in abstracting data from the underlying infrastructure. Most enterprise data today is unstructured, and it’s hard for organizations to find and extract the value within it.
This realization ultimately led to the creation of Hammerspace, with the vision to make all enterprise data globally accessible, useful, and indispensable, completely eliminating data access delays for AI and high-performance computing.
BDW: We’re 20 years into the Big Data boom now, but it feels as though we’re at an inflection point when it comes to storage. What do you see as the main drivers this time around, and how is Hammerspace positioned to capitalize on them?
DF: To really thrive in this next data cycle, we’ve got to fix the broken relationship between data and the infrastructure where it is stored. Enterprises need to think beyond storage and instead consider how to transform data access and management for modern AI environments.
Vendors are all competing to offer the performance and scale needed to support AI workloads. But it’s not just about accelerating data throughput to GPU servers. The core problem is that the data path between external storage and GPU servers is bottlenecked by unnecessary, inefficient hops within the server node and on the network, regardless of which external shared storage is in use.
The solution here, which Hammerspace’s Tier 0 addresses, is to use the local NVMe storage already built into GPU servers to accelerate AI workloads and improve GPU utilization. By leveraging existing infrastructure and built-in Linux capabilities, we remove that bottleneck without adding complexity.
We do this by leveraging intelligence built into the Linux kernel, which lets our customers keep using the storage infrastructure they already have, without proprietary client software or other point solutions. On top of that, we provide global multi-protocol file/object access, data orchestration, data protection, and data services across a global namespace.
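To give a flavor of the Tier 0 idea, here is a minimal Python sketch of staging a hot working set onto a GPU server’s local NVMe before a job runs. The paths, file names, and explicit copy step are hypothetical placeholders invented for illustration, not Hammerspace’s actual mechanism.

```python
import shutil
from pathlib import Path

# Hypothetical mount points: a shared filesystem and the GPU server's
# own NVMe drives ("Tier 0"). Real paths and mechanisms will differ.
SHARED = Path("/mnt/shared/dataset")
TIER0 = Path("/mnt/local-nvme/dataset")

def stage_hot_files(hot_files):
    """Copy the working set onto local NVMe before a training job starts,
    so GPUs read at local-device speed instead of over the network."""
    TIER0.mkdir(parents=True, exist_ok=True)
    for name in hot_files:
        src, dst = SHARED / name, TIER0 / name
        if not dst.exists():          # skip files that are already staged
            shutil.copy2(src, dst)
        yield dst                     # hand the local copy to the data loader

# Usage: point the data loader at the staged local paths.
local_paths = list(stage_hot_files(["shard-000.tar", "shard-001.tar"]))
```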
BDW: You stated at the HPC + AI on Wall Street 2023 event that we were all duped by S3 and object storage to give up the benefits of native access inherent with NFS. Isn’t the fight against S3 and object storage destined to fail, or do you see a resurgence in NFS?
DF: Let’s be clear: it’s not about object or file, nor S3 or NFS. Storage interfaces needed to evolve to achieve scale. S3 came about and became the default for cloud-scale storage for a good reason: older versions of NFS simply couldn’t scale or perform at the levels needed for early HPC and AI workloads.
But that was then. Today, NFSv4.2 with pNFS is a different animal—fully matured, integrated into the Linux kernel, and capable of delivering massive scale and native performance without proprietary clients or complex overhead. In fact, it’s become a standard for organizations that demand high performance and efficient access across large, distributed environments.
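For readers who want to see what “no proprietary clients” means in practice, a stock Linux client can mount an NFSv4.2 export with nothing beyond the in-kernel driver; pNFS layouts are negotiated automatically when the server supports them. The server name and export path in this small Python wrapper are hypothetical:

```python
import subprocess

# A standard Linux client needs no extra software: mounting with
# vers=4.2 is enough, and pNFS layouts are negotiated automatically
# when the server offers them. Host and export path are hypothetical.
subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=4.2",
     "metadata-server.example.com:/export/data", "/mnt/data"],
    check=True,
)

# Sanity check: the kernel records the negotiated version in /proc/mounts
# (look for "vers=4.2" on the /mnt/data line).
mounts = subprocess.run(["grep", "/mnt/data", "/proc/mounts"],
                        capture_output=True, text=True)
print(mounts.stdout)
```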
So this isn’t about picking sides in a file vs. object debate. That framing is outdated. The real breakthrough is enabling both file and object access within a single, standards-based data platform—where data can be orchestrated, accessed natively, and served through whichever interface a given application or AI model requires.
S3 isn’t going away—many apps are written for it. But it’s no longer the only option for scalable data access. With the rise of intelligent data orchestration, Tier 0 storage, and modern file protocols like pNFS, we can now deliver performance and flexibility without forcing a choice between paradigms.
The future isn’t about fighting S3—it’s about transcending the limits of both file and object storage with a unified data layer that speaks both languages natively, and puts the data where it needs to be, when it needs to be there.
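To make the dual-access point concrete, here is a short Python sketch assuming a platform that exposes one namespace as both a POSIX file tree and an S3 bucket. The endpoint, bucket, and paths are invented for illustration, not Hammerspace-specific names:

```python
import boto3  # pip install boto3

# Assumption: a unified platform exposes the same namespace as both a
# POSIX file tree and an S3 bucket. All names below are placeholders.

# File-native access for applications that speak NFS/POSIX:
with open("/mnt/data/models/weights.bin", "rb") as f:
    via_file = f.read()

# Object access for applications written against S3:
s3 = boto3.client("s3", endpoint_url="https://data.example.com")
obj = s3.get_object(Bucket="data", Key="models/weights.bin")
via_object = obj["Body"].read()

assert via_file == via_object  # same bytes through two interfaces
```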
BDW: How do you see the AI revolution of the 2020s impacting the previous decade’s big advance, which was separating compute and storage? Can we afford to bring big GPU compute to the data, or are we destined to go back to moving data to compute?
DF: The separation of compute and storage made sense when bandwidth was cheap, workloads were batch-oriented, and performance wasn’t tied to GPU utilization. But in the AI era, where idle GPUs mean wasted dollars and lost opportunities, that model is starting to crack.
The challenge now isn’t just about where the compute or data lives—it’s about how fast and intelligently you can bridge the two. At Hammerspace, we believe the answer is not to return to old habits, but to evolve beyond rigid infrastructure with a global, intelligent data layer.
We make all data visible and accessible in a global file system—no matter where it physically resides. Whether your application speaks S3, SMB, or NFS (including modern pNFS), the data appears local. And that’s where the magic happens: our metadata-driven orchestration engine can move data with extreme granularity—file by file—to where the compute is, without disrupting access or requiring rewrites.
So the real answer isn’t choosing between moving compute to data or vice versa. The real answer is dynamic, policy-driven orchestration that places data exactly where it needs to be, just in time, across any storage infrastructure, so AI and HPC workloads stay fed, fast, and efficient.
The AI revolution doesn’t undo the separation of compute and storage—it demands we unify them with orchestration that’s smarter than either alone.
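As a rough illustration of what policy-driven, per-file placement means, here is a deliberately simplified Python sketch. Every name and threshold in it is invented, not taken from Hammerspace’s product; the point is only that placement can be decided per file from its metadata rather than per volume:

```python
from dataclasses import dataclass
import time

# Toy illustration only: invented tiers, tags, and thresholds.

@dataclass
class FileMeta:
    path: str
    size_bytes: int
    last_access: float  # epoch seconds
    tag: str            # e.g. "training", "archive"

def place(meta: FileMeta) -> str:
    """Pick the tier a file should live on right now."""
    age = time.time() - meta.last_access
    if meta.tag == "training" and age < 3600:
        return "tier0-local-nvme"   # hot training data stays on the GPU node
    if age < 7 * 24 * 3600:
        return "shared-flash"
    return "object-archive"         # cold data can live behind an S3 interface

sample = FileMeta("/data/shard-000.tar", 2**30, time.time(), "training")
print(sample.path, "->", place(sample))
```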
BDW: What can you tell us about yourself outside of the professional sphere – unique hobbies, favorite places, etc.? Is there anything about you that your colleagues might be surprised to learn?
DF: Outside of work, I spend as much time as I can with my kids and family—usually on skis or dirt bikes. There’s nothing better than getting out on a mountain or a trail and just enjoying the ride. It’s fast, technical, and a little chaotic—pretty much my ideal weekend.
That said, I’ve never really separated work from play in the traditional sense. For me, writing software and inventing new ways to solve tough problems is what I’ve always loved to do. I’ve been building systems since I was a kid, and that curiosity never really went away. Even when I’m off the clock, I’m often deep in code or sketching out the next idea.
People might be surprised to learn that I genuinely enjoy the creative process behind tech—whether that’s low-level system design or rethinking how infrastructure should work in the AI era. Some folks unwind with hobbies. I unwind by solving hard problems.
You can read the rest of our conversations with BigDATAwire People to Watch 2025 honorees here.