Meet David Flynn, a 2025 BigDATAwire Person to Watch


“The future is already here,” science fiction author William Gibson once said. “It’s just not evenly distributed yet.” One person who’s looking to bring data storage into the future and make it widely distributed is David Flynn, the CEO and founder of Hammerspace as well as a BigDATAwire Person to Watch for 2025.

Even before founding Hammerspace in 2018, Flynn had an eventful career in IT, including developing solid-state data storage platforms at Fusion-io and working with Linux-based HPC systems. Now, as Hammerspace gains traction, Flynn is eager to build the next generation of distributed file systems and, hopefully, solve some of the toughest data problems on the planet.

Here’s our recent conversation with Flynn:

BigDATAwire: First, congratulations on your selection as a 2025 BigDATAwire Person to Watch! Before Hammerspace, you were the CEO and founder of Fusion-io, which SanDisk bought in 2014. Before that, you were chief architect at Linux Networx, where you designed several of the world’s largest supercomputers. How did these experiences lead you to found Hammerspace in 2018?

David Flynn: It’s a really interesting trajectory, I think, that led to the creation of Hammerspace. Early in my career, I was embedding open-source software like Linux into tiny systems like TV set-top boxes, corporate smart terminals, and the like. And then I came to design many of the world’s largest supercomputers in the high-performance computing industry, leveraging technologies like Linux clustering, InfiniBand, and RDMA.

These two extremes – small embedded systems versus massive supercomputers – might not seem to have a ton in common, but they share the need to extract the absolute most performance from the hardware.

This led to the creation of Fusion-io, which pioneered the use of flash for enterprise application acceleration. Until that point, flash was typically used in embedded systems and consumer electronics – for example, in devices like iPods and early cell phones. I saw an opportunity to take that innovation from the consumer electronics world and translate it into the data center, which created a shift away from mechanical hard drives toward solid-state storage. The trouble then became that this transition toward solid-state drives demanded extremely fast performance; the data needed to be physically distributed across a set of servers or across third-party storage systems.

(ALPAL-images/Shutterstock)

The advent of ultra-high-performance flash was instrumental in addressing this challenge of decentralized data and in abstracting data from the underlying infrastructure. Most data in enterprises today is unstructured, and it’s hard for those organizations to find and extract the value within it.

This realization ultimately led to the creation of Hammerspace, with the vision to make all enterprise data globally accessible, useful, and indispensable, completely eliminating data access delays for AI and high-performance computing.

BDW: We’re 20 years into the Big Data boom now, but it feels as if we’re at an inflection point when it comes to storage. What do you see as the main drivers this time around, and how is Hammerspace positioned to capitalize on them?

DF: To really thrive in this next data cycle, we’ve got to fix the broken relationship between data and the data infrastructure where it’s stored. Enterprises need to think beyond storage and instead consider how they can transform data access and management in modern AI environments.

Vendors are all competing to deliver the performance and scale needed to support AI workloads. Except it’s not just about accelerating data throughput to GPU servers – the core problem is that data pathways between external storage and GPU servers get bottlenecked by unnecessary and inefficient hops in the data path within the server node and on the network, regardless of the external shared storage in use.

The solution here, which is addressed by Hammerspace’s Tier 0, is to use the local NVMe storage already included inside GPU servers to accelerate AI workloads and improve GPU utilization. By leveraging the existing infrastructure and built-in Linux capabilities, we’re removing that bottleneck without adding complexity.

We do this by leveraging the intelligence built into the Linux kernel, which allows our customers to make use of the storage infrastructure they’re already using, without proprietary client software or other point solutions. That is in addition to providing global multi-protocol file/object access, data orchestration, data protection, and data services across a global namespace.

BDW: You said at the HPC + AI on Wall Street 2023 event that we were all duped by S3 and object storage into giving up the benefits of local access inherent in NFS. Isn’t the battle against S3 and object storage destined to fail, or do you see a resurgence in NFS?

(whiteMocca/Shutterstock)

DF: Let’s be clear – it’s not about object versus file, nor S3 versus NFS. Storage interfaces needed to evolve to achieve scale. S3 came along and became the default for cloud-scale storage for a good reason: older versions of NFS simply couldn’t scale or perform at the levels needed for early HPC and AI workloads.

But that was then. Today, NFSv4.2 with pNFS is a different animal – fully matured, integrated into the Linux kernel, and capable of delivering massive scale and local performance without proprietary clients or complex overhead. In fact, it’s become a standard for organizations that demand extreme performance and efficient access across large, distributed environments.
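As a rough illustration of that point (server name and paths here are hypothetical), a stock Linux client can mount an NFSv4.2 export with nothing but the in-kernel NFS client; pNFS layouts are negotiated automatically when the server supports them:

```
# Hypothetical example: mount an NFSv4.2 export from a standard Linux client.
# No proprietary client software is involved; pNFS is negotiated if available.
sudo mount -t nfs -o vers=4.2 nfs.example.com:/export /mnt/data

# Confirm the negotiated protocol version in the active mount options:
mount | grep /mnt/data
```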

So this isn’t about picking sides in a file vs. object debate. That framing is outdated. The real breakthrough is enabling both file and object access within a single, standards-based data platform – where data can be orchestrated, accessed natively, and served through whichever interface a given application or AI model requires.

S3 isn’t going away – many apps are written for it. But it’s not the only option for scalable data access. With the rise of intelligent data orchestration, Tier 0 storage, and modern file protocols like pNFS, we can now deliver performance and flexibility without forcing a choice between paradigms.

The future isn’t about fighting S3 – it’s about transcending the limits of both file and object storage with a unified data layer that speaks both languages natively and puts the data where it needs to be, when it needs to be there.

BDW: How do you see the AI revolution of the 2020s impacting the previous decade’s big advance, which was separating compute and storage? Can we afford to bring massive GPU compute to the data, or are we destined to go back to moving data to compute?

DF: The separation of compute and storage made sense when bandwidth was cheap, workloads were batch-oriented, and performance wasn’t tied to GPU utilization. But in the AI era, where idle GPUs mean wasted dollars and lost opportunities, that model is starting to crack.

The challenge now isn’t just about where the compute or data lives – it’s about how fast and intelligently you can bridge the two. At Hammerspace, we believe the answer is not to return to old habits, but to evolve beyond rigid infrastructure with a global, intelligent data layer.

We make all data visible and accessible in a global file system – no matter where it physically resides. Whether your application speaks S3, SMB, or NFS (including modern pNFS), the data appears local. And that’s where the magic happens: our metadata-driven orchestration engine can move data with extreme granularity – file by file – to where the compute is, without disrupting access or requiring rewrites.

So the real answer isn’t choosing between moving compute to data or vice versa. The real answer is dynamic, policy-driven orchestration that places data exactly where it needs to be, just in time, across any storage infrastructure, so AI and HPC workloads stay fed, fast, and efficient.

The AI revolution doesn’t undo the separation of compute and storage – it demands that we unify them with orchestration that’s smarter than either alone.
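The metadata-driven, policy-based placement described above can be sketched as a toy rule engine. Everything here – the tier names, the metadata fields, the thresholds – is invented for illustration and is not Hammerspace’s actual logic; it only shows the shape of a file-by-file placement decision.

```python
# Toy sketch of metadata-driven data placement (illustrative only).
from dataclasses import dataclass


@dataclass
class FileMeta:
    """Minimal per-file metadata a placement policy might consult."""
    path: str
    size_bytes: int
    hot: bool  # e.g., recently accessed by a GPU job


def place(meta: FileMeta) -> str:
    """Decide a tier for one file under a simple policy:
    hot files go to local NVMe ("Tier 0"), small warm files to shared
    flash, and everything else to capacity object storage."""
    if meta.hot:
        return "tier0-local-nvme"
    if meta.size_bytes < 1 << 20:  # under 1 MiB
        return "shared-flash"
    return "capacity-object"


files = [
    FileMeta("/data/model.ckpt", 5 << 30, hot=True),
    FileMeta("/data/config.yaml", 4096, hot=False),
    FileMeta("/data/archive.tar", 80 << 30, hot=False),
]

# Evaluate the policy file by file, the way an orchestration engine would.
placement = {f.path: place(f) for f in files}
```

A real orchestrator would re-evaluate such rules continuously as access patterns change, moving data without interrupting in-flight reads; the point of the sketch is only that placement is decided per file from metadata, not per volume.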

BDW: What can you tell us about yourself outside of the professional sphere – unique hobbies, favorite places, etc.? Is there anything about you that your colleagues might be surprised to learn?

DF: Outside of work, I spend as much time as I can with my kids and family – usually on skis or dirt bikes. There’s nothing better than getting out on a mountain or a trail and just enjoying the ride. It’s fast, technical, and a little chaotic – pretty much my ideal weekend.

That said, I’ve never really separated work from play in the traditional sense. For me, writing software and inventing new ways to solve tough problems is what I’ve always loved to do. I’ve been building systems since I was a kid, and that curiosity never really went away. Even when I’m off the clock, I’m often deep in code or sketching out the next idea.

People might be surprised to learn that I genuinely enjoy the creative process behind tech – whether that’s low-level system design or rethinking how infrastructure should work in the AI era. Some folks unwind with hobbies. I unwind by solving hard problems.

You can read the rest of our conversations with BigDATAwire People to Watch 2025 honorees here.
