
Tapping into idle assets through distributed networks
Many assets sit idle or underutilized. Distributed networks can tap these resources and fundamentally alter existing markets.
The demand for computation and storage will rise rapidly as 3D digital media & image/video generation become easier. We’re seeing this happen in real time, and it’s well understood that today’s data centers don’t have nearly enough capacity to power our increasingly digital world(s).
A more efficient network of distributed nodes that taps into underutilized resources increasingly feels inevitable. Today, many regular users (i.e. you & I) have latent GPUs that can be tapped; Render is one project attempting to tap this particular idle asset. Anyone with a laptop and a graphics card can rent the card out for compute jobs, which is a more efficient way to handle the lumpy nature of supply & demand in this space.
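To make the supply/demand matching concrete, here is a toy sketch of how such a marketplace might assign jobs to idle GPUs. This is purely illustrative (the node/job fields and the greedy cheapest-first rule are assumptions for the example, not how Render or any real network actually works):

```python
# Hypothetical sketch: a minimal marketplace that matches compute jobs
# to idle GPUs, assigning each job to the cheapest node with enough memory.
from dataclasses import dataclass

@dataclass
class Node:
    owner: str
    gpu_mem_gb: int      # idle GPU memory being offered
    price_per_hr: float  # asking price

@dataclass
class Job:
    name: str
    mem_needed_gb: int

def match_jobs(jobs, nodes):
    """Greedily assign each job to the cheapest node that can fit it."""
    available = sorted(nodes, key=lambda n: n.price_per_hr)
    assignments = {}
    for job in jobs:
        for node in available:
            if node.gpu_mem_gb >= job.mem_needed_gb:
                assignments[job.name] = node.owner
                available.remove(node)  # node is now busy
                break
    return assignments

nodes = [Node("alice", 24, 0.40), Node("bob", 8, 0.15), Node("carol", 16, 0.25)]
jobs = [Job("render-scene", 12), Job("upscale-video", 6)]
print(match_jobs(jobs, nodes))  # {'render-scene': 'carol', 'upscale-video': 'bob'}
```

The point of the toy is the lumpiness: supply arrives in odd-sized chunks (whoever happens to be online, with whatever card they own), and a matching layer is what turns that into something a buyer can actually use.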
This general principle — leveraging distributed nodes to improve performance (there are many dimensions to weigh here) — is more a question of when than if.
The two most active verticals here are distributed compute & storage. Plenty of teams are trying to crack this nut, as the market for cloud compute & data storage is clear. The most glaring challenge today is on the demand side: each existing project struggles to generate meaningful fees, which leads one to wonder what the underlying obstacles are. One argument is that decision-makers were hyper-focused on “growth over everything” the past few years, pushing cost structure to the background. Another is that existing offerings are single-service (compute OR storage) when in reality, this piecemeal approach won’t drive behavior change away from AWS/Azure/etc.
Our thesis isn’t limited to compute & storage, though these do feel most ripe for disruption.