Most storage systems ask you to trust a familiar sentence: “Your file is saved.” It sounds simple because the complexity is hidden behind a company, a contract, and a support ticket. But in Web3, the sentence needs a different shape. If no single operator is in charge, then “saved” cannot be a private promise. It has to become a verifiable fact that other people can check without asking permission.
Walrus is a decentralized storage protocol designed for large, unstructured content called blobs. A blob is simply a file or data object that is not stored like rows in a database table. Walrus supports storing blobs, reading them back, and proving they are still available later. It is built to keep content retrievable even if some storage nodes fail or behave maliciously. This kind of failure, where participants may lie or break in unpredictable ways, is often called a Byzantine fault. Walrus also uses the Sui blockchain for coordination and payments, while keeping the blob content itself off-chain. Only the blob’s metadata is exposed to Sui or its validators.
If you are a developer, a client builder, or someone building DeFi systems that need large amounts of data, the first thing to notice is what Walrus refuses to do. It does not try to store large files inside blockchain objects. Traditional on-chain storage is expensive because it relies on broad replication. Walrus instead treats the chain as a coordination layer. The blob content lives on Walrus storage nodes and optional cache infrastructure, while the chain records the small pieces of truth that matter: ownership of storage capacity, timestamps of responsibility, and events that attest availability.
To understand how Walrus does this, it helps to learn the shape of a blob inside the protocol. When a user wants to store a blob, Walrus does not simply copy it to one node. It erasure-encodes it. Erasure coding is a method of turning one piece of data into many pieces with redundancy, so the original can be reconstructed even if some pieces are missing. Walrus uses a construction called RedStuff, based on Reed–Solomon codes. In plain terms, Reed–Solomon codes are a classic family of error-correcting codes that can recover missing pieces as long as enough correct pieces remain. Walrus documents that this encoding expands the stored size of a blob by a predictable factor of roughly 4.5–5×. The point is to pay a known price for resilience, without needing full copies everywhere.
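To make the overhead arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The blob size and the replication comparison are illustrative assumptions; only the 4.5–5× range comes from Walrus's own description.

```python
# Back-of-the-envelope storage overhead for an erasure-coded blob.
# The expansion factors below are the ~4.5-5x range Walrus documents;
# the blob size and the 10x replication baseline are made-up inputs.

def encoded_size_bytes(blob_size_bytes: int, expansion_factor: float) -> int:
    """Total bytes stored across all shards after erasure encoding."""
    return int(blob_size_bytes * expansion_factor)

blob_size = 1 * 1024**3          # a hypothetical 1 GiB blob
for factor in (4.5, 5.0):
    total = encoded_size_bytes(blob_size, factor)
    print(f"expansion {factor}x -> {total / 1024**3:.2f} GiB stored in total")

# Compare with naive full replication across, say, 10 nodes:
print(f"full 10x replication -> {10 * blob_size / 1024**3:.2f} GiB stored in total")
```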
After encoding, the blob becomes many smaller parts. Walrus groups encoded symbols into slivers. Slivers are then assigned to shards. A shard is a logical bucket of responsibility. Storage nodes are assigned shards during a storage epoch, and those nodes store and serve the slivers that belong to their assigned shards. An epoch is simply a time window during which committee membership and shard assignments remain stable. On Mainnet, storage epochs last two weeks. This is a practical engineering choice. It gives the system a stable map of who is responsible right now, while still allowing the committee to change over time.
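The mapping below is a toy picture of that structure. The shard count, sliver count, and round-robin assignment rule are hypothetical, meant only to show how slivers, shards, and an epoch committee relate.

```python
# A toy picture of slivers, shards, and epochs. The assignment rule
# (sliver index modulo shard count) and all the numbers are illustrative
# assumptions, not the protocol's real mapping.

NUM_SHARDS = 8          # hypothetical shard count
NUM_SLIVERS = 20        # hypothetical sliver count for one encoded blob

# Each shard is a bucket of responsibility; the storage node assigned
# that shard during the current epoch stores and serves its slivers.
shard_to_slivers = {shard: [] for shard in range(NUM_SHARDS)}
for sliver_index in range(NUM_SLIVERS):
    shard_to_slivers[sliver_index % NUM_SHARDS].append(sliver_index)

# Hypothetical epoch assignment: shard -> node operating it this epoch.
epoch_committee = {shard: f"node-{shard % 4}" for shard in range(NUM_SHARDS)}

for shard, slivers in shard_to_slivers.items():
    print(f"shard {shard}: {epoch_committee[shard]} holds slivers {slivers}")
```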
The word “committee” here is important. Walrus is operated by a committee of storage nodes that evolves between epochs. Committee membership is tied to delegated proof-of-stake using Walrus’s native token, WAL. WAL can be delegated to storage nodes, and nodes with high stake become part of the epoch committee. WAL is also used to pay for storage, and at the end of each epoch, rewards for selecting storage nodes and for storing and serving blobs are distributed to storage nodes and to the people who staked with them. Walrus also defines a subdivision called FROST, where 1 WAL equals 1 billion FROST. This makes it easier to express small payments and precise accounting without awkward decimals.
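The WAL-to-FROST relationship is just fixed-point arithmetic, sketched below with helper names of my own; they are not part of any Walrus SDK.

```python
# FROST is the integer subunit of WAL: 1 WAL = 1_000_000_000 FROST.
# These helper names are illustrative, not a real Walrus API.

FROST_PER_WAL = 1_000_000_000

def wal_to_frost(wal: float) -> int:
    """Express an amount of WAL as an integer number of FROST."""
    return round(wal * FROST_PER_WAL)

def frost_to_wal(frost: int) -> float:
    """Convert back for display purposes."""
    return frost / FROST_PER_WAL

# A payment of 0.000001 WAL is a clean integer in FROST:
payment = wal_to_frost(0.000001)
print(payment)                 # 1000
print(frost_to_wal(payment))   # 1e-06
```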
At this point, a client builder might ask the question that matters most: if the blob is off-chain, how do we know it is really stored and will remain retrievable?
Walrus answers with two linked ideas: an identity for the blob, and a public moment when responsibility begins.
The identity is the blob ID. Walrus computes the blob ID as an authenticator of the set of shard data and metadata. It hashes the sliver representation in each shard, uses those hashes as the leaves of a Merkle tree, and uses the Merkle tree root as the blob hash. A Merkle tree is a structure that lets you commit to many pieces of data under one root hash, while still allowing later verification that a piece belongs to that commitment. In practical terms, the blob ID acts like a seal. If you receive slivers from a node or from a cache, you can check that what you received matches the blob ID’s authenticated structure. This lets clients verify integrity without trusting any single server.
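To see the shape of the commitment, here is a minimal Merkle-root sketch in Python. It hashes hypothetical slivers, folds the hashes into a tree, and returns the root. It mirrors the idea only; Walrus's real construction covers per-shard sliver representations and metadata with its own layout.

```python
import hashlib

# A minimal Merkle-root construction over sliver hashes, illustrating the
# blob ID as a single root commitment. The slivers and tree rules here are
# simplified stand-ins, not Walrus's exact hashing or metadata layout.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of leaf hashes into a single root hash."""
    level = leaves
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical slivers, one per shard:
slivers = [f"sliver-{i}".encode() for i in range(8)]
blob_id = merkle_root([h(s) for s in slivers])
print(blob_id.hex())

# Tampering with any sliver changes the root, so a client holding the
# blob ID can detect a node or cache that serves modified data.
slivers[3] = b"tampered"
assert merkle_root([h(s) for s in slivers]) != blob_id
```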
The public moment of responsibility is the Point of Availability, or PoA. For each stored blob ID, Walrus defines PoA as the point when the system takes responsibility for maintaining that blob’s availability. PoA is not a vague statement. It is observable through an event on Sui, along with an availability period that specifies how long the system maintains the blob after PoA. Before PoA, the uploader is responsible for ensuring the blob is actually present and properly uploaded. After PoA, Walrus is responsible for the full availability period. This matters for DeFi and for applications with audits, because it turns “it’s stored” into something checkable from the chain.
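Because PoA is anchored to an on-chain event, checking it reduces to simple epoch arithmetic. The event fields and epoch numbers in the sketch below are assumptions for illustration; the real availability event on Sui has its own schema.

```python
from dataclasses import dataclass

# An illustrative availability check. The field names are assumptions for
# this sketch, not the actual schema of the Sui availability event.

@dataclass
class AvailabilityEvent:
    blob_id: str
    poa_epoch: int       # epoch in which the availability event was emitted
    end_epoch: int       # last epoch of the availability period

def is_within_availability_period(event: AvailabilityEvent, current_epoch: int) -> bool:
    return event.poa_epoch <= current_epoch <= event.end_epoch

event = AvailabilityEvent(blob_id="example-blob", poa_epoch=12, end_epoch=38)
print(is_within_availability_period(event, current_epoch=20))  # True
print(is_within_availability_period(event, current_epoch=40))  # False
```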
The path that creates PoA is built to be verifiable rather than taken on faith. A user first acquires storage space on-chain. In Walrus, storage space is represented as a resource on Sui. That resource can be owned, split, merged, and transferred. This is not only convenience. It creates a market-like structure around capacity, which matters when storage has real costs and real lifetimes.
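Here is a rough model of a storage resource as an ownable object with split and merge operations. The field names and rules are simplified assumptions, not the interface of the actual Move contracts on Sui.

```python
from dataclasses import dataclass

# A simplified, illustrative model of a storage resource: owned capacity
# over a span of epochs that can be split and merged. The real Sui objects
# have their own fields and rules; this only sketches the shape.

@dataclass
class StorageResource:
    owner: str
    size_bytes: int
    start_epoch: int
    end_epoch: int

def split(res: StorageResource, size_bytes: int) -> tuple[StorageResource, StorageResource]:
    """Carve a smaller capacity out of a larger one over the same epochs."""
    assert 0 < size_bytes < res.size_bytes
    part = StorageResource(res.owner, size_bytes, res.start_epoch, res.end_epoch)
    rest = StorageResource(res.owner, res.size_bytes - size_bytes, res.start_epoch, res.end_epoch)
    return part, rest

def merge(a: StorageResource, b: StorageResource) -> StorageResource:
    """Merge two resources with identical owner and epoch span."""
    assert (a.owner, a.start_epoch, a.end_epoch) == (b.owner, b.start_epoch, b.end_epoch)
    return StorageResource(a.owner, a.size_bytes + b.size_bytes, a.start_epoch, a.end_epoch)

whole = StorageResource("alice", 10 * 1024**3, start_epoch=10, end_epoch=36)
part, rest = split(whole, 2 * 1024**3)
print(part.size_bytes, rest.size_bytes)
print(merge(part, rest).size_bytes)  # back to the original capacity
```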
Once the user has the storage resource, they encode the blob and compute its blob ID. They then update the on-chain storage resource to register the blob ID with the desired size and lifetime. This emits an event that storage nodes listen for. After that, the user sends blob metadata to all storage nodes and sends each sliver to the node that manages the corresponding shard.
When a storage node receives a sliver, it checks the sliver against the blob ID. It also checks that there is an on-chain blob resource authorizing storage for that blob ID. If everything matches, the node signs a statement that it holds the sliver and returns the signature to the user. The user collects enough signatures, aggregates them into an availability certificate, and submits that certificate on-chain. When the contract verifies the certificate against the current committee, it emits the availability event for the blob ID. That event marks PoA. At that moment, the system is publicly on record as responsible for availability for the specified period.
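Put very loosely in code form: each node signs only after its local checks pass, and the contract emits the availability event only when the certificate carries enough signatures. The quorum value, the string "signatures", and the function names below are placeholders, not the protocol's real parameters or cryptography.

```python
# A loose walk through the store flow: nodes verify and sign, the client
# aggregates signatures into an availability certificate, and the contract
# accepts it only with enough signers. Everything here is a placeholder.

NUM_SHARDS = 10
QUORUM = 7  # hypothetical threshold; the real rule follows BFT assumptions

def node_signs(node_id: int, blob_id: str, sliver_ok: bool, registered: set[str]) -> str | None:
    """A node signs only if the sliver verifies against the blob ID and an
    on-chain blob resource authorizes storage for that blob ID."""
    if blob_id in registered and sliver_ok:
        return f"sig:{node_id}:{blob_id}"
    return None

def contract_accepts(certificate: list[str], quorum: int) -> bool:
    """The contract emits the availability event (PoA) only for a
    certificate carrying at least a quorum of valid node signatures."""
    return len(certificate) >= quorum

registered_blob_ids = {"blob-42"}   # registration already happened on-chain
signatures = []
for node in range(NUM_SHARDS):
    sig = node_signs(node, "blob-42", sliver_ok=(node != 3), registered=registered_blob_ids)
    if sig is not None:
        signatures.append(sig)

print(len(signatures), "signatures collected")
print("PoA event emitted:", contract_accepts(signatures, QUORUM))
```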

If you are building client software, the read side is just as important as the write side. Walrus allows reads either directly from storage nodes or through optional infrastructure like aggregators and caches. An aggregator is a client that reconstructs complete blobs from slivers and serves them over HTTP. A cache is an aggregator with additional caching functionality to reduce latency and reduce load on storage nodes. A publisher is the write-side counterpart: a service that accepts blobs over HTTP and runs the store protocol on the user’s behalf. These components are optional because end users can reconstruct blobs directly from storage nodes or run a local aggregator. Walrus emphasizes that caches and publishers are not trusted system components. They may deviate from the protocol. What keeps the system honest is that reads can be verified against the blob ID commitments.
When reading directly, a client first gets the metadata for the blob ID from any storage node and authenticates it using the blob ID. Then the client requests slivers from storage nodes for the shards corresponding to that blob ID and waits for enough responses to reconstruct the blob. The system is designed so reconstruction is possible even when some nodes are unavailable or malicious, assuming the protocol’s Byzantine tolerance conditions hold.
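As a rough picture of that read loop: ask nodes for their slivers, keep whatever verifies against the blob ID, and stop once enough slivers have arrived to reconstruct. The reconstruction threshold and the simulated node failures below are illustrative numbers, not the protocol's real parameters.

```python
import random

# An illustrative read loop: request slivers from the nodes holding the
# relevant shards, keep the ones that verify, and stop once enough have
# arrived to reconstruct. All numbers here are made up.

NUM_SHARDS = 10
RECONSTRUCTION_THRESHOLD = 4   # hypothetical "enough slivers" value

def fetch_sliver(shard: int) -> bytes | None:
    """Simulate a node that is sometimes unavailable or fails verification."""
    if random.random() < 0.3:
        return None
    return f"sliver-{shard}".encode()

def read_blob(blob_id: str) -> bytes | None:
    collected: list[bytes] = []
    for shard in range(NUM_SHARDS):
        sliver = fetch_sliver(shard)
        if sliver is not None:           # in reality: verified against blob_id
            collected.append(sliver)
        if len(collected) >= RECONSTRUCTION_THRESHOLD:
            return b"|".join(collected)  # stand-in for erasure decoding
    return None                          # too many failures to reconstruct

print(read_blob("blob-42"))
```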
Walrus also describes two levels of consistency checks for reads: default and strict. The default check verifies only the portion read, balancing security and performance. The strict check is stronger. It decodes the blob, then fully re-encodes it, recomputes all sliver hashes and the blob ID, and verifies the computed blob ID matches the requested blob ID. The reason strict verification exists is simple: clients are untrusted, and incorrect encoding can create edge cases where some sets of slivers decode to different results. Strict verification removes that ambiguity when stronger guarantees are required.
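In sketch form, the strict check is "decode, re-encode, recompute the blob ID, and compare." The encode, decode, and blob ID functions below are trivial stand-ins; only the control flow mirrors the description above.

```python
import hashlib

# The strict consistency check in outline: decode the blob, re-encode it,
# recompute the sliver hashes and blob ID, and compare with the requested ID.
# encode()/decode()/compute_blob_id() are stand-ins; the flow is the point.

def encode(blob: bytes, num_slivers: int = 8) -> list[bytes]:
    """Stand-in for erasure encoding: chunk the blob into slivers."""
    step = max(1, -(-len(blob) // num_slivers))  # ceiling division
    return [blob[i:i + step] for i in range(0, len(blob), step)]

def decode(slivers: list[bytes]) -> bytes:
    """Stand-in for erasure decoding."""
    return b"".join(slivers)

def compute_blob_id(slivers: list[bytes]) -> str:
    """Stand-in for the Merkle-root blob ID over sliver hashes."""
    digest = hashlib.sha256()
    for sliver in slivers:
        digest.update(hashlib.sha256(sliver).digest())
    return digest.hexdigest()

def strict_read(slivers: list[bytes], requested_blob_id: str) -> bytes | None:
    blob = decode(slivers)                        # 1. decode what was fetched
    recomputed = compute_blob_id(encode(blob))    # 2. re-encode and recompute the ID
    if recomputed != requested_blob_id:           # 3. reject on any mismatch
        return None
    return blob

original = b"audit pack, risk model, or other large artifact"
slivers = encode(original)
blob_id = compute_blob_id(slivers)
print(strict_read(slivers, blob_id) == original)   # True
print(strict_read(encode(b"tampered"), blob_id))   # None
```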
Walrus also has a clear way of handling incorrectly encoded blobs after PoA. If storage nodes later detect that a blob was inconsistently encoded and cannot be reconstructed consistently, they can produce an inconsistency proof and upload an inconsistency certificate on-chain. After an inconsistent blob event is emitted, reads return None for that blob ID. This may sound harsh, but it preserves a critical property: a blob ID should not silently mean different content to different readers. In DeFi terms, it is better to have a clean “does not resolve” than a world where evidence changes depending on who fetched it.
All of this is coordinated through Sui smart contracts. Walrus describes a system object holding committee information, total available space, and price per unit of storage. Users purchase storage for a duration, storage funds are allocated across epochs, and nodes are paid according to performance, with governance and resource management expressed on-chain. The chain is not carrying the blob content, but it is carrying the lifecycle rules that make availability legible.
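Here the chain's role is bookkeeping that anyone can recompute. A toy version of that accounting follows; the price, the units, and the flat per-epoch allocation rule are my own assumptions, not values from the Walrus contracts.

```python
# A toy version of the on-chain bookkeeping: a price per unit of storage per
# epoch, a purchase for a duration, and funds spread across epochs so nodes
# can be paid per epoch. All constants and rules here are illustrative.

PRICE_FROST_PER_MIB_PER_EPOCH = 50_000   # made-up price

def storage_cost_frost(size_mib: int, num_epochs: int) -> int:
    return size_mib * num_epochs * PRICE_FROST_PER_MIB_PER_EPOCH

def allocate_across_epochs(total_frost: int, num_epochs: int) -> list[int]:
    """Spread the storage funds across epochs, one slice per epoch."""
    base, remainder = divmod(total_frost, num_epochs)
    return [base + (1 if i < remainder else 0) for i in range(num_epochs)]

cost = storage_cost_frost(size_mib=512, num_epochs=26)   # ~1 year of 2-week epochs
print(cost, "FROST total")
print(allocate_across_epochs(cost, 26)[:3], "... per-epoch allocation")
```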
Walrus Mainnet was announced as live on March 27, 2025, described as operated by a decentralized network of over 100 storage nodes, with Epoch 1 beginning on March 25, 2025. Alongside stability, the announcement described practical features like improved expiry-time handling in the CLI, the shift of RedStuff’s underlying coding to Reed–Solomon codes, TLS handling for storage nodes to support publicly trusted certificates, and JWT authentication options for publishers to manage real operating costs. These details matter to developers because they show Walrus is not only a paper design. It is shaped around operational reality: monitoring, authentication for paid services, and web compatibility.
If you are a client builder, Walrus is mostly about verifiability at the edges. You can use HTTP delivery without surrendering correctness, because the blob ID and metadata let you verify what you received. If you are a developer, Walrus is a way to store large artifacts and still have smart contracts reason about their availability and duration using on-chain events and objects. If you are building DeFi, Walrus is a storage substrate for the heavy things DeFi often needs but cannot practically put on-chain: audit packs, risk models, historical archives, ZK proofs, and other large evidence that should remain retrievable and verifiable over time.
Walrus is not trying to replace the web’s delivery infrastructure or become a full execution layer. It is trying to make one promise well: large data can live off-chain, but the truth about that data’s identity and availability can still be checked. In a world where “stored” is usually a private claim, that shift alone is a meaningful piece of engineering.



