$ARB is the quiet workhorse of Ethereum scaling, built to make using DeFi feel less like paying tolls on every click. The current price is around $0.20, while its ATH is about $2.39. Its fundamentals lean on being a leading Ethereum Layer-2 rollup with deep liquidity, busy apps, and a growing ecosystem that keeps pulling users back for cheaper, faster transactions.
$ADA moves like a patient builder, choosing structure over speed and aiming for longevity across cycles. The current price is around $0.38, and its ATH sits near $3.09. Fundamentally, Cardano is proof-of-stake at its core, with a research-driven approach, strong staking culture, and a steady roadmap focused on scalability and governance that doesn’t try to win headlines every week.
$SUI feels designed for the next wave of consumer crypto: fast, responsive, and built like an app platform first. The current price is around $1.46, with an ATH around $5.35. Its fundamentals come from a high-throughput Layer-1 architecture and the Move language, enabling parallel execution that can suit games, social, and high-activity apps where speed and user experience actually decide who wins. #altcoins #HiddenGems
Split, Merge, Transfer: Why Walrus Treats Storage Like a Composable Building Block
Storage is usually sold as a room with fixed walls. You pay for a size. You accept the shape. When your needs change, you move to a bigger room or you start throwing things away. This model is familiar, but it is also rigid. It assumes one owner, one account, one simple lifecycle, and one provider who quietly enforces the rules.

Walrus is built for a different environment. It is a decentralized storage protocol designed to store and retrieve large, unstructured blobs, meaning files or data objects that are not stored as rows in a database table. Walrus stores blob contents off-chain on decentralized storage nodes, and it uses the Sui blockchain for coordination, payments, and system orchestration. Walrus is explicit that only metadata is exposed to Sui or its validators; the blob content remains off-chain.

This boundary shapes everything. If the chain is coordinating resources and responsibility, then "storage" must be described in a way that is clear, transferable, and verifiable. That is why Walrus represents storage space as a resource on Sui that can be owned, split, merged, and transferred. These three actions sound like simple mechanics, but in a protocol setting they become a way to manage real-world variability. They let users and applications match storage capacity to the size, ownership, and lifetime of the blobs they want to store, without depending on private agreements or manual coordination.
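To make the resource model concrete, here is a minimal Python sketch of capacity as an owned object supporting those three verbs. It is an illustration only: Walrus implements storage resources as Sui Move objects, and the class, its field names, and the assumption that merged resources must share an expiry are all inventions of this sketch.

```python
from dataclasses import dataclass

# Illustrative model only: Walrus implements storage resources as Sui Move
# objects; the fields and methods here are invented for the sketch.

@dataclass
class StorageResource:
    owner: str          # account that controls the object
    size_bytes: int     # capacity this object represents
    end_epoch: int      # epoch at which the capacity expires

    def split(self, size_bytes: int) -> "StorageResource":
        """Carve a smaller resource out of this one (same owner, same expiry)."""
        assert 0 < size_bytes < self.size_bytes, "both halves must be non-empty"
        self.size_bytes -= size_bytes
        return StorageResource(self.owner, size_bytes, self.end_epoch)

    def merge(self, other: "StorageResource") -> None:
        """Combine two resources; assumed here to require matching owner/expiry."""
        assert self.owner == other.owner and self.end_epoch == other.end_epoch
        self.size_bytes += other.size_bytes

    def transfer(self, new_owner: str) -> None:
        """Move ownership on-chain; capacity and expiry are unchanged."""
        self.owner = new_owner

# One large purchase reshaped to fit different jobs:
pool = StorageResource("team-wallet", size_bytes=10_000_000, end_epoch=52)
small = pool.split(1_000_000)       # fits a ~1 MB blob
small.transfer("user-wallet")       # hand capacity to an end user
```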
To understand why this matters, it helps to first remember what Sui is doing for Walrus. Sui smart contracts coordinate Walrus storage operations, resource lifetimes, governance for shard assignments, and payment flows. Users acquire storage resources and later attach them to blob IDs. Storage nodes watch for on-chain events to know what uploads are authorized. When enough storage node signatures are collected and submitted as an availability certificate, Sui emits an availability event. That event marks the Point of Availability, the moment when Walrus takes responsibility for maintaining availability for the availability period. These are protocol facts, not informal promises. And storage resources are one of the main ways those facts are expressed.

Split: When one allocation must become many responsibilities

Splitting a storage resource means taking a single capacity object and dividing it into smaller capacity objects. This has obvious convenience value, but the deeper reason is that real data rarely arrives as one neat package. A builder might be storing many blobs: images, videos, training data segments, model artifacts, web assets for a site, or archival chunks. Each blob may have a different size and a different intended duration. If storage could not be split, the builder would have to purchase separate storage allocations for each blob, or allocate a single large resource to one blob and waste the remainder. Splitting makes it possible to fit the resource to the job.

Splitting also matters for governance and accounting. Walrus ties storage to time. A blob has an availability period after PoA, and users can extend availability by providing additional storage resources with longer expiry. If a single storage resource could not be split, it would be hard to manage separate renewal schedules. One part of your application might need long-lived data, while another part might need short-lived data that is updated frequently. Splitting allows those schedules to diverge cleanly.

In many systems, "separating budgets" becomes a social process. Teams create internal rules and spreadsheets. In Walrus, splitting can make this separation a protocol-visible act. One account can hold multiple smaller storage objects, each attached to different blob IDs and different lifetimes. This does not remove the need for planning, but it gives planning an object-level shape the protocol can enforce and applications can reason about.

Merge: When many small pieces must become one usable whole

Merging is the inverse operation: combining multiple storage resources into one larger resource. In practice, fragmentation is a common problem. You might acquire storage in pieces over time. You might receive transfers from different sources. You might split resources earlier to fit small blobs, then later need to store a large blob that requires more capacity than any single remaining piece.

Without merging, fragmentation becomes waste. You could have enough total capacity, but not in a form that can be assigned to a single blob. Merging solves that by letting you consolidate capacity into a resource large enough for the blob you want to store.

Merging also helps with lifecycle management. Suppose you want to create one long-lived "archive resource" that will be used to extend availability for a set of foundational blobs. You may collect capacity from different operations, different accounts, or different time periods.
Merging allows you to consolidate those pieces into a single resource object that can be managed, transferred, or attached as a unit.

A subtle point here is that merging supports composability for applications. A dApp could choose to accept storage resources from users in various sizes, then merge them into a unified pool it uses to maintain the availability of shared assets. Another dApp might do the opposite: maintain a large storage object, then split and distribute portions to users as part of a workflow. The ability to move between "many small" and "one large" without leaving the protocol is what makes the resource model flexible.

Transfer: When ownership must move without breaking verification

Transfer is where the storage resource model becomes truly social. Transfer means moving a storage resource object from one owner to another. In familiar Web2 storage, transfer is usually informal: share credentials, add a collaborator, or send an invoice and hope the provider updates permissions correctly. In Walrus, transfer is explicit and on-chain. Ownership changes are recorded as part of the system's public state.

Walrus mentions that users can acquire storage resources either by buying them from the Walrus system object or through a secondary market. Transfer is the mechanism that makes that possible. But the importance is broader than market language. Transfer enables real application patterns that do not require a single custodian. Consider a few grounded scenarios that become simpler when storage is transferable:

- A team is building an application with a separate operational wallet for paying storage. They can transfer storage resources to the account that will attach them to blob IDs and manage renewals.
- An app wants to sponsor early users by giving them storage capacity for their first uploads. Instead of holding users inside a centralized publishing service, the app can transfer storage resources to users, and the users can store blobs directly under their own ownership.
- A creator wants to publish media for a limited time and later hand over the responsibility of keeping it available to a community or DAO. Transfer allows that shift in responsibility to be visible and verifiable, without re-uploading content.
- A service provider (or publisher) may help users store blobs, but users may still want to retain ownership of the storage resources that back those blobs. Transfer supports a clean separation between "who performs the operational steps" and "who owns the capacity."

In all these cases, transfer is not merely about convenience. It is about aligning responsibility with ownership in a way that the protocol can observe.

How composable resources connect to Walrus's broader lifecycle

Split, merge, and transfer make the most sense when you place them inside the full Walrus lifecycle. Walrus uses Sui smart contracts to coordinate storage operations, resource lifetimes, and payments. Storage nodes and clients rely on on-chain events to coordinate what happens off-chain. A user acquires storage resources on-chain. A user prepares a blob off-chain (including erasure coding and computing a blob ID) and then registers the blob ID with a storage resource on-chain, emitting an event that storage nodes listen for. The user uploads slivers to storage nodes. Nodes verify and sign receipts. The user submits an availability certificate on-chain. When verified, Sui emits an availability event that marks the Point of Availability.
After PoA, Walrus is responsible for maintaining availability for the availability period. Now notice what split/merge/transfer allow you to do inside this flow. Before writing, you can reshape storage so that each blob gets a properly sized resource. During writing, you can ensure the owner of the resource is the entity that should be accountable for it, whether that is a user or an application. After PoA, you can manage long-lived availability by attaching additional storage resources to extend expiry. If you previously split resources into many pieces, you might merge them to create one extension resource. If responsibility should move to a new owner, you can transfer resources that will be used for renewal. In other words, composability is how the resource model stays human. It reflects that people and projects do not remain static across time. Teams change. Budgets change. Data sizes change. And storage obligations must be able to move with those changes.
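The write flow above can be compressed into a few lines of Python. This is a hypothetical sketch: the event names echo the prose, a two-thirds signature weight is assumed for the availability certificate, and every class and helper here is invented for illustration rather than taken from Walrus's contracts.

```python
from dataclasses import dataclass, field

@dataclass
class Chain:
    """Stand-in for the on-chain event log that storage nodes watch."""
    events: list = field(default_factory=list)

    def emit(self, name: str, blob_id: str) -> None:
        self.events.append((name, blob_id))

def write_blob(chain: Chain, blob_id: str, node_signatures: set[str],
               committee: set[str]) -> bool:
    # 1. Register the blob ID against a storage resource. The emitted event
    #    acts as the authorization signal storage nodes listen for.
    chain.emit("BlobRegistered", blob_id)

    # 2. Off-chain: slivers are uploaded; nodes verify them and sign receipts.
    #    Here we only check the aggregated certificate's weight (assumed 2/3).
    if 3 * len(node_signatures & committee) < 2 * len(committee):
        return False  # not enough signatures for an availability certificate

    # 3. The verified certificate makes the chain emit the availability
    #    event: the Point of Availability (PoA).
    chain.emit("BlobCertified", blob_id)
    return True

chain = Chain()
ok = write_blob(chain, "0xabc...", {"n1", "n2", "n3"}, {"n1", "n2", "n3", "n4"})
print(ok, chain.events)  # True, registration then certification (PoA)
```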
Why this matters for decentralization without pretending complexity disappears

Walrus supports interaction through CLI tools, SDKs, and HTTP technologies. It supports optional infrastructure like publishers, aggregators, and caches. Those layers exist because decentralization should not require every user to become a protocol expert. But when optional infrastructure exists, ownership becomes an important anchor. If an uploader uses a publisher, who owns the storage resource? If a cache serves content, who is responsible for keeping it available for the promised period? The resource model, combined with on-chain events like PoA and availability extensions, helps keep these questions answerable.

Walrus also emphasizes that it does not reimplement a CDN and it does not reimplement a full smart contracts platform. It relies on Sui for coordination and uses traditional Web2 caching and delivery patterns where appropriate. In that design, split, merge, and transfer are part of making storage composable enough to fit diverse application needs, without forcing the chain to store the bytes.

A calm summary

Split, merge, and transfer are not decorative features. They are a way to treat storage as a modular resource rather than a fixed subscription. Splitting helps match capacity to many blobs and many lifetimes. Merging helps undo fragmentation and support large or consolidated commitments. Transfer lets responsibility move between owners in a visible, verifiable way, supporting real application workflows where different parties store, renew, and maintain data over time.

In a decentralized environment, these simple verbs reduce the need for private coordination. They make storage a thing that can be shaped, not just consumed. And they help turn "I hope this stays stored" into "this resource is owned, this blob is tied to it, and the system's commitments can be checked." @Walrus 🦭/acc #Walrus $WAL
Walrus treats time as part of the deal. When you store a blob, you do not just upload it. You also choose how long the network should keep it available. Walrus measures that time in epochs. The docs explain that an epoch is one day on Walrus Testnet and two weeks on Mainnet. You can pay for a fixed number of epochs when you store a file. If you need it longer, you extend the storage later by adding more epochs.
This sounds simple, but it changes habits. In Web2, storage often feels endless until a bill fails. In Walrus, duration is explicit. It is easier to reason about. It is also easier to design around. A builder can show users a clear expiry date in epoch terms. A contract can enforce rules like “keep this dataset available through the full evaluation period.”
Epochs also anchor other parts of the system. Rewards and protocol changes can be organized around epoch boundaries. So when you read Walrus docs, keep one thought nearby: storage is not only bytes. Storage is bytes plus time. It is a small design choice that makes responsibilities visible. That visibility helps teams plan ahead instead of guessing.
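Because epoch lengths are fixed, the duration math is simple enough to show. A minimal sketch, assuming the epoch lengths stated in the docs (one day on Testnet, two weeks on Mainnet); the helper function is an illustration, not a Walrus API.

```python
from datetime import datetime, timedelta

# Epoch lengths as described in the Walrus docs; the helper itself is just
# illustrative arithmetic, not a protocol call.
EPOCH_LENGTH = {"testnet": timedelta(days=1), "mainnet": timedelta(weeks=2)}

def expiry_date(start: datetime, epochs_paid: int, network: str) -> datetime:
    """Roughly when a blob's availability period ends, in wall-clock terms."""
    return start + epochs_paid * EPOCH_LENGTH[network]

stored = datetime(2025, 1, 1)
print(expiry_date(stored, 26, "mainnet"))  # ~one year: 26 x 2 weeks
```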
Imagine you are mailing a fragile map across an ocean. If you send one envelope, you lose everything when it sinks. If you send ten full copies, you waste space. Walrus takes a third path. It uses erasure coding. A file is encoded into many smaller pieces, often described as slivers. Those pieces are spread across storage nodes. Later, the original file can be reconstructed from a sufficient subset of pieces.
This is why people compare Walrus to “blob storage.” A blob is just a large binary object. The protocol treats it as data to be stored and served, not as something that every blockchain validator must keep forever. The point is efficiency with resilience. If a few nodes go offline, enough pieces can still remain to rebuild the blob.
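For readers who do want a taste of the math, here is a toy k-of-n erasure code in Python, built on polynomial interpolation in the spirit of Reed-Solomon: any k of the n encoded pieces reconstruct the original k symbols. It is a sketch for intuition only; Walrus's production construction (RedStuff, discussed later) is different and far more efficient.

```python
# Toy k-of-n erasure coding over a prime field. All names here are
# inventions of this sketch, not Walrus APIs.

P = 2**31 - 1  # prime modulus; all arithmetic is mod P

def encode(data: list[int], n: int) -> list[tuple[int, int]]:
    """Treat the k data symbols as polynomial coefficients; emit n evaluations."""
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def decode(pieces: list[tuple[int, int]], k: int) -> list[int]:
    """Recover the k coefficients from any k pieces via Lagrange interpolation."""
    pts = pieces[:k]
    coeffs = [0] * k
    for j, (xj, yj) in enumerate(pts):
        basis, denom = [1], 1         # build prod_{m != j} (x - x_m) and its scale
        for m, (xm, _) in enumerate(pts):
            if m == j:
                continue
            new = [0] * (len(basis) + 1)
            for i, c in enumerate(basis):   # multiply basis polynomial by (x - xm)
                new[i] = (new[i] - xm * c) % P
                new[i + 1] = (new[i + 1] + c) % P
            basis = new
            denom = (denom * (xj - xm)) % P
        scale = yj * pow(denom, -1, P) % P  # modular inverse (Python 3.8+)
        for i, c in enumerate(basis):
            coeffs[i] = (coeffs[i] + scale * c) % P
    return coeffs

message = [ord(ch) for ch in "hi"]        # two data symbols (k = 2)
pieces = encode(message, n=5)             # five pieces spread across nodes
surviving = pieces[3:]                    # three nodes lost; two remain
assert decode(surviving, k=2) == message  # still reconstructable
```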
You do not need to memorize the math to understand the promise. Redundancy is built into the encoding, not into full duplication. That design can lower overhead compared with replicating entire files end to end, while still aiming for high availability. For an app, this means large media can live outside the chain, yet stay retrievable. You store once, and the network spreads the load. Retrieval becomes a reconstruction task. @Walrus 🦭/acc #Walrus $WAL
@Walrus 🦭/acc is a protocol that remembers without clinging. It doesn't store every file on every machine. Instead, it breaks files into small coded pieces and spreads them across a network. Each piece is meaningless alone but powerful together. When enough are gathered, the original file returns. This design keeps data safe even when some nodes go offline. Onchain, Sui keeps track of who owns the data, how long it should last, and who gets paid. Walrus calls the stored objects "blobs." It's a small word for a big idea: data that lives between permanence and flexibility. Storage becomes something you can reason about, not just rent. #Walrus $WAL
Some systems try to do everything in one place. That can work early on, but it becomes heavy when more people and more use cases arrive. Dusk has described a shift toward a modular structure, with different layers doing different jobs.
In Dusk’s multilayer design, DuskDS sits underneath as the consensus, data availability, and settlement layer. Think of it as the part that finalizes what happened and stores the core record. Above it sits DuskEVM, the execution layer where applications can run smart contracts with standard Ethereum-style tooling. Dusk also describes a forthcoming privacy layer called DuskVM.
This design matters because it reduces friction. If developers can reuse familiar tools, integrations with wallets and services can become faster. At the same time, the settlement layer stays focused on security and finality. Dusk frames this as a way to keep privacy and regulatory goals, while still meeting developers where they already are.
It is a reminder that infrastructure is often about separation of concerns, not just speed. @Dusk #Dusk $DUSK
When people talk about finance, they often focus on trading. But settlement is where trust becomes real. Settlement answers the question, “Did ownership actually change hands?” In traditional markets, that process can take time and involve layers of intermediaries.
In Dusk’s partnership narrative with NPEX, one of the promised benefits of on-chain finance is faster settlement. Dusk describes a world where settlement can move from days to seconds, and where counterparty risks in clearance can be reduced. It also highlights automation of corporate actions and easier interoperability between financial organizations when there is a single shared record.
None of this is guaranteed by magic. It depends on the design of the infrastructure and the rules around it. But it is useful to remember that the main value of blockchains in finance may not be "new speculation" but better settlement. @Dusk #Dusk $DUSK
Blockchains are often described as “transparent ledgers.” That transparency can be useful, but it also creates a problem. If everything is visible, then your balance, transfers, and business activity can become public by default. In normal finance, we do not share a full bank statement with the world. We share it only when needed, and usually with specific parties.
Dusk is built around this tension. It focuses on financial use cases where privacy is not a luxury, but a requirement. At the same time, regulated markets need audit trails. Dusk’s idea is to support confidentiality while keeping transactions auditable when rules require it. One piece of this approach is Hedger, a privacy engine designed for DuskEVM. Hedger combines zero-knowledge proofs with homomorphic encryption. In simple terms, it aims to hide sensitive values like amounts and balances, while still proving that the math checks out.
This is not about making markets invisible. It is about reducing unnecessary exposure, while leaving room for accountability.
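To make "hide the values, prove the math" concrete, here is a toy additively homomorphic scheme (textbook Paillier) in Python: anyone can add the ciphertexts, but only the key holder sees the total. This is a generic illustration with demo-sized primes, not Hedger's construction, which combines homomorphic encryption with zero-knowledge proofs.

```python
import math
import random

# Textbook Paillier with tiny demo primes; real keys are ~2048-bit.
p, q = 1009, 1013
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u: int) -> int:
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # decryption constant

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # r must be coprime to n
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    return L(pow(c, lam, n2)) * mu % n

# Two hidden amounts; multiplying ciphertexts adds the plaintexts.
a, b = encrypt(120), encrypt(335)
assert decrypt(a * b % n2) == 455
```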
Dusk: When Finance Learns to Whisper, and Still Stay Accountable
Money is a strange tool. We want it to move fast, like information. We want it to be private, like a personal conversation. And we also want it to be accountable, like a public record when rules apply. Most systems only give you two of these at once. Dusk is trying to hold all three in the same hand. Dusk, founded in 2018, is a Layer 1 blockchain built for regulated and privacy-focused financial infrastructure. “Layer 1” simply means the base network, the ground layer where transactions are finalized and recorded. Dusk’s goal is not to be a general-purpose chain for everything. It is built with a specific direction in mind: financial applications that need privacy, but also need to work in environments where audits and compliance matter.
At a high level, Dusk is trying to become infrastructure for tokenized real-world assets and compliant DeFi. Real-world assets, often shortened to RWA, are traditional financial instruments represented on-chain. Tokenized assets are not magic. They are a way to represent ownership, rights, or value using digital tokens, so that systems can settle and manage them with the speed and composability of blockchains. "Compliant DeFi" is a careful phrase. It means decentralized finance that is designed to operate with regulatory requirements in mind, instead of pretending those requirements do not exist.

One of the hardest problems in finance is not just moving value. It is controlling what different people are allowed to see. In many markets, privacy is not a luxury. It is a requirement. Traders do not want to reveal intent. Institutions cannot expose sensitive positions. Users do not want their balance to be a public billboard. But regulators and auditors still need a way to verify what happened when they are authorized to do so. Dusk's design aims to treat privacy and auditability as built-in features, not optional extras.

This is where Dusk's modular architecture becomes central to its identity. "Modular" means the network is designed as multiple layers that each do a specific job, instead of forcing one layer to do everything. In Dusk's current direction, the network evolves into a three-layer stack. The base is DuskDS, which handles consensus, staking, data availability, a native bridge, and settlement. Consensus is the process nodes use to agree on what is true. Settlement is the moment a transaction becomes final. Data availability means the network makes sure transaction data is accessible so the system can be verified. This base layer is the part that provides the backbone and the final record.

On top of that sits DuskEVM, an EVM-compatible application layer. EVM means Ethereum Virtual Machine, the standard runtime for smart contracts. Smart contracts run on-chain and execute rules automatically. Solidity is the most common language used to write those contracts. DuskEVM matters because it lets developers and institutions use standard Solidity contracts and familiar tools, while settling on Dusk's Layer 1. In simple terms, it is meant to make building on Dusk feel closer to building in the Ethereum ecosystem, without giving up the compliance and privacy focus Dusk is aiming for.

Dusk has stated that DuskEVM mainnet is planned to launch in the second week of January. Timelines are always worth treating as timelines, not promises, but the intention is clear: make the network easier to integrate with the tools and workflows many teams already use. That is not a small shift. It changes the day-to-day reality for builders. Instead of custom tooling and bespoke integrations, the idea is to reduce friction and shorten the time from concept to deployment.

The third part of the stack is DuskVM, a forthcoming privacy application layer meant for full privacy-preserving applications. Here, "privacy-preserving" does not just mean hiding a number. It means designing entire applications so that sensitive data can stay confidential while still allowing the system to prove that rules were followed. Dusk's privacy layer work includes concepts like Phoenix and Piecrust, which relate to how transactions and application logic can be structured to support deeper privacy. The important point for a reader is not the names.
It is the intent: Dusk wants a dedicated place for full privacy applications, instead of forcing every privacy feature to fit into the same shape as general EVM execution.

If modular architecture is the "how," then Hedger is one of the most concrete examples of the "why." Dusk describes Hedger as a new privacy engine built for the EVM execution layer. It provides a solution for private yet verifiable transactions on EVM-based blockchains by leveraging the power of zero-knowledge proofs and homomorphic encryption. Zero-knowledge proofs are a method of proving that something is true without exposing the underlying private data. Homomorphic encryption is a way to compute on encrypted data, so that you never have to reveal the data first. Dusk presents Hedger as a hybrid model rather than relying on a single cryptographic technique.
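As a taste of the zero-knowledge half of that hybrid, here is a minimal Schnorr identification protocol in Python: the prover convinces a verifier it knows a secret x behind a public y = g^x mod p without revealing x. This is a classic textbook protocol chosen for clarity, with a demo-sized prime and no hardening; it is not Hedger's actual proof system.

```python
import random

# Demo parameters: a Mersenne prime and small generator. Real deployments
# use standardized groups or pairing/SNARK-friendly curves.
p = 2**127 - 1
g = 3

x = random.randrange(2, p - 1)          # prover's secret
y = pow(g, x, p)                        # public value

# Prover commits, verifier challenges, prover responds:
k = random.randrange(2, p - 1)
t = pow(g, k, p)                        # commitment
c = random.randrange(2, 2**128)         # verifier's random challenge
s = (k + c * x) % (p - 1)               # response; reveals nothing about x alone

# Verifier checks g^s == t * y^c (mod p) without ever learning x:
assert pow(g, s, p) == t * pow(y, c, p) % p
```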
Dusk also makes a distinction between Hedger and an earlier approach called Zedger. The difference is structural. Zedger was built for UTXO-based layers, while Hedger is built for full EVM compatibility. UTXO is a transaction model where value is represented as discrete outputs, like digital "bills," rather than account balances. EVM systems usually use an account-based model, more like a ledger with balances. Hedger is designed to live where developers already deploy Solidity contracts and use common tools, while still offering confidentiality features intended for regulated finance.

Hedger Alpha is now live for public testing, deployed on Sepolia testnet. Sepolia is an Ethereum test network used for experimentation. It is not real money. It is a safe environment for testing. In this alpha, Dusk says you can send confidential transactions and keep your balance private, in a way that is private to everyone yet auditable by regulators. The alpha involves creating a Hedger wallet alongside a standard EVM wallet connection, then using shield and unshield actions for test ETH. "Shielding" here means moving value into the privacy system so balances and amounts are hidden. "Unshielding" means moving it back out to a standard EVM address. Dusk's current alpha description also notes that sender and receiver remain visible on-chain, while amounts and balances stay hidden. That detail matters because it sets realistic expectations about what kind of privacy is being tested.

All of this technology would be academic if it never met the real world. Dusk's roadmap includes an application-level example: DuskTrade, planned to launch in 2026. Dusk describes DuskTrade as its first real-world asset application, built in collaboration with NPEX, a regulated Dutch exchange. The intent is a compliant trading and investment platform designed to bring tokenized securities on-chain, with a stated figure of €300M+ in tokenized securities. Dusk has also said the waitlist opens in January. The deeper meaning of this part is not the number. It is the direction. Instead of treating tokenization as a marketing theme, Dusk is placing its technology next to regulated rails and regulated venues, where compliance is not optional and privacy has practical stakes.

So who is Dusk for? It is for builders who want to write standard smart contracts without reinventing everything, but who also want privacy and compliance to be first-class features. It is for institutions that cannot operate in a world where every transaction detail is public, yet still need auditability when required. It is for regulated markets where confidentiality is part of fair trading, and where settlement and reporting rules exist for good reasons. And it is for users who want modern digital finance without turning their financial life into public entertainment.

In the end, Dusk is not just telling a story about speed or scale. It is telling a story about how financial systems behave when they grow up. A mature market does not only ask, "Can we do it?" It also asks, "Who should see it, when, and why?" Dusk's answer is a network designed to let finance whisper in public, while still keeping a door open for accountability.
Privacy With Receipts: The Quiet Philosophy Behind Dusk
Money moves through stories we rarely see. A trade is agreed. A settlement is cleared. A record is stored. Then a quiet question follows: who is allowed to know what happened, and how much should the world be able to see? Public blockchains answered with radical sunlight. Everything is visible. That can be honest, but it can also be unsafe for real finance. In regulated markets, privacy is not a luxury. It is often a requirement. Counterparties may need confidentiality. Clients may need dignity. Firms may need to prevent information leakage. Yet regulators and auditors still need evidence. Not vibes. Not promises. Evidence.

Dusk is designed around this tension. It presents itself as a Layer 1 blockchain for regulated and privacy-focused financial infrastructure. The idea is to support financial applications that need privacy, but also need compliance and auditability built into the system rather than added as an afterthought. This is what I mean by "privacy with receipts." Privacy is the lived experience. Receipts are the verifiable proof, available when the right party has the right reason.

Dusk approaches the problem with a modular architecture. Modular, in plain language, means the system is built from parts that can evolve without rewriting everything. Dusk separates settlement from execution. Settlement is the part that finalizes what happened and secures the ledger. Execution is the part where application logic runs, like smart contracts and financial workflows. This separation is not just engineering taste. It is a way to let the network stay stable at its core while different application environments can be added and improved over time.

At the foundation, Dusk describes a settlement layer often referred to as DuskDS. This layer is where consensus, settlement, and data availability live. It is also where the network's core transaction models are defined. In Dusk's documentation, the reference node implementation is called Rusk, and the design references a proof-of-stake consensus approach and a networking layer built for propagation. Proof of stake, in simple terms, is a way to secure a blockchain where participants lock value to help run the network. If they behave well, they earn rewards. If they behave badly, the system can punish them. The deeper point is not the mechanism itself. The deeper point is what it is trying to buy: reliability. In finance, finality is not just technical. It is emotional. It is the moment you can stop worrying whether the past will change.

On top of the settlement layer, Dusk describes multiple execution environments. One is DuskEVM, an EVM-equivalent environment. The EVM is the Ethereum Virtual Machine, the widely used runtime for executing smart contracts. EVM-equivalent means developers can often use familiar Ethereum tools and contract patterns while settling back to Dusk's base layer. Another execution environment discussed is DuskVM, a WASM-based environment. WASM, short for WebAssembly, is a portable way to run code efficiently and safely. The existence of multiple environments reflects a practical belief: different financial applications may need different trade-offs, especially when privacy requirements vary.
Now we return to the heart of the design: privacy and proof. Dusk's base layer supports two transaction models that express this tension in a concrete way. One model is public and account-based, closer to what people expect from transparent chains. The other model is shielded and note-based, built for confidential transfers using zero-knowledge proofs. A zero-knowledge proof is a method for proving something is true without revealing the private information underneath. Think of it like proving you have a valid ticket without showing the ticket number to everyone in the room.

In a privacy system, the goal is not to hide whether rules were followed. The goal is to hide unnecessary details while still proving correctness. In Dusk's shielded model, transactions are designed to prove that the sender had valid funds and that the transfer follows the rules, without exposing the sensitive details by default.

But regulated finance cannot live on secrecy alone. It needs auditability. It needs a path for lawful review. That is why Dusk emphasizes controlled disclosure. The concept is that privacy is the default posture, but there are mechanisms for authorized parties to verify what must be verified. This is the "receipt." Not a public confession, but a provable record that can be inspected when the rules demand it.

Dusk extends this privacy-and-proof idea into smart contracts and application logic. In its materials, Dusk describes privacy tooling for computation that aims to support confidential workflows while preserving compliance requirements. The technical methods may include cryptographic techniques such as encryption and zero-knowledge proofs. You do not need to be a cryptographer to see the direction: if contracts can compute without exposing everything, more real financial workflows can move on-chain without turning into public theater.

Compliance is not only about transactions. It is also about identity, eligibility, and rules that have legal meaning. Dusk's documentation includes identity and permissioning primitives, including an identity approach described through Citadel. The philosophical shift here is subtle. Traditional compliance often forces people to reveal more than necessary. A privacy-first compliance approach tries to prove the needed fact, and only the needed fact. "This person is eligible." "This account is authorized." "This credential is valid." The point is not to remove regulation. The point is to reduce unnecessary exposure while still meeting requirements. All of this becomes more tangible when you talk about real-world assets and securities.
Dusk positions itself as infrastructure for tokenized assets and regulated instruments. It draws a careful distinction between tokenization and native issuance. Tokenization can mean creating an on-chain representation of an off-chain asset. Native issuance means the asset itself is created and managed on-chain, with lifecycle rules and compliance logic embedded into the instrument from day one. This distinction matters because it changes where trust sits. If settlement and enforcement stay off-chain, you gain programmability but still inherit old frictions. If lifecycle and settlement move on-chain, you reduce reconciliation needs, but you also raise the bar for privacy, rule enforcement, and audit.

Within this framing, Dusk discusses standards for issuing and managing privacy-enabled securities, including an approach referred to as XSC, a confidential security contract concept. The core idea is that regulated instruments often need more than "transfer from A to B." They can require restrictions, roles, reporting, corporate actions, and governance actions. When a chain is built for those realities, the chain must treat compliance and confidentiality as first-class design constraints.

Underneath the applications and standards sits the token that keeps the system running. Dusk's tokenomics materials describe the DUSK token as part of the network's incentives and fees. They describe supply parameters, emissions over time to reward stakers, and a maximum supply cap. They also describe DUSK's role in staking, network fees, and paying for services. Gas is framed as the unit measuring computational work, and the documentation provides a denomination used for pricing. These details are not poetry, but they matter. Incentives shape behavior. Behavior shapes stability. And stability shapes trust.

If you step back, Dusk reads less like a loud promise and more like a proposal. The proposal is that we can build systems where privacy does not mean hiding wrongdoing, and transparency does not mean exposing everyone. Where the ledger can protect the person, and still answer the auditor. Where you can have privacy, and still have receipts. @Dusk #Dusk $DUSK
Storage Resources on Sui: When Capacity Becomes an Owned Object
We usually think of storage as a background utility. You pay a bill. You get a quota. You upload files. The relationship feels informal, almost invisible. But in decentralized systems, informal promises break easily, because there is no single keeper who can quietly “make it work.” Coordination needs a clearer language. Walrus chooses a simple one: it treats storage capacity as an on-chain resource that can be owned and moved, like an object you can hold. Walrus is a decentralized storage protocol designed for large, unstructured data called blobs. A blob is simply a file or data object that is not arranged as rows in a database table. Walrus stores blob contents off-chain on Walrus storage nodes, using erasure coding for resilience, and it uses the Sui blockchain for coordination, payments, and public attestations of availability. Walrus states that metadata is the only blob element exposed to Sui or its validators. The blob content itself remains off-chain.
That decision, metadata on-chain and content off-chain, creates a design question. If the chain does not store your data, what does it store that still matters? Walrus answers: it stores the rights and obligations around storage. It stores who owns capacity, how long that capacity lasts, what blob ID it is attached to, and what events mark the system taking responsibility for availability.

On Sui, Walrus represents available storage as objects. This is not a metaphor. In the Walrus model, storage space is a resource on Sui that can be acquired, owned, split, merged, and transferred. A user can purchase storage from the Walrus system object for a time duration, or obtain storage via transfer in a secondary market. Once a user holds storage capacity as an object, it becomes composable. If you have too much capacity in one piece, you can split it into smaller pieces to match different blobs. If you have small pieces and need a larger allotment, you can merge them. If you want another account or another application to use it, you can transfer it.

This is a different way of thinking about storage. It is less like renting a mailbox and more like holding a timed entitlement. The "timed" part matters because Walrus treats storage as a resource with a lifetime. The system does not assume indefinite persistence as a default. It describes an availability period for stored blobs, and it provides on-chain mechanisms to extend that period. Time, in Walrus, is not only a billing detail. It is part of the public record.

The Walrus system object on Sui holds key metadata for the current storage epoch. A storage epoch is a period during which the Walrus storage committee and shard assignments are stable. On Mainnet, Walrus describes storage epochs as lasting two weeks. The system object includes the committee of storage nodes for that epoch, the total available space on Walrus, and the price per unit of storage (1 KiB). Walrus notes these values are determined by 2/3 agreement among the storage nodes for the epoch. In plain language, this means the chain is used to publish the "current terms" of the storage network: who the committee is and what storage costs.

When a user purchases storage space from the system object, the payment flows into what Walrus calls the storage fund. Walrus describes the storage fund as holding funds for storing blobs over one or multiple storage epochs. Payments are separated over multiple epochs and then paid out each epoch to storage nodes according to performance, with the process mediated by Sui smart contracts. This matters for the idea of a storage resource because it ties the object you hold to a long-running payment and service lifecycle. Storage is not only "capacity." It is also the ongoing obligation of the network to keep encoded blob pieces retrievable during the agreed period, and the corresponding obligation of the payer to fund that service.

Once a user has a storage resource, the next question is how that resource becomes connected to real data. Walrus connects storage resources to blobs through blob IDs and on-chain events. A blob ID is the identifier for a blob in Walrus. It is not just a random label. Walrus describes a blob ID as being cryptographically derived from the blob's encoding and metadata. The blob's encoded representation is split into pieces, and hashes of shard-specific representations become leaves of a Merkle tree, whose root becomes a blob hash that participates in the blob ID.
A Merkle tree is a structure that lets you commit to many parts of data with one root hash, while still allowing later verification of individual parts. This is one reason Walrus can keep content off-chain and still support verification: correctness can be checked against commitments rather than trusted because "a server said so."

To store a blob, a user first prepares it off-chain: erasure coding and blob ID computation. Walrus uses an erasure code construction called RedStuff, based on Reed–Solomon codes. Erasure coding adds redundancy without storing full replicas of the blob everywhere. Walrus describes that the encoding expands blob size by about 4.5–5×, independent of the number of shards and storage nodes. Encoded pieces are grouped into "slivers," assigned to "shards," and stored by storage nodes that manage those shards during the epoch.

But the chain still plays a crucial role before any bytes move. The user updates the on-chain storage resource to register the blob ID with the desired size and lifetime. This emits a Move resource event on Sui that storage nodes listen for. Conceptually, this event is an authorization signal. It tells the storage network: "this blob ID is intended to be stored, and there is a valid on-chain resource backing it." This matters because it prevents the storage network from being forced into storing arbitrary data without corresponding resources and payments.
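Here is a minimal sketch of the generic Merkle mechanism just described: commit to many sliver hashes under one root, then check a single sliver against that root. Walrus's real blob ID additionally binds metadata and per-shard encodings, so treat the functions below as illustration, not the protocol's actual derivation.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[bytes]:
    """Sibling hashes from leaf to root for the leaf at `index`."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])     # the neighbour in this pair
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

slivers = [b"sliver-0", b"sliver-1", b"sliver-2", b"sliver-3"]
root = merkle_root(slivers)                # stands in for the blob commitment
assert verify(slivers[2], 2, merkle_proof(slivers, 2), root)
```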
After the registration event, the user uploads off-chain: sending blob metadata to all storage nodes and sending each sliver to the storage node responsible for the corresponding shard. Storage nodes verify what they receive against the blob ID and check that an authorized on-chain blob resource exists. If correct, they sign statements attesting they hold the slivers. The user aggregates these signatures into an availability certificate and submits it on-chain. When the certificate is verified, Sui emits an availability event for the blob ID.

That availability event marks what Walrus calls the Point of Availability, or PoA. PoA is the moment when the system takes responsibility for maintaining the blob's availability for the defined availability period. Before PoA, the client is responsible for ensuring the blob is actually uploaded and retrievable. After PoA, Walrus is responsible for maintaining availability for the full period, under the protocol's assumptions. Both PoA and the availability period are observable through events on Sui.

Now the storage resource becomes more than an accounting unit. It becomes the anchor for a public, verifiable lifecycle. A third party can verify that the blob reached PoA and can check the availability window by reading chain events, without downloading the blob. This is especially useful when a blob is large, when many parties need to reference it, or when it is used by smart contracts that require a reliable signal that "this data is available."

Walrus also describes extension in a way that reinforces the "resource object" model. A certified blob's storage can be extended by adding a storage object to it with a longer expiry period. Walrus notes that this can be used by smart contracts to extend availability in perpetuity as long as funds exist. The key practical point is that extending availability does not require re-uploading the blob content. Refreshing availability is conducted fully on-chain by providing an appropriate storage resource, and an event is emitted that storage nodes receive to extend how long they store the slivers. In human terms, you renew the obligation window using on-chain objects, not by pushing gigabytes again.

There is also a boundary on how far into the future storage can be purchased. Walrus describes a maximum number of storage epochs in the future for which storage can be bought, approximately two years. This does not prevent ongoing renewal. It simply constrains how far the protocol's standard purchase mechanism extends into the future at once. It is another example of making time explicit rather than implied.

Because the chain is used to coordinate outcomes, it is also used to coordinate failures that need a clear public meaning. Walrus describes the case where a blob is incorrectly encoded. A correct storage node may fail to reconstruct a sliver after PoA due to inconsistency and can produce an inconsistency proof. Nodes can sign and aggregate this into an inconsistency certificate posted on-chain, which emits an inconsistent blob event. After this, correct nodes delete sliver data for the blob ID and record that reads return None for the availability period. This keeps the meaning of blob IDs stable. "Available" is not allowed to become "whatever bytes a dishonest node served." The chain coordinates a clear, verifiable state: inconsistent blobs resolve to None.

All of these mechanics are easier to see if you treat the storage resource as a hinge between two worlds.
On one side is the off-chain world of encoded data: slivers, shards, and storage nodes. On the other side is the on-chain world of rights, time, and proof: ownership, transfer, events, and certificates. The storage resource object is how an application can speak about capacity and time with the precision the protocol requires.

In more familiar systems, it is easy to confuse convenience with certainty. A dashboard shows "used storage," and we assume the system will behave. Walrus tries to replace that assumption with verifiable structure. If a user holds a storage resource, they can show it. If they transfer it, the chain records it. If they attach it to a blob ID, an event is emitted. If PoA is reached, it is marked publicly. If availability is extended, it is marked publicly. If a blob is inconsistent, that is marked publicly. The chain does not carry the heavy bytes, but it carries the shared story that makes those bytes governable.

This is what it means, in Walrus, for capacity to become an owned object. It is not merely a technical detail. It is a way of making storage legible in an open environment: who has the right to store, what is being stored, when responsibility begins, and how long the promise lasts. @Walrus 🦭/acc #Walrus $WAL
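As a recap of the lifecycle this article traced, here is a condensed sketch of a blob's chain-visible state: registration, certification at PoA, extension with a longer-lived storage object, and the inconsistency case where reads resolve to None. The class and state names are illustrative inventions, not Walrus's contract types.

```python
from enum import Enum

class BlobState(Enum):
    REGISTERED = 1     # blob ID attached to a storage resource
    CERTIFIED = 2      # availability certificate verified: PoA reached
    INCONSISTENT = 3   # inconsistency certificate posted: reads return None

class BlobRecord:
    def __init__(self, blob_id: str, end_epoch: int):
        self.blob_id, self.end_epoch = blob_id, end_epoch
        self.state = BlobState.REGISTERED

    def certify(self) -> None:
        self.state = BlobState.CERTIFIED        # the chain emits the PoA event

    def extend(self, storage_end_epoch: int) -> None:
        # Extension attaches a storage resource with a longer expiry;
        # no content is re-uploaded.
        assert self.state is BlobState.CERTIFIED
        assert storage_end_epoch > self.end_epoch
        self.end_epoch = storage_end_epoch

    def read(self, current_epoch: int):
        if self.state is BlobState.INCONSISTENT or current_epoch > self.end_epoch:
            return None                          # inconsistent/expired resolves to None
        return f"<reconstructed {self.blob_id}>"

rec = BlobRecord("0xabc...", end_epoch=10)
rec.certify()
rec.extend(36)                                   # renew with an on-chain object
print(rec.read(current_epoch=20))                # still available
```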
Metadata vs Content: Why Walrus Keeps Proof On-Chain and Data Off-Chain
There is a difference between a book and its catalog card. The book carries the full weight of meaning. The catalog card carries enough truth to help you find it, name it, and verify you are holding the right one. Many digital systems blur this line. They mix the heavy payload with the small facts that describe it. Walrus tries to keep the line clear, because clarity becomes a kind of safety when many parties share a system without fully trusting each other.
Walrus is a decentralized storage protocol designed for large, unstructured data called blobs. A blob is simply a file or data object that is not stored as rows in a database table. Walrus is built for settings where some storage nodes can fail or behave maliciously. In distributed systems this is often called a Byzantine fault. A node may be offline, buggy, or dishonest. Walrus aims to keep blobs retrievable anyway, and to make availability something that can be proven and checked.

To do that, Walrus separates two kinds of information. The first is content. Content is the blob itself. It might be a dataset, an image, a video, a web bundle, or a model artifact. The second is metadata. Metadata is information about the blob, not the blob's bytes. It includes identifiers and commitments that help the system verify what the blob is, how it is stored, and how long it should be maintained.

This separation becomes concrete in how Walrus integrates with the Sui blockchain. Walrus uses Sui as a coordination layer for payments, resource management, and public events. But Walrus states that metadata is the only blob element ever exposed to Sui or its validators. The content of blobs is always stored off-chain on Walrus storage nodes and caches. This design choice is not cosmetic. It shapes who needs to handle large data, who needs to agree on what, and what can be verified without moving gigabytes around.

If you ask why, the first answer is practical. Blockchains achieve strong integrity by replication. Many validators store and process the same information. That is useful for small pieces of state and transaction history. It becomes costly when the "state" is a multi-gigabyte blob. Walrus documentation contrasts its own erasure-coded storage costs, around a fixed multiple of the blob size, with the much higher overhead typical of storing large data directly in on-chain objects. It even notes that data stored in Sui objects can incur a very large storage multiple compared to the raw data size. Walrus is built to store large resources at substantially lower cost than storing them directly on-chain, while still providing verifiable availability.

The second answer is architectural. Walrus does not want Sui validators to be part of a content delivery network. Walrus explicitly says it does not reimplement a CDN and instead is designed to work with caches and CDNs. That only works well if content stays off-chain, so it can be served through HTTP and familiar infrastructure, while still being verifiable through cryptographic commitments and chain events.

So what exactly goes on-chain in Walrus? It is the "catalog card" layer. Walrus uses Sui smart contracts to manage the Walrus system object, which holds the committee of storage nodes for the current storage epoch, the total available space, and the price per unit of storage. Storage space is represented as Sui objects that can be owned, split, merged, and transferred. A storage fund holds funds for storing blobs over one or multiple storage epochs, and payments are made each epoch to storage nodes according to performance. When a user wants to store a blob, the user associates a blob ID with a storage resource on-chain, which emits an event that storage nodes listen for. Later, the user submits an availability certificate on-chain. When verified, the chain emits an availability event.
That availability event marks the Point of Availability, or PoA, which is the moment Walrus takes responsibility for maintaining the blob's availability for the availability period. Walrus also supports on-chain events for later states. If a blob is incorrectly encoded, storage nodes can produce an inconsistency proof and submit an inconsistency certificate on-chain. The contract emits an inconsistent blob event, signaling that reads return None for that blob ID, and storage nodes can delete the slivers while retaining an indicator to return None during the availability period. Walrus also supports refreshing availability fully on-chain by attaching new storage resources to extend expiry, without re-uploading content.

All of this is metadata and coordination. It is about ownership of storage capacity, timing, committee membership, authorization, and public attestations. It is small enough to live comfortably on a blockchain. It is also the part that benefits most from being public and tamper-resistant.

Now look at what stays off-chain. It is the blob content, but not as a single chunk stored in one place. Walrus stores content using erasure coding. Erasure coding is a method that adds redundancy without copying the full file many times. Walrus uses a bespoke erasure-code construction called RedStuff, based on Reed–Solomon codes. The blob is split into encoded symbols, and these are grouped into units called slivers. Slivers are assigned to shards, and shards are managed by storage nodes during a storage epoch. Walrus describes that this results in an expansion of blob size by about 4.5–5×, and that this expansion is independent of the number of shards and storage nodes. This is the "content layer" of Walrus: coded pieces distributed across the storage network, stored and served by storage nodes.

Because the content is off-chain, the system needs a strong way to connect "the data you got back" with "the data that was meant to be stored." That connection is built through the blob ID and authenticated metadata. Walrus describes how the blob ID is computed as an authenticator of shard data and metadata. It hashes the sliver representation in each shard, uses those hashes as leaves of a Merkle tree, and uses the Merkle tree root as the blob hash. A Merkle tree is a structure that lets you commit to many pieces of data with one root hash, while still enabling later verification of individual pieces against that root. This is what makes "content off-chain" compatible with "truth on-chain." The chain does not need the content if the system can authenticate content against commitments.

This is also why Walrus can support Web2 delivery paths without giving up integrity. Walrus supports interaction through CLI, SDKs, and HTTP technologies. It is designed to work with caches and CDNs. A cache might serve a blob quickly over HTTP. But a client can still verify the returned data using metadata and the blob ID. In this design, speed can come from caches, and correctness can come from cryptographic verification rather than trust in the cache.

The write path shows the relationship between on-chain metadata and off-chain content in a clear sequence. A user acquires a storage resource on-chain. The user encodes the blob and computes its blob ID. Then the user updates the storage resource on Sui to register the blob ID, emitting an event that storage nodes listen for. Only after this does the user send metadata to all storage nodes and send slivers to the nodes managing the corresponding shards.
Storage nodes check slivers against the blob ID and check that an authorized blob resource exists on-chain. If correct, they sign receipts. The user aggregates these signatures into an availability certificate and submits it on-chain. When verified, Sui emits the availability event, and PoA is reached. From that point, Walrus takes responsibility for maintaining availability during the availability period, and nodes sync and recover missing slivers without user involvement.

The read path is similarly layered. A reader fetches metadata for the blob ID from any storage node and authenticates it. Then the reader requests slivers from storage nodes and waits for enough responses. The reader verifies the slivers using the authenticated structure tied to the blob ID, reconstructs the blob, and applies consistency checks. Walrus describes both default and strict consistency checks, where the strict check re-encodes the decoded blob and recomputes the blob ID to confirm the encoding is consistent. This is another way the system avoids "trusting the messenger." The reader verifies against commitments.
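Here is a sketch of that strict consistency check: after decoding, re-encode the result, recompute the commitment, and accept only if it matches the blob ID you asked for. The `encode` and `blob_id_of` helpers below are stand-ins invented for this sketch, not Walrus's real encoding or ID derivation.

```python
import hashlib

def encode(blob: bytes, n: int = 4) -> list[bytes]:
    # Placeholder "encoding": split into n roughly equal chunks. Real Walrus
    # uses erasure coding, so any sufficient subset would also reconstruct.
    step = -(-len(blob) // n)  # ceiling division
    return [blob[i:i + step] for i in range(0, len(blob), step)]

def blob_id_of(blob: bytes) -> bytes:
    # Placeholder commitment: a hash over the encoded pieces.
    acc = hashlib.sha256()
    for piece in encode(blob):
        acc.update(hashlib.sha256(piece).digest())
    return acc.digest()

def strict_read(expected_id: bytes, decoded: bytes) -> bytes:
    """Accept decoded bytes only if re-encoding them matches the blob ID."""
    if blob_id_of(decoded) != expected_id:
        raise ValueError("decoded bytes do not match the requested blob ID")
    return decoded

blob = b"large unstructured data" * 100
bid = blob_id_of(blob)                  # what the writer registered on-chain
print(len(strict_read(bid, blob)))      # tampered bytes would be rejected
```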
Walrus also names optional actors that sit between users and the storage network: publishers, aggregators, and caches. A publisher can receive a blob over HTTP, encode it into slivers, distribute slivers to nodes, collect signatures, and perform on-chain actions. Aggregators reconstruct blobs from slivers and serve them over HTTP. Caches add caching to reduce latency and load. Walrus treats these as optional and not trusted system components. They can deviate from protocol. But they can be audited because the chain events and blob ID commitments exist independently of them. This again depends on the separation between metadata and content. If the content were "whatever the gateway says," audits would be weak. If the content is authenticated against commitments, audits become meaningful.

There is a quieter benefit to this separation as well. It keeps responsibilities honest. Sui does not pretend to be a file store. Walrus does not pretend to be a smart contract platform. Walrus explicitly says it does not reimplement a full smart contracts platform with consensus or execution, and it relies on Sui smart contracts to manage Walrus resources and processes such as payments and epochs. Walrus also says it can store encrypted blobs, but it is not a distributed key management system. In each case, the boundary reduces confusion. It tells builders what to expect from each layer.

When a system draws boundaries like this, it becomes easier to reason about it. You can say, "the chain attests to responsibility and time," and you can say, "the storage network holds the bytes," and you can say, "clients can verify what they read." Those sentences are simple, but they are valuable. They let different communities build together even when they do not share the same assumptions.

In the end, Walrus treats storage less like a place and more like a relationship. Content is heavy, so it stays where it can be distributed and served. Metadata is light, so it goes where it can be agreed upon and verified. That division is not only a scaling strategy. It is a way to keep the truth of a file from depending on whoever happens to serve it today. @Walrus 🦭/acc #Walrus $WAL
The Optional Layer: Publishers, Aggregators, and Caches in Walrus
A storage system is often judged by what it promises. But it is shaped, quietly, by what it refuses to promise. In an open network, you cannot assume every participant will be kind, competent, or even present. You also cannot assume every user wants to speak the language of protocols. Most people simply want to store something and retrieve it later. Between these two realities, optional infrastructure often appears. It is not the core of a system, but it becomes the part people touch first.

Walrus is a decentralized storage protocol designed for large, unstructured data called blobs. A blob is simply a file or data object that is not stored as rows in a database table. Walrus focuses on storing and reading blobs and proving their availability. It aims to keep content retrievable even if some storage nodes fail or behave maliciously. In distributed systems this possibility is often described as a Byzantine fault. It means a node can be offline, buggy, or dishonest.
Walrus separates two things that are often tangled. The blob content stays off-chain on Walrus storage nodes. The coordination, payments, and public signals of availability live on the Sui blockchain. Walrus states that metadata is the only blob element exposed to Sui or its validators. This matters because it sets the stage for the optional layer. If content is off-chain, then there must be practical ways for people to upload and read content without turning the blockchain into a file server. If availability is a verifiable claim, then there must be ways to serve data quickly without asking users to trust the server that served it. Walrus names three optional actors that help with this: aggregators, caches, and publishers. “Optional” here is not a marketing word. It is a security posture. Walrus does not require these actors for correctness. A user can read by reconstructing blobs directly from storage nodes. A user can write by interacting with Sui and storage nodes directly. Optional actors exist because real systems need bridges. They translate between the decentralized storage world and familiar Web2 tools such as HTTP. An aggregator is a client that reconstructs complete blobs from individual slivers and serves them to users over traditional Web2 technologies like HTTP. The word “sliver” comes from how Walrus stores data. Walrus uses erasure coding, a method that adds redundancy without copying the full blob many times. Erasure coding transforms a blob into many encoded parts. Walrus groups multiple symbols into a sliver and assigns slivers to shards. Storage nodes manage shards during a storage epoch and store the slivers for their assigned shards. When someone wants the blob back, enough slivers can be fetched and the blob can be reconstructed. This is the act an aggregator performs on behalf of a reader. It is not magic. It is reconstruction plus delivery. A cache is an aggregator with extra caching functionality. Caches reduce latency and reduce load on storage nodes by keeping reconstructed results around for reuse. Walrus also describes cache infrastructures that can act as CDNs. The key point is not the label. The key point is that the same blob, once reconstructed, can be served many times without repeating the full reconstruction work each time. This can make reads more practical for popular content, while keeping the storage network from being overwhelmed by repeated heavy operations. A publisher is a client that helps end users store blobs through Web2 technologies while using less bandwidth and offering custom logic. Walrus describes a publisher as a service that can receive a blob over HTTP, encode it into slivers, distribute slivers to storage nodes, collect storage node signatures, aggregate signatures into a certificate, and perform the required on-chain actions. This changes the experience for many users. Instead of needing to run local tools that talk to both Sui and many storage nodes, a user can upload once, in a familiar way, and the publisher can drive the multi-step protocol forward. It is important that Walrus does not treat these actors as trusted pillars. Walrus explicitly states that aggregators, publishers, and end users are not considered trusted system components, and they might arbitrarily deviate from the protocol. That sentence is easy to read quickly, but it carries a strong design intention. The system is built so that these actors can exist, can be useful, and can even be widespread, without becoming a single point of trust.
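The shape of that role can be sketched in a few lines, under loud assumptions: `fetch_sliver` is an injected stand-in for network calls to storage nodes, and reconstruction is simple concatenation rather than real erasure decoding.

```python
from typing import Callable, Optional

class ToyAggregator:
    """Illustrative only: reconstructs a 'blob' from slivers fetched from
    storage nodes and optionally caches the result. Real Walrus aggregators
    perform erasure decoding and verify against the blob ID commitment."""

    def __init__(self, fetch_sliver: Callable[[str, int], Optional[bytes]],
                 n_slivers: int, threshold: int):
        self.fetch_sliver = fetch_sliver  # (blob_id, shard_index) -> sliver or None
        self.n_slivers = n_slivers
        self.threshold = threshold        # how many slivers suffice to reconstruct
        self.cache: dict[str, bytes] = {}

    def get(self, blob_id: str) -> Optional[bytes]:
        if blob_id in self.cache:          # a cache is an aggregator plus reuse
            return self.cache[blob_id]
        slivers = []
        for shard in range(self.n_slivers):  # in practice, requests go in parallel
            s = self.fetch_sliver(blob_id, shard)
            if s is not None:
                slivers.append(s)
            if len(slivers) >= self.threshold:
                break
        if len(slivers) < self.threshold:
            return None                    # not enough responses; cannot reconstruct
        blob = b"".join(slivers)           # stand-in for erasure decoding
        self.cache[blob_id] = blob
        return blob
```

What matters is what the sketch lacks: any authority over correctness. A client can still check the reconstructed bytes against the blob ID, which is exactly where the verification model comes in.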
This is where Walrus’s verification model matters. Each blob has an associated blob ID that helps authenticate data. Walrus describes computing hashes of sliver representations for each shard, placing them into a Merkle tree, and using the Merkle root as the blob hash. A Merkle tree is a structure that lets many pieces of data be committed under one root hash, while still allowing later verification that individual pieces match the commitment. With this model, a reader can verify that what they received matches what the writer intended, using the authenticated metadata tied to the blob ID. This is why Walrus can say that a client can verify reads from cache infrastructures are correct. The cache can speed up delivery, but correctness does not have to depend on trusting the cache. The same principle applies to publishers. A publisher can make storage easier, but the user is not required to take the publisher’s word for it. Walrus describes a way for an end user to verify that a publisher performed its duties correctly. The user can check that an event associated with the point of availability exists on-chain. Then the user can either perform a read and see whether Walrus returns the blob, or encode the blob and compare the result to the blob ID in the certificate. This is a practical audit path. It does not rely on the publisher being honest. It relies on publicly observable events and verifiable content commitments. To see why these optional actors exist, it helps to look at the write path in Walrus. A user first acquires a storage resource on Sui. Storage resources can be owned, split, merged, and transferred. When the user wants to store a blob, the user erasure-codes it and computes its blob ID. The user then updates a storage resource on Sui to register the blob ID, emitting an event that storage nodes listen for. The user sends blob metadata to all storage nodes and sends each sliver to the node managing the corresponding shard. Storage nodes verify slivers against the blob ID and check authorization via the on-chain resource. If correct, they sign statements and return them. The user aggregates enough signatures into an availability certificate and submits it on-chain. When verified, an availability event is emitted. That event marks the point of availability, the moment when Walrus takes responsibility for maintaining availability for the availability period. A publisher can perform many of these steps on behalf of the user. It can receive the blob through HTTP, do the encoding, handle distribution, gather signatures, submit on-chain actions, and return results. This can reduce bandwidth for the user and simplify the process. But it also introduces a new place where things can go wrong. A publisher might be buggy. It might be overloaded. It might behave dishonestly. Walrus’s stance is that the publisher is helpful, but not trusted. The user can still check what happened by looking for on-chain events and by verifying blob IDs through reconstruction and hashing. The read path also explains the role of aggregators and caches. Walrus describes reading a blob by first obtaining metadata for the blob ID from any storage node and authenticating it using the blob ID. The reader then requests slivers from storage nodes and waits for enough to respond. Requests can be sent in parallel to ensure low latency. The reader authenticates returned slivers, reconstructs the blob, and decides whether the contents are valid or inconsistent. 
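The Merkle commitment underlying this verification is easy to sketch. The toy below (plain SHA-256, odd levels padded by duplicating the last node, no domain separation) shows the shape of the idea: commit many sliver hashes under one root, then verify a single sliver with a short proof. It is illustrative, not Walrus's exact tree construction.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of leaf hashes up to a single root."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels (toy choice)
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Collect sibling hashes (and whether each sibling sits on the right)."""
    proof, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = i + 1 if i % 2 == 0 else i - 1
        proof.append((level[sib], sib > i))
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    node = leaf
    for sibling, is_right in proof:
        node = h(node + sibling) if is_right else h(sibling + node)
    return node == root

# Toy usage: sliver hashes as leaves, the root as the blob commitment.
slivers = [b"sliver-0", b"sliver-1", b"sliver-2", b"sliver-3"]
leaves = [h(s) for s in slivers]
root = merkle_root(leaves)
assert verify(leaves[2], merkle_proof(leaves, 2), root)
```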
A cache can sit in the middle and serve reconstructed blobs over HTTP. If the cache does not have the blob, it behaves like an aggregator and performs reconstruction. If the cache has it, it can serve it directly, and the client can still verify correctness. This makes the optional layer feel less like a “shortcut” and more like a “lens.” It is a way of seeing the same system through interfaces people already know. Walrus states that it supports flexible access through CLI tools, SDKs, and Web2 HTTP technologies. It also states that it is designed to work well with traditional caches and content distribution networks, while ensuring all operations can also be run using local tools to maximize decentralization. That last phrase is important. It implies two futures that can coexist. One future is convenience through shared infrastructure. Another future is independence through local tooling. Walrus tries to keep both possible. There is also a boundary in what Walrus does not attempt to build. Walrus explicitly says it does not reimplement a CDN that is geo-replicated or has extremely low latency. Instead, it ensures that traditional CDNs are usable and compatible with Walrus caches. This is a pragmatic choice. It accepts that the web already has a performance layer, and it focuses on making that layer compatible with verification. Similarly, Walrus does not reimplement a full smart contracts platform. It relies on Sui smart contracts to manage Walrus resources and processes, including payments, storage epochs, and governance. If you step back, publishers, aggregators, and caches represent a simple philosophy: do not force every user to become an operator, but do not force every user to trust an operator. Offer services that make the system easier to use, then provide tools to audit them. Keep the core protocol strong enough that optional infrastructure can exist without being a hidden dependency for correctness. In the end, the optional layer is not just about speed or convenience. It is about giving decentralized storage a human shape. People want HTTP because it is familiar. They want caching because latency is real. They want simplified uploads because complexity is costly. Walrus acknowledges those needs, but it also keeps a disciplined boundary: intermediaries can help, but verification must remain possible. That balance is one of the most practical forms of decentralization. @Walrus 🦭/acc #Walrus $WAL
From Testnet to Mainnet: What Walrus Changed When It Became “Real”
A protocol can exist for months as an idea that behaves well in controlled weather. Then one day it is placed in the open. Real users arrive with real files, real expectations, and real consequences for mistakes. That moment is not only technical. It is moral. A system that says “store your data here” is making a promise that touches time. Walrus reached that kind of moment with its Mainnet announcement dated March 27, 2025. In that announcement, Walrus said the production Mainnet was live and operated by a decentralized network of over 100 storage nodes. It also stated that Epoch 1 began on March 25, 2025. These are operational facts, not slogans. They describe a network that is meant to be used for publishing and retrieving blobs, browsing Walrus Sites, and using the Mainnet WAL token for staking and committee selection. To understand why Mainnet is a threshold, it helps to remember what Walrus is trying to be. Walrus is a decentralized storage protocol designed for unstructured content, called blobs. A blob is simply a file or data object that is not stored as rows in a database table. Walrus aims to keep blobs retrievable even if some storage nodes fail or behave maliciously. It coordinates payments, attestations, and system state through the Sui blockchain, while keeping blob contents off-chain on storage nodes and caches. Walrus is explicit that metadata is the only blob element exposed to Sui or its validators.
Mainnet, in this context, is the environment where these security properties are meant to hold under stated assumptions, and where the economics are no longer simulated. Publishing blobs on Mainnet consumes real WAL and SUI, according to the same announcement. That simple sentence changes the meaning of everything. It forces the protocol to live with cost, incentives, and abuse resistance in a way that testnets often postpone. The Mainnet announcement listed a set of changes that reveal what the team prioritized as they crossed the line from “working prototype” to “public infrastructure.” One change was about how blobs can carry meaning beyond raw bytes. Walrus introduced “blob attributes,” describing that each Sui blob object can have multiple attributes and values attached to it to encode application metadata, and that the aggregator uses this for common HTTP headers. This is an example of a small bridge between the on-chain world of metadata and the off-chain world of web delivery. A browser does not only need the file. It often needs a content type, caching hints, and other headers that make the web behave normally. Attaching attributes at the blob-object level is a way to support those familiar semantics without placing the blob itself on-chain. Another change was about letting users clean up after themselves. The announcement described “burn blob objects on Sui,” saying the walrus CLI tool was extended with commands to burn Sui blob objects to reclaim the associated storage fee. In plain terms, this is an exit hatch. It acknowledges that storage is not always forever, and that users sometimes need to reclaim costs when a blob is no longer needed. A storage system becomes more trustworthy when it also gives you a clear way to stop paying for something you no longer want. Some changes were about time, because time is a constant problem in storage. The announcement described CLI expiry time improvements: blob expiry can be expressed more flexibly using an epochs maximum, an RFC3339 date with an “earliest expiry time,” or a concrete end epoch. This reflects how Walrus frames storage as a lifetime, coordinated on Sui. If a system uses epochs and availability periods, users need a clear language to express “how long.” Improving expiry controls is not a cosmetic change. It is a way to make the protocol usable without forcing every user to think like a protocol designer. A major technical change was about the core erasure coding foundation. The announcement said RedStuff changed from RaptorQ to Reed–Solomon codes. It also said benchmarking suggested similar performance for their parameter set, and that Reed–Solomon provides “perfect robustness” in the sense that blobs can always be reconstructed given a threshold of slivers. Walrus already frames itself as a system that trades full replication for erasure coding, aiming for predictable overhead (Walrus documents an expansion around 4.5–5×) while keeping availability strong. So a change in the underlying code family is not a minor refactor. It is a statement about the shape of reliability the protocol wants. Reed–Solomon codes are widely known as deterministic and threshold-based, which fits Walrus’s emphasis on verifiable, reconstructible storage. Some Mainnet changes were about making the network accessible to normal web clients without weakening security. 
The announcement described TLS handling for storage nodes, saying nodes can be configured to serve publicly trusted TLS certificates such as those issued by cloud providers and public authorities like Let’s Encrypt. It also stated that this allows JavaScript clients to directly store and retrieve blobs from Walrus. This is significant in a quiet way. The browser is the most common runtime in the world, but it is strict about security boundaries. If a storage network cannot speak modern TLS, it is pushed behind gateways and proxies, which can become trust choke points. Supporting publicly trusted TLS on storage nodes helps reduce friction for direct client interaction.
Another change addressed a different kind of boundary: who is allowed to use a publishing service. The announcement described JWT authentication for the publisher, saying the publisher can be configured to only provide services to authenticated users by consuming JWT tokens distributed through any authentication mechanism. This is not about turning Walrus into a closed system. Walrus still describes publishers as optional. It is about operational safety when real costs exist. If publishing consumes real WAL and SUI, then an open publisher endpoint can be drained by abuse. Adding authentication and accounting is a way to make “helpful infrastructure” viable without becoming an unlimited subsidy. This theme of operational maturity appears again in “extensive logging and metrics,” described as available across services including storage nodes, aggregator, and publisher. The announcement also mentioned a health endpoint and a CLI health command to check status and basic information of storage nodes, which is useful for allocating stake and monitoring the network. In a decentralized system with committees and delegated stake, observability is not optional. People make decisions based on it. Operators diagnose failures with it. Delegators need it to evaluate reliability. Without it, decentralization becomes guesswork. Walrus Sites also received attention in the Mainnet update. The announcement said the public portal for Walrus Sites would be hosted on the wal.app domain. It also said Walrus Sites support deletable blobs to make updates more capital efficient. It listed several Walrus Sites on Mainnet, including Staking, Docs, Snowreads, and Flatland. This matters because hosting is a practical test of the protocol. A site is many blobs: HTML, CSS, JavaScript, images. It is accessed through HTTP. It is updated over time. If a storage protocol cannot support these web patterns, it risks remaining a niche tool for archival storage only. Walrus seems to be positioning itself as compatible with the web’s delivery habits while keeping verification possible through blob IDs and authenticated metadata. The Mainnet announcement also described a subsidies contract operated by the Walrus foundation to acquire subsidized storage, with the CLI using it automatically when storing blobs. This is another sign of transition. Mainnet needs real economics, but it also needs adoption. Subsidies are a bridge for early users who want to try the system without immediately managing full cost exposure. Importantly, this is described as a foundation-operated contract rather than an invisible discount, which fits Walrus’s general preference for explicit, on-chain mediated processes. Then there is the social infrastructure of code. The announcement said Walrus is open source, that the protocol’s health is overseen by an independent foundation, and that the codebase is open sourced under the Apache 2.0 license and hosted on GitHub. It also said the walrus-docs repository was being retired because the main Walrus repository now contains documentation and smart contracts. Open sourcing is not proof of correctness, but it is part of how a protocol invites auditing and community contribution. In a system where availability and verification matter, visibility into implementation is a form of accountability. The announcement also spoke about testnet plans: the current testnet would be wiped and restarted to align the codebase to Mainnet, and testnet would be wiped regularly every few months going forward. 
It also said developers should use Mainnet for stability, and that there would not be a public portal to serve Walrus Sites on testnet to reduce costs and incident response associated with free web hosting. These are not exciting details, but they are honest ones. They acknowledge that testnet is a tool for experimentation, and that stability is a property of Mainnet, not something promised everywhere. If you put all of these changes together, you can see a consistent story. Walrus did not become “more complicated” for its own sake. It became more explicit about lifetimes, more careful about operational abuse, more compatible with web delivery realities, and more measurable for node operators and delegators. It also sharpened its core reliability foundation by moving RedStuff to Reed–Solomon codes. In a decentralized storage protocol, Mainnet is the moment when the system must handle disagreement as a normal condition. Users might refuse to pay. Publishers might be abused. Nodes might fail. Some nodes might be malicious. Walrus’s architecture anticipates Byzantine faults and organizes storage nodes into epoch-based committees, coordinated by Sui smart contracts. Mainnet is where these ideas meet the world’s messy incentives. None of this is a promise of perfection. It is a choice of what kind of imperfection the system is designed to survive. Walrus’s Mainnet release reads like a set of decisions that accept the ordinary burdens of production: cost, abuse, monitoring, upgrades, and time. In that acceptance, a protocol stops being only an idea. It becomes a place where people might responsibly store something they do not want to lose. @Walrus 🦭/acc #Walrus $WAL
Availability Periods: What “Lifetime” Means for Stored Data in Walrus
Time is the one resource every system must eventually acknowledge. A file can feel permanent when it is on your screen, but permanence is not a property of pixels. It is a property of continued care. Someone must keep the data. Someone must pay the cost. Someone must decide how long that responsibility lasts. In many digital products, this truth is hidden behind a friendly interface. In decentralized systems, it needs a clearer shape, because the people who store data and the people who rely on it are often not the same, and they may not fully trust each other.
Walrus is a decentralized storage protocol designed for large, unstructured data called blobs. A blob is simply a file or data object that is not stored as rows in a database table. Walrus aims to keep blobs available and retrievable even when some storage participants fail or act maliciously. In distributed systems this is often called a Byzantine fault. A node may be offline, buggy, or dishonest. Walrus designs for that reality by distributing encoded data across many storage nodes and allowing reconstruction from a subset, rather than depending on a single trusted keeper. But “distributed” is only one part of the story. The other part is the lifetime of the promise. Walrus does not treat storage as an indefinite hope. It treats storage as a responsibility tied to time. This is where the idea of an availability period becomes central. Walrus defines a Point of Availability, or PoA, for each stored blob ID. PoA is the moment when the system takes responsibility for maintaining the blob’s availability. Before PoA, the client that is storing the blob is responsible for ensuring it is properly uploaded and available. After PoA, Walrus is responsible for maintaining availability for the duration of the availability period. Walrus makes both the PoA and the availability period observable through events on the Sui blockchain. This is a quiet but important shift in how storage is described. Many systems talk about storage as a place. Walrus talks about storage as a time-bounded obligation, with a public marker for when that obligation begins. The availability period is the length of time Walrus commits to keeping a blob available after PoA. It is not a vague expectation. It is part of the protocol’s observable record. That means other parties can verify it. They do not need to ask the uploader. They do not need to trust a cache. They can look at the Sui chain events that represent PoA and the blob’s availability window. Walrus uses Sui smart contracts to coordinate storage operations as resources that have a lifetime, as well as payments. Walrus also uses Sui for governance that determines which storage nodes hold each storage shard. Yet Walrus keeps a strict boundary: metadata is the only blob element exposed to Sui or its validators. The content of blobs is stored off-chain on Walrus storage nodes and optional caches. Storage nodes or caches do not need to overlap with any Sui infrastructure components, such as validators. Walrus storage epochs can also differ in length and timing from Sui epochs. This separation helps explain why lifetime is handled on-chain while content is not. A blockchain is good at making small facts public and hard to rewrite. It is not meant to replicate large files at scale. Walrus uses the chain to record the facts that matter for coordination: who is responsible, for how long, and under what resource constraints. The data itself stays off-chain, where it can be served and reconstructed without burdening validators. To see how a lifetime is created, it helps to look at what Walrus calls storage resources. Storage space is represented on Sui as a resource that can be owned, split, merged, and transferred. A user can purchase storage space for some duration from the Walrus system object, or acquire it from others through a secondary market. This already hints at a different model of storage. Instead of buying “a plan,” you acquire a resource object that represents capacity and time. Because it is an object, it can be composed. It can be divided to fit a smaller blob. 
It can be merged to fit a larger one. It can be transferred to another owner. When a user wants to store a blob, they first erasure-code it and compute its blob ID. Walrus uses erasure coding through a construction called RedStuff, based on Reed–Solomon codes. Erasure coding is a method that adds redundancy without copying the entire file everywhere. It turns a blob into many pieces such that the blob can be reconstructed from a sufficient subset of those pieces. Walrus groups encoded pieces into slivers and assigns them to shards. Storage nodes manage shards during a storage epoch and store the slivers for their assigned shards. Walrus describes the encoding expansion as about 4.5–5× the original blob size, and it notes that this overhead is independent of the number of shards and the number of storage nodes. This encoding and distribution support availability under failures, but the availability period defines the time window during which the system is obligated to maintain it. So the user must connect the blob ID to a storage resource with a lifetime. They do this by going on-chain and updating a storage resource to register the blob ID with the desired size and lifetime. This emits an event that storage nodes listen for, which signals both expectation and authorization for off-chain storage operations.
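Before following the slivers out to the nodes, it helps to make the erasure-coding idea concrete. Real Reed–Solomon math is beyond a short sketch, but a single XOR parity piece shows the core trade of redundancy without full copies. This toy tolerates the loss of any one piece; RedStuff tolerates far more, at the roughly 4.5–5× expansion noted above.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_pieces: list[bytes]) -> list[bytes]:
    """k data pieces plus 1 XOR parity piece (all equal length).
    Any k of the k+1 pieces suffice to reconstruct the original data."""
    parity = data_pieces[0]
    for p in data_pieces[1:]:
        parity = xor_bytes(parity, p)
    return data_pieces + [parity]

def reconstruct(pieces: list[bytes | None]) -> list[bytes]:
    """Recover at most one missing piece by XOR-ing the survivors."""
    missing = [i for i, p in enumerate(pieces) if p is None]
    if len(missing) > 1:
        raise ValueError("this toy code tolerates only one loss")
    if missing:
        acc = bytes(len(next(p for p in pieces if p is not None)))
        for p in pieces:
            if p is not None:
                acc = xor_bytes(acc, p)
        pieces[missing[0]] = acc
    return pieces[:-1]  # drop parity; the rest is the original data

# Lose any single piece and still recover the blob.
blob = [b"suix", b"walr", b"usbl", b"obs!"]
stored = encode(blob)
stored[1] = None  # a storage node disappears
assert reconstruct(stored) == blob
```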
After that event, the user sends blob metadata to all storage nodes and sends each sliver to the storage node that currently manages the corresponding shard. Each storage node checks that what it receives matches the blob ID and checks that there is an on-chain blob resource authorized to store that blob. If these checks pass, the node signs a statement that it holds the sliver for that blob ID and returns it to the user. At this stage, the blob exists in pieces across the storage network, but the system has not yet publicly committed to maintaining it. Walrus makes that commitment through certification. The user aggregates the storage node signatures into an availability certificate and submits it to the chain. When the certificate is verified on-chain against the current Walrus committee, an availability event is emitted for the blob ID. That event marks the PoA. From that moment onward, Walrus is responsible for maintaining the blob’s availability for the availability period that was defined. This is the moment when lifetime becomes legible to others. It is no longer “the user says it is stored.” It is “the system emitted the event that marks responsibility.” In many applications, that difference matters more than the raw act of upload, because it creates a shared timeline that multiple parties can reference. The availability period also helps clarify what Walrus is promising and what it is not. Walrus is not a CDN. Walrus explicitly says it does not reimplement a geo-replicated CDN that aims for extremely low latency everywhere. Instead, Walrus is designed to work well with traditional caches and CDNs. That means availability and delivery are not the same thing. Availability is about the ability to retrieve the blob from the system within the period, even if some nodes fail. Delivery speed can be improved through caches, aggregators, and CDNs. Those components help with latency and bandwidth. But they do not define the lifetime of the promise. The lifetime comes from the availability period and the on-chain events that describe it. This is why Walrus treats extension as part of the protocol. A blob’s storage can be extended by adding a storage object with a longer expiry period. Walrus notes that this facility can be used by smart contracts to extend the availability of blobs stored in perpetuity, as long as funds exist to continue providing storage. The word “perpetuity” here does not mean “free forever.” It means a pattern in which the system can keep extending the availability period as long as resources are supplied. In human terms, it is like renewing a lease. The house stays the same, but the agreement is extended. Walrus also describes refresh availability as an on-chain process that requires no content data. To request an extension to the availability of a blob, a user provides an appropriate storage resource. Upon success, an event is emitted that storage nodes receive, and they extend the time for which each sliver is stored. This detail matters because it means you do not need to re-upload gigabytes to extend time. You renew the responsibility window through the coordination layer. From the perspective of application builders, the availability period becomes a planning tool. If you are storing a dataset or a media asset, you can choose a duration that matches the expected use. If your asset is seasonal, you might store it for a shorter period and renew if demand persists. 
If your asset is foundational, you might build an automated renewal process, potentially in smart contract logic, so that the availability period is extended as long as funding remains. Walrus makes this possible because stored blobs are represented by objects on Sui, which means smart contracts can check whether a blob is available and for how long, extend its lifetime, or optionally delete it. The “optionally delete it” part is another way Walrus treats lifetime as a real concept rather than a marketing claim. Some data should not live forever. Sometimes you want deletability for updates, especially for web resources that change. Walrus Sites, for example, are described as supporting deletable blobs on Mainnet to make updates more capital efficient. Deletion does not mean the system forgets it ever existed. It means the ongoing responsibility to keep it retrievable is no longer in effect after the defined actions. This is consistent with the larger idea: storage is a managed obligation. Lifetime also connects to Walrus’s handling of inconsistency. Walrus describes the case where a blob ID is not correctly encoded. In that case, an inconsistency proof certificate can be uploaded on-chain later. This emits an inconsistent blob event, signaling that reads return None for the blob ID. Storage nodes can delete slivers belonging to inconsistent blobs, except for an indicator to return None. This is an important kind of honesty about lifetime. The system may still be within the availability period, but it can publicly declare that the blob does not resolve to valid content under the protocol’s rules. In other words, “available” is not allowed to mean “some bytes from somewhere.” It is tied to verifiable consistency. That protects the meaning of the blob ID over time. Walrus also describes two levels of client consistency checks when reading: default and strict. The default check verifies the read data using authenticated metadata, and the strict check re-encodes the blob and recomputes the blob ID to ensure the encoding is consistent. This matters for lifetime because it influences what “retrievable” should mean for a reader throughout the availability period. Walrus’s strict check provides stronger guarantees that any correct client attempting to read during the blob’s lifetime will always succeed and read the same data, assuming the blob was encoded correctly. In practice, this gives builders a way to choose the assurance level that matches their risk tolerance, without changing the lifetime mechanics themselves. If you look at Walrus through the lens of time, you begin to see why epochs appear in the design. Walrus is operated by a committee of storage nodes that evolves between storage epochs. A Sui smart contract controls how shards are assigned to storage nodes within storage epochs. On Mainnet, Walrus describes storage epochs as lasting two weeks. Walrus assumes that more than two-thirds of shards are managed by correct storage nodes within each epoch, tolerating up to one-third Byzantine shards. This epoch structure gives the system a cadence for governance and operational processes. But it is distinct from the availability period, which is the lifetime of a particular blob’s storage commitment. An availability period can span multiple epochs. That is why Walrus also describes a storage fund that holds funds for storing blobs over one or multiple storage epochs, and that payments are made each epoch to storage nodes according to performance. 
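Because epochs and availability periods are both measured in time, a small sketch helps keep them apart. Assuming fixed two-week epochs and taking the Mainnet Epoch 1 start date as a reference point, the toy below translates an RFC3339 target date into a concrete end epoch; the function name and arithmetic are illustrative, not the CLI's actual implementation.

```python
from datetime import datetime, timedelta, timezone

EPOCH_LENGTH = timedelta(weeks=2)                     # Mainnet storage epochs: two weeks
GENESIS = datetime(2025, 3, 25, tzinfo=timezone.utc)  # Epoch 1 start, per the announcement

def end_epoch_for(earliest_expiry: str) -> int:
    """Smallest epoch whose end falls at or after the requested RFC3339 time.
    Toy arithmetic: assumes fixed-length epochs counted from GENESIS."""
    target = datetime.fromisoformat(earliest_expiry)
    epochs = int((target - GENESIS) / EPOCH_LENGTH)
    if GENESIS + epochs * EPOCH_LENGTH < target:
        epochs += 1
    return 1 + epochs  # epochs numbered from 1 at genesis

# "Keep this until at least June 1, 2026" becomes a concrete end epoch.
print(end_epoch_for("2026-06-01T00:00:00+00:00"))
```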
In plain language, epochs organize who is responsible right now and how the system pays and governs storage nodes. Availability periods organize how long a particular blob is supposed to remain retrievable after PoA. Both are time concepts, but they answer different questions. A final detail from Walrus’s on-chain governance section helps clarify the shape of long-term planning. Walrus notes that there is a maximum number of storage epochs in the future for which storage can be bought, approximately two years. This is another example of making time explicit. It sets a boundary on how far into the future a user can prepay for storage within the protocol’s standard mechanism. It does not prevent a user from renewing repeatedly, but it prevents indefinite precommitment far beyond the system’s governance horizon. Walrus’s Mainnet announcement also mentions improvements to how expiry time can be expressed when storing blobs via the CLI, including options such as specifying a maximum number of epochs, an RFC3339 date, or a concrete end epoch. This is practical evidence that lifetime is meant to be used by real operators, not just described in theory. People need simple ways to say, “keep this until then.” When you put all these pieces together, the availability period becomes a kind of contract between the system and its users. It is not a legal contract. It is a protocol contract. It has a visible start, PoA. It has a visible duration. It has on-chain events that make it checkable. It has renewal mechanisms that do not require re-upload. It has failure states, like inconsistency, that are expressed clearly rather than hidden. And it works without putting blob contents on-chain, because Walrus stores content off-chain while using Sui to coordinate responsibility and proof. In a philosophical sense, this is a modest design. It does not promise eternity. It promises a window, and a method to keep opening new windows if you choose to. It acknowledges that storage is always an ongoing act. In a technical sense, it is also a useful design. It gives builders a reliable way to reason about time, responsibility, and verification in a decentralized environment. And it gives communities a shared reference for what it means to say, “this data will be there.” @Walrus 🦭/acc #Walrus $WAL
@Walrus 🦭/acc introduces the WAL token as part of its economics and incentive mechanisms. On the official token page, WAL is described as the payment token for storage. One specific design detail: users pay upfront for a fixed storage time, and the WAL paid is distributed across time to storage nodes and stakers as compensation. This “streaming” concept matches the idea that storage is an ongoing service, not a one-time action. The same page frames the mechanism as aiming for storage costs that are stable in fiat terms, to reduce long-term price fluctuation effects. That’s a design goal, not a price promise. #Walrus $WAL
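A toy model of that streaming idea, with invented numbers: the upfront WAL payment is released across the purchased epochs rather than paid out at once. Real Walrus accounting (per-epoch, performance-weighted payouts through the storage fund) is more involved; this only shows the shape.

```python
def payment_schedule(total_wal: float, n_epochs: int) -> list[float]:
    """Toy linear release: an equal share of the upfront payment per epoch."""
    per_epoch = total_wal / n_epochs
    return [per_epoch] * n_epochs

# 26 WAL paid upfront for one year of two-week epochs -> 1 WAL per epoch.
print(payment_schedule(26.0, 26))
```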
PoA as a Product Primitive: Building with a Verifiable Moment
Every useful system has a few simple building blocks. They are not always glamorous. They are often small ideas that quietly make everything else possible. In money, it might be settlement. In communication, it might be delivery. In storage, it is often availability. Not the vague feeling that a file “should be there,” but a concrete answer to a concrete question: when does the system take responsibility for keeping this data retrievable, and how can anyone verify that responsibility? Walrus treats that question as a first-class concept through something called the Point of Availability, or PoA. Walrus is a decentralized storage protocol designed for large, unstructured data called blobs. A blob is simply a file or data object that is not stored as rows in a database table. Walrus is built for settings where some participants can fail or act maliciously, a condition often described as Byzantine faults. A node might be offline, buggy, or dishonest. Walrus aims to keep blobs retrievable despite such conditions, using erasure coding and distributed storage nodes. PoA is the moment when Walrus moves from “someone is trying to store this” to “the system is now responsible for maintaining it.” That shift matters because it draws a clear boundary between effort and obligation. Before the PoA, the client is responsible for ensuring the blob is properly uploaded and available. After the PoA, Walrus is responsible for maintaining availability for a defined availability period. Walrus makes both the PoA and the availability period observable through events on the Sui blockchain. This is why PoA can be treated as a product primitive. A primitive is a basic unit that other things can be built from. It is not a full application. It is a reliable piece of logic that can be composed into many applications. PoA works this way because it is public, verifiable, and tied to time. Walrus achieves that by separating content from coordination. Blob contents are always stored off-chain on Walrus storage nodes and caches. Walrus states that metadata is the only blob element exposed to Sui or its validators. Sui is used for coordination, governance, and payments. Storage capacity exists as a resource on Sui that can be owned, split, merged, and transferred. A user can acquire storage for a duration and then associate it with a blob ID. When a user wants to store a blob, they erasure-code it and compute a blob ID. Walrus uses an erasure code construction called RedStuff based on Reed–Solomon codes. The blob is encoded into pieces, grouped into slivers, and distributed across shards managed by storage nodes during a storage epoch. The encoding expands the blob size by about 4.5–5×, independent of the number of shards and storage nodes. This redundancy is part of how Walrus remains robust when some nodes fail. PoA is reached through a flow that intentionally bridges off-chain work and on-chain accountability. The user registers the blob ID on Sui by updating a storage resource, which emits an event that storage nodes listen to. The user then sends blob metadata and slivers to storage nodes according to shard assignment. Storage nodes verify slivers against the blob ID and verify that an authorized on-chain storage resource exists for that blob ID. If correct, they sign statements attesting they hold the slivers. The user aggregates enough signatures into an availability certificate and submits it on-chain. When the certificate is verified, Sui emits an availability event for the blob ID. That event marks the PoA.
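Treating PoA as a “go” signal only requires watching for the availability event. The sketch below is deliberately abstract: the event names and field shapes are hypothetical, and `events` stands in for whatever Sui event stream or indexer a client actually uses.

```python
from typing import Iterable, Optional

def wait_for_poa(events: Iterable[dict], blob_id: str) -> Optional[dict]:
    """Scan an event stream (hypothetical shape) for the availability event
    that marks the PoA for a given blob ID. Returns the event, or None."""
    for event in events:
        if event.get("type") == "BlobCertified" and event.get("blob_id") == blob_id:
            return event  # PoA reached: the system now owns availability
    return None

def proceed_if_available(events: Iterable[dict], blob_id: str) -> str:
    """Application logic gated on PoA rather than on 'the upload finished'."""
    poa = wait_for_poa(events, blob_id)
    if poa is None:
        return "do not reference this blob yet"
    end_epoch = poa.get("end_epoch")  # the availability period is part of the record
    return f"safe to reference {blob_id} until epoch {end_epoch}"

# Hypothetical event shapes, for illustration only.
stream = [
    {"type": "BlobRegistered", "blob_id": "0xabc"},
    {"type": "BlobCertified", "blob_id": "0xabc", "end_epoch": 42},
]
print(proceed_if_available(stream, "0xabc"))
```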
If you are building an application, this changes how you design your workflow. You do not have to treat storage as a hopeful side effect. You can treat it as a checkable condition. Instead of saying, “we uploaded the file, now proceed,” you can say, “we saw the PoA event, now proceed.” This is especially useful for systems that involve multiple parties, where one party needs to prove to another party that a blob is available, without sharing the full blob. Walrus explicitly supports proving and verifying availability. Because PoA is observable on Sui, a third party can verify the existence of the event as evidence that the system took responsibility for availability. PoA also pairs naturally with the idea of time-bound promises. The availability period is not an afterthought. It is part of the on-chain record. This makes it easier for applications to reason about expiration, renewals, and long-term guarantees. Walrus supports refreshing availability without re-uploading the content. A user can extend the duration fully on-chain by providing an appropriate storage resource, which emits an event that storage nodes use to extend how long slivers are stored. This turns “keep it alive” into a repeatable, auditable action rather than an informal agreement. It also helps that blobs are represented by objects on Sui, allowing smart contracts to check whether a blob is available and for how long, extend its lifetime, or optionally delete it. This creates a composable interface for applications. A contract does not need to understand erasure coding. It can reason about a blob through the on-chain object and events that describe its availability window. PoA becomes even more meaningful when you consider verification. In decentralized environments, performance layers often become trust layers by accident. People use caches, gateways, or CDNs because they are fast, then slowly begin to trust them. Walrus is designed so that caches and aggregators can exist without needing to be trusted. Reads can happen through Web2 HTTP technologies, and clients can still verify correctness using the blob ID and authenticated metadata. This means an application can use PoA as its “go” signal while still keeping integrity checks at the edges. There is also an honest edge case that PoA helps handle cleanly: inconsistency. Walrus acknowledges that clients are untrusted and may encode blobs incorrectly. If a correct storage node cannot recover a sliver past PoA due to incorrect encoding, it can produce an inconsistency proof, coordinate signatures from other nodes, and submit an inconsistency certificate on-chain. The contract emits an inconsistent blob event. After that, reads return None for that blob ID. This matters for application logic, because it provides a clear failure mode that can be checked and handled. The system does not silently return different data to different readers. It moves to a stable outcome. If you take PoA seriously as a primitive, you begin to design systems with fewer assumptions. You can build AI-related workflows where a dataset or model artifact is referenced only after PoA. You can build NFT or media workflows where a token points to content that reached PoA for a known period. You can build L2 availability workflows where parties need to attest that blobs are stored and retrievable. You can host decentralized web resources through Walrus Sites while using PoA and on-chain objects to reason about what is live and for how long. 
In each case, the pattern is similar: encode off-chain, certify on-chain, then treat the event as the shared truth you can point to. In the end, PoA is not a marketing concept. It is a design choice about responsibility. It says that availability is not just a technical property of storage nodes. It is a publicly checkable commitment tied to time. When a system can make that commitment legible, builders can create applications that behave more like contracts and less like hopes. @Walrus 🦭/acc #Walrus $WAL
@Walrus 🦭/acc is presented as both storage and data availability. Storage means keeping content retrievable over time. Data availability, in blockchain contexts, often means ensuring data needed to verify or execute something is accessible. Mysten’s announcement notes decentralized storage can double as a low-cost data availability layer for rollups, where sequencers upload transactions and executors reconstruct them when needed. You don’t have to build a rollup to appreciate the idea: availability is about verifiable access, not just archiving. When protocols separate execution from data, a dedicated availability layer can reduce cost and increase flexibility. #Walrus $WAL
@Walrus 🦭/acc uses the Sui blockchain as a coordination layer. Coordination includes tracking objects, managing payments, and mediating protocol actions through smart contracts. In the Walrus docs, rewards and processes are mediated by smart contracts on Sui. This is a key architectural choice: storage nodes handle the heavy data, while Sui handles the on-chain logic for ownership, duration, and incentives. For builders, it means you can integrate storage into app logic using the same chain you already use for other state. The upside is composability; the tradeoff is that on-chain interactions still require transactions and fees. #Walrus $WAL