AVS, The Tech Leveling Up Restaking: Panel Recap

Missed our latest space event on the EigenLayer ecosystem? Don't worry, we've got the highlights for you!

We brought together the brightest minds in the field - Vlad from P2P Validator, Edgar of Kiln Finance, Yannick from Chorus One, Michael from Blockless, and Mohak of ClayStack.

The discussion focused on unique challenges and breakthrough opportunities in EigenLayer, especially around slashing conditions, operator strategies, and the management of AVSs.

This conversation is a must-listen for anyone keen on understanding the future dynamics of staking and security within the blockchain space.

For those unfamiliar, could you elaborate on what exactly an AVS and an Operator are, and how they interact with EigenLayer? How do operators differ from AVSs?

Michael: For sure. AVS is an abbreviation for Actively Validated Service. What exactly does that mean? People nowadays are quite familiar with how smart contracts interact with each other on-chain, and we have the DeFi legos, we have NFTs, but within the wider ecosystem there are many things going on off-chain. For instance, a swap protocol might want a community-powered oracle system, or there's a rollup, a cross-chain bridge, or a general-purpose compute network.

All of that is happening outside of the realm of smart contracts and is living outside of Ethereum or any other layer-one blockchain. And those things have to achieve security independently of the Ethereum blockchain as well. 

So, even though, as a product or as an application, everything functions together with smart contracts, the off-chain components have to find security and performance on their own. 

This is actually a major headache for all the developers out there trying to expand the use cases of their smart contract applications by going off-chain.

So EigenLayer comes in to try to solve this issue from the angle of: "Okay, if you launch a network off-chain and you want a proof-of-stake mechanism, an economic stake, to secure it, then I'll offer you the already established economic security of Ethereum. So instead of every single network having to spin up its own token, bootstrap its own liquidity, and then use that to run a proof-of-stake mechanism for its off-chain network, you don't have to do any of that."

Doing all of that yourself is very hard and often not feasible, especially for smaller-scale applications. So networks can now simply have Ethereum staked to them to bootstrap security and liquidity.

Yannick: I agree that in this economic security model you don't need your own token at the beginning for bootstrapping. That doesn't mean the protocol won't use its own token for security in the long term, though.

EigenLayer supports dual staking here: an AVS can optionally have its native token staked alongside restaked ETH. Restaked ETH gives the AVS protocol additional security at the beginning, before much security is accessible through native token staking, so it can be used as a bootstrapping mechanism, as previously described.

Moreover, it reduces protocol inflation and sell pressure on the AVS token, since you don't need to issue a lot of new tokens to cover the APR for stakers. And the last point I'd highlight is that, in the long term, there is a continuous choice between AVS token staking and Ethereum restaking.

This gives you the optionality to decide how much external security to acquire and how much native AVS security to keep, and once the total AVS market cap becomes large enough, even the option to remove the external security entirely.
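
To make that trade-off concrete, here's a purely illustrative sketch (the numbers and the helper function are our own assumptions, not EigenLayer parameters) of how an AVS's combined economic security under dual staking might be tallied, and how the mix can shift as the native token matures:

```python
def combined_security_usd(restaked_eth, eth_price_usd, native_stake, native_price_usd):
    """Very rough proxy for an AVS's economic security under dual staking:
    the value of ETH restaked to it plus the value of its own token staked to it.
    All inputs are illustrative assumptions, not protocol parameters."""
    return restaked_eth * eth_price_usd + native_stake * native_price_usd

# Early on, rented ETH security dominates while the native token is small.
early = combined_security_usd(restaked_eth=10_000, eth_price_usd=3_000,
                              native_stake=5_000_000, native_price_usd=0.10)

# Later, the AVS can dial down external security as its own market cap grows.
later = combined_security_usd(restaked_eth=1_000, eth_price_usd=3_000,
                              native_stake=50_000_000, native_price_usd=2.00)

print(f"early: ${early:,.0f}   later: ${later:,.0f}")
```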

Why join EigenLayer instead of operating independently? More specifically, how does EigenLayer simplify the process for projects to benefit from its network and validator sets, and what are the long-term outcomes of this simplification? 

Michael: I'll talk about this from an AVS company's standpoint first. When you're building an AVS, it's mainly about two things. One is bootstrapping liquidity on your own.

The second is your native token's liquidity. All of this takes a lot of effort, and we've already talked quite a bit about it. If, as a network, you can quickly establish security and make sure your network is functioning, you can then move on to all kinds of community engagement and growth activities.

So it really makes the company's job easier, and this isn't just my view; I've talked to various other projects too. Especially considering the nature of these AVSs: they're not your regular on-chain smart contract applications. They live off-chain, and for off-chain projects, community acquisition has been difficult from the very beginning, because the hurdle for people to understand what's going on in an abstract off-chain network is inherently higher than for something built on-chain that people are already familiar with. So from the community standpoint, those AVSs have to do a really good job of educating and acquiring a crowd, and that takes a lot of effort, and there's only so much community attention out there.

So for a lot of the AVSs out there that really want to join an ecosystem: previously, they were all off-chain and not part of, say, the Ethereum ecosystem, yet that's where most of the major crypto users are. So what do they do? On the community side, EigenLayer is also providing them with a bridge here.

Restakers can stake their ETH and get to know and participate in those protocols outside of Ethereum, and those protocols, in turn, can grow and acquire a community for their networks.

Mohak: I wanted to add one point, Mike. You mentioned earlier that you can bootstrap security from restaked ETH without a token, right?

But in the beginning, why would any users rehypothecate their stake to most AVSs? Early on, the yield from an AVS is not going to be very high.

And the assumption is that most AVSs will give out their tokens as airdrops, or points, or whatever. So doesn't that contradict the claim that you can build your own security without a native token?

Rather, you actually need the token itself to bootstrap. You can't avoid the token as a bootstrapping mechanism.

Michael: I think this is pretty much a two-stage process. Originally, when EigenLayer came out, we were talking about how you don't need your own native token. But as the ecosystem evolves, everyone seems to want native token staking as well, at least that's how it feels at the moment.

And I'm still sticking to my view. As you said, people are doing airdrops and finding new ways to acquire and establish a community, a restaking community.

When AVSs do this, I see a two-stage approach: initially they offer you an airdrop, and from there they hope to acquire their own community. Let's say there are 30 to 40 different AVSs; if one doesn't have something very special to offer the community, it becomes very difficult for that AVS to acquire any meaningful attention.

I think that's partly why everyone wants a token airdrop nowadays. But going forward, especially after EigenLayer, we're going to see more of a hybrid model where people can stake not only the project's own token but also ETH. That way, people who missed out on the airdrop can still be converted to a particular project later on, according to how the project wants to grow.

Yannick: You kind of alluded to this already: a lot of it comes down to risk-reward ratios, and I think we'll talk about this in a bit more detail.

But there's obviously added risk if the same ETH stake you already have must also offer security to another project. So I think a lot of projects are realising they may need some sort of dual-token model, because if the rewards aren't great enough, you won't attract enough security for your project.

It also really depends on the type of staker. We deal a lot with institutional clients, and they're looking at this quite cautiously, trying to fully understand the risk before they move into the EigenLayer space.

And then you have more crypto-native people that may be a little bit differently placed on the risk curve. And for them, it's a lot more about optimising for rewards. 

What we're seeing right now, with points and all these things, is that a lot of the current ETH stakers depositing into the EigenLayer smart contracts are very much about optimising rewards.

And so I think, initially, to bootstrap things, that's kind of the way to go, but it's a very fine line. And I think you alluded to that there will need to be a change at some point because, obviously, it's not super sustainable in the long run. 

So, finding that fine balance between incentivizing people early on and then building something sustainable in the long run, I think it's going to be very interesting to see how that plays out.

What role does slashing play within EigenLayer, and how will the rules be determined for slashing conditions? Could you provide some examples of slashing conditions and how they are enforced within the EigenLayer ecosystem?

Vlad: Like any other proof-of-stake protocol, EigenLayer uses slashing to deter malicious actions, and the new AVSs won't be an exception. But it's a bit early to talk about slashing in detail, because for now, slashing and rewards for AVSs on EigenLayer exist only in theory.

For now, I think it's a challenge for the EigenLayer team and the AVSs to develop a robust system for slashing, because it's an undesirable function of any network: nobody wants slashing, but everyone wants rewards.

So the system will need careful development and research before it's enabled, and I'm happy that the EigenLayer team postponed enabling slashing and rewards by half a year, because it's very important to do it properly.

Yannick: I know the EigenLayer team is working very hard on this and understands its importance quite well. From an operator's perspective, the good thing, in a sense, is that we can decide which AVSs we onboard. Coming back to the kinds of clients and stakers you have, we can choose AVSs that fit our risk criteria and that have slashing conditions that make sense to us.

So there's a safeguard there, because for Kiln, P2P, and of course ourselves, this is what we do for a living. We select protocols, try to understand them, and figure out how to run them without getting slashed and, of course, how to optimise rewards. Looking at new protocols and asking whether they make sense and whether they're good protocols to run is a function we all have in our companies.

So from that perspective, you do have an additional safeguard with regard to slashing conditions: operators onboard the AVSs that fit criteria we deem safe or reasonable.

Michael: There are multiple ways to avoid unintentional slashing and make operators' jobs and restakers' lives easier. There are mainly three areas.

One area is the node software. One thing you can do is incorporate a secure runtime. For instance, if you use WebAssembly as your runtime, it comes with native resource limits; it basically acts as a secure sandbox that isolates whatever application logic you're running from the wider host machine environment.

This way, whatever else is happening on the host machine, say on a p2p.org server, won't interfere with your own resource consumption. Your services actually get the resources they need, and fluctuations in load elsewhere can't starve them. That's one area, and it's a lower-level solution.
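
The point about Wasm's built-in limits can be illustrated with a generic sketch. This is not a Wasm runtime; it's a minimal, hypothetical Python stand-in (the function names and budgets are our own) showing the same idea: run each piece of work in its own process with a hard CPU and memory budget so it can't interfere with, or be starved by, anything else on the host (Unix-only).

```python
import multiprocessing
import resource

def _run_with_limits(task, args, cpu_seconds, memory_bytes, results):
    """Child-process entry point: apply hard resource limits, then run the task.
    Loosely mimics what a Wasm runtime's native limits give you."""
    resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))   # CPU-time budget
    resource.setrlimit(resource.RLIMIT_AS, (memory_bytes, memory_bytes))  # memory budget
    try:
        results.put(("ok", task(*args)))
    except Exception as exc:  # e.g. MemoryError when the budget is exceeded
        results.put(("error", repr(exc)))

def run_sandboxed(task, args=(), cpu_seconds=5, memory_bytes=256 * 1024 * 1024):
    """Run `task` in an isolated child process with a fixed resource budget."""
    results = multiprocessing.Queue()
    proc = multiprocessing.Process(
        target=_run_with_limits, args=(task, args, cpu_seconds, memory_bytes, results)
    )
    proc.start()
    proc.join(timeout=cpu_seconds + 5)   # wall-clock guard on top of the CPU limit
    if proc.is_alive():
        proc.terminate()
        return ("error", "wall-clock timeout")
    return results.get() if not results.empty() else ("error", "killed by resource limit")

if __name__ == "__main__":
    # A well-behaved task finishes inside its budget; a runaway one is killed
    # without disturbing anything else running on the same machine.
    print(run_sandboxed(sum, args=([1, 2, 3],)))
```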

The second area is the peer-to-peer network itself: if you're running an off-chain network, you need native failover. Say a compute request comes into your network, whatever it may be, and some nodes are selected to do the work, or they all do it together; that's case by case. You want native failover in the sense that if I call a node to do a particular piece of work and that node falls offline, what happens?

What do I do? I immediately find another set of workers or nodes to do the work for me. In this whole flow, the work is not considered failed and no one gets slashed in the process. It's very common, especially later on when there are more community operators, for nodes to go online and offline.

These kinds of conditions don't need to trigger slashing; you just find another operator or node for that particular compute, and that takes a second. Native failover is very important to implement.
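
Here's a minimal, hypothetical sketch of that failover pattern (the Node class and dispatcher are illustrative, not part of any AVS SDK): if the node first picked for a job is offline, the work is immediately reassigned instead of being treated as a failure that could escalate to slashing.

```python
import random

class NodeOfflineError(Exception):
    """Raised when a worker node can't be reached."""

class Node:
    """Toy stand-in for a worker node in an off-chain network."""
    def __init__(self, name, online=True):
        self.name = name
        self.online = online

    def execute(self, task):
        if not self.online:
            raise NodeOfflineError(self.name)
        return f"{self.name} completed {task!r}"

def dispatch_with_failover(task, nodes, max_attempts=3):
    """Send a task to one node; if it's offline, immediately retry on another.
    Because the request still completes, a node dropping offline stays an
    internal event and never needs to escalate to an on-chain slashing."""
    candidates = list(nodes)
    random.shuffle(candidates)              # simple, unbiased worker selection
    for node in candidates[:max_attempts]:
        try:
            return node.execute(task)
        except NodeOfflineError:
            continue                        # note the miss and move to the next node
    raise RuntimeError("no available node completed the task")

# Node "a" is offline, but the request is still served by another node.
nodes = [Node("a", online=False), Node("b"), Node("c")]
print(dispatch_with_failover("price-feed-update", nodes))
```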

The third area is that, before any result is finalised and goes on-chain, and before you try to decide, 'Should I slash this or not?', there should be a native layer of consensus and/or verification within the network itself.

For instance, say I have 100 nodes running PBFT-style logic on a particular execution result, with the result boiled down to yes-or-no answers. I want the network to achieve fault tolerance on its own: nodes vote, and we check that the majority is actually voting A instead of B. And for the nodes that voted B, we don't necessarily have to slash them.

Sometimes people fall offline or people receive false information. And a lot of times it's not their fault. You don't necessarily have to slash those people. And in this case, basically even if there are malicious players, the malicious intent or actions can be resolved natively within the BFT mechanism. It doesn't have to escalate to a slashing event. 

In such a system, you can design slashing to apply only when, say, someone constantly falls offline and doesn't participate in voting, and you can set that bar very high. This way, especially as more community operators join the pool, we make restakers' lives much, much easier.
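
To make the 'vote first, slash only as a last resort' idea concrete, here's a minimal illustrative sketch (the quorum fraction, strike threshold, and function names are assumptions, not EigenLayer or PBFT parameters): a result is finalised only on a supermajority, minority voters are simply outvoted, and only a node that misses round after round ever gets flagged.

```python
from collections import Counter

def finalize_result(votes, total_nodes, quorum_fraction=2 / 3):
    """Finalise an execution result only if a supermajority agrees on it.
    `votes` maps node id -> submitted answer; minority voters are outvoted, not slashed."""
    if not votes:
        return None
    answer, count = Counter(votes.values()).most_common(1)[0]
    return answer if count >= quorum_fraction * total_nodes else None

def update_offline_strikes(strikes, votes, all_nodes, strike_threshold=100):
    """Track repeated non-participation. Only a node that misses voting round
    after round (a deliberately very high bar) is ever flagged for slashing."""
    flagged = []
    for node in all_nodes:
        if node in votes:
            strikes[node] = 0                       # participated: reset its counter
        else:
            strikes[node] = strikes.get(node, 0) + 1
            if strikes[node] >= strike_threshold:
                flagged.append(node)
    return flagged

# One round with six nodes: four vote "A", one votes "B", one is offline.
votes = {"n1": "A", "n2": "A", "n3": "A", "n4": "A", "n5": "B"}
print(finalize_result(votes, total_nodes=6))                                   # -> 'A'
strikes = {}
print(update_offline_strikes(strikes, votes, [f"n{i}" for i in range(1, 7)]))  # -> [] (n6 gets one strike)
```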

What are the potential risks associated with slashing for AVSs and operators within EigenLayer?

Edgar: In many ways, the risk is the same as in other networks, where you can lose a piece of your stake, depending on the model. But you could imagine a model where maybe only the operator's stake is slashed, so that would be lost or burned.

Actually, it's a good point because we don't necessarily know right now what will happen to the slashed Ethereum. Will it be burned? Will it go to a community pool? That will be interesting to follow. 

For example, maybe you're not active enough, so you don't earn rewards for a while, or you do something very harmful to the consensus of the service you're securing and get a very significant portion slashed.

That could also introduce risks at the consensus layer: if an operator gets a huge portion of its restake slashed at the AVS layer, it could potentially have impacts on the beacon chain as well.

Michael: I feel like no AVS wants slashing to happen, and my personal take is that each AVS will have to try to minimise slashing on its own. There are basically three ways to do it within the network: implement a secure runtime, implement native failover within the P2P layer, and implement a consensus and/or verification mechanism.

Those will basically get you the right result, and the nodes that submit a wrong result don't have to be slashed. If there are malicious parties, they're simply outvoted and blocked out, without escalating to slashing.

So I think, as an AVS, if you do all three of those things, the risk of unintentional slashing, or of any slashing at all, is going to be very, very low, unless something goes seriously wrong with a particular node. I'd personally say all the AVSs are pushing in this direction, because if you get slashed, people will have very little confidence in restaking with you next time.

How do users select operators to stake their Ethereum with, and what factors should they consider in this decision-making process?

Mohak: First of all, we have been thinking a lot about how we give users this ability because, as you know, csETH is fungible, right? We want to make sure it remains fungible. You can't have some users who want to vote for AVS X and operator Y, and then you have another set of users who want to vote for AVS A and operator B. 

That's the reason we took quite some time to build this modular architecture. Our focus is that the core protocol becomes immutable - we don't have to make massive changes in the core protocol as new updates come in, like upgrades with EigenLayer and all of that. 

The effect of that is that we can make it highly programmable. We've been waiting for more specs; it's a little bit early, but what we want to make sure is that once the capability to choose AVSs and operators comes out, we give that ability to users. It could be, for example, a direct vote by csETH holders.

There are lots of creative approaches we've been thinking about, but until the specs and more information are out, it's a little early to settle on how we'll do it. As I said, we want to make it very user-centric while helping users in the decision-making process.

Vlad: From my perspective, there are only two points here that make sense for users, and users usually take them into consideration. The first is trust and the second is APR. 

Users often choose operators based on brand, performance, a non-slashing history, and a long record of smooth upgrades across different networks. And particularly for EigenLayer, the number of AVSs onboarded by a particular operator will matter because, at least in the beginning, it will significantly affect APR.
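
As a back-of-the-envelope illustration of that point (all figures and the commission rate are hypothetical), the APR a restaker sees through an operator is roughly the sum of the reward streams of every AVS that operator has onboarded, minus the operator's cut:

```python
def restaker_apr(avs_reward_rates, operator_commission=0.10):
    """Naive aggregate APR: sum per-AVS reward rates (as fractions of the
    restaked principal) and subtract the operator's commission. Illustrative only."""
    return sum(avs_reward_rates) * (1 - operator_commission)

# An operator running five AVSs at ~1% each vs. one running only two.
print(f"{restaker_apr([0.01] * 5):.3%}")   # 4.500%
print(f"{restaker_apr([0.01] * 2):.3%}")   # 1.800%
```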

Yannick: Maybe one point to add - I think you guys are using SSV and Obol as well. I think this is also where potentially something like distributed validator technology (DVT) can be quite helpful. 

We ran a cluster with P2P a while ago, for example, and as an LST or LRT project you could do something similar: use Obol or something like it to create a cluster of multiple very good operators who run those validators together, and then you also get the additional fault tolerance that DVT allows.

Mohak: There's one more thing to add, and it's the reason we've kept the DVT module modular: even though DVT has already evolved a lot, I see massive updates coming in the DVT space itself, for example cluster immutability and a lot of other things, and potentially DVT for AVS operators as well. That's another advantage of keeping it modular: we won't have to change the core protocol - we'll be able to support whatever changes come in DVT.

From an operator's perspective, what considerations go into choosing which AVSs to run on EigenLayer?

Michael: For sure. On my side this has been an ongoing process of talking to various operators and people who want to run a node for a decentralised trust network.

To me, it seems like it's simply risk and reward: people want to make sure they understand how your slashing conditions work, because nobody wants to get slashed in any case. On top of that, and maybe you guys can speak more to this, once EigenLayer goes live, each operator will be running many different AVSs together.

Compared to, say, a pure community model, where one community member has the hardware resources, and the right, to run one node for one project, the dynamic here is a bit different: operators will actually be running many projects in their cloud or bare-metal environments.

So I feel EigenLayer operators may be less sensitive to the reward from any single AVS, because even if one reward is lower, they can just run many things side by side.

Edgar: I think it all comes back to the risk-reward ratio. You need to understand the AVSs you want to secure and what they're providing to the market.

For example, we've seen a lot of AVSs trying to service wallets. The question you need to ask yourself is what value this AVS is actually providing to the space. Do they have competition? What is their competitive advantage? It all comes down to how much revenue this AVS is going to generate for my restaker community.

And what you also have to look at is the heaviness of the infra setup and the slashing risks, and basically, the combination between the two. You have to make an overall risk assessment of running these AVSs. So, you make sure that you're not going to get slashed. 

And I think that if an AVS can provide good revenue with mitigated risk, most operators will run it, basically.

Yannick: I fully agree with what you just said. For us, it's also a question of how good the community is and how responsive and experienced the team is, because especially early on, when something new launches, you always have some growing pains; there will always be issues.

You already mentioned the complexity of running the infrastructure and the risk ratio of slashing versus the actual rewards. Beyond that, it's the general responsiveness of and collaboration with the team: we like to onboard protocols that we believe in and see potential in, and if there's good communication with the team and they go about things the right way, that makes everyone's life a lot easier, especially from an operator's perspective.

Vlad: I wanted to partially disagree with the previous speakers. There were statements that we should take into consideration all the risks around a particular AVS, the team's background, and so forth.

Normally, launching on a new network is the subject of careful consideration: you need to research it and understand whether you believe in it and whether you want to allocate resources to it. But for AVSs specifically, our strategy will likely be to build a launch machine here.

Because while slashing isn't enabled in EigenLayer, the risk is near zero, so it's a very good time to launch as many AVSs as possible and let every team test their approach while it's essentially risk-free. Then we'll evaluate and decide whether we want to go further with each of these AVSs or not. So we will likely lean into this strategy.

Could you discuss the level of complexity involved in managing AVSs as an operator within the EigenLayer ecosystem?

Mohak: I think for the first year or so there will definitely be tons of airdrops. One analogy I was thinking about is the protocol Solidly, which launched on Fantom a few years back; I think Edgar was running it.

A lot of protocols were fighting to get a piece of that NFT, similar to how AVSs might compete with each other with bigger and bigger airdrops to get buy-in. In the short term, that's the scenario, but in the long term the economics has to work out for restaking, because generating revenue in crypto is not easy.

A fellow panellist gave a good example: on dYdX, fees come in the form of stables, and on GMX, in the form of ETH. But both protocols took quite a few years to find the right product-market fit and generate enough revenue for stakers, which is not easy.

For example, for an oracle, even Chainlink, it was very hard to generate much revenue in its first year or so unless fees were paid in the native token. Right now, the only sector where revenues have been fairly high is perpetuals; outside that, it's comparatively low. But again, that's burning.

That's the challenge I see: once these airdrop frenzies are over, how do we find sustainable yields for people to rehypothecate their stake so that it makes sense?

Looking ahead, what are the key challenges and opportunities you see ahead in this ecosystem for the coming years? 

Michael: I think if more AVSs are what we're all looking for, there are two challenges. One is that, right now, most of the AVSs are from major companies or big, established teams.

You don't see many AVS developers in the wild just yet. That's partly because the EigenLayer ecosystem is still at a very early stage, but also because, overall, people in the web3 ecosystem know how to develop smart contracts, not how to develop a network.

People know how to build on a layer 1, but not how to be a layer 1. Building an AVS is a different skill set from building a smart contract application, and generally, people in the industry are not too familiar with booting up an off-chain network. There needs to be a lot of tooling and education before the developer community can build more AVSs.

This is going to be an ongoing process, and it will take a long time until there are hundreds of AVSs launching on EigenLayer. The other challenge, if we do get to hundreds of AVSs, is how each of them gets any attention at all. If they're doing airdrops and everyone is just farming them, viewing the tokens as yield without caring about the projects behind them, that defeats the purpose for a particular AVS because it still isn't getting any real attention.

Going forward, AVSs will need to come up with more innovative ways to engage with the restaking crowd than a simple airdrop, which will become less effective when there are, say, 40, 50, or 100 AVSs out there.

Yannick: From our side as an operator, the first challenge we see is the general complexity of AVSs and of onboarding them. That's definitely one risk and one challenge.

The second is the current delegation model. Right now, if we run an AVS for a liquid restaking protocol and also run that same AVS for ourselves as operators, we need to run two instances of that AVS. So it adds a lot of complexity on that side as well.

When it comes to opportunities, one thing I'd personally be excited about is an AVS accepting a collection of tokens at some point, so not just ETH as security, but other forms of security and collateral as well.

Maybe there's more innovation to be had on that side. But overall, we're still quite early, and we're keeping our eyes on everything to see what projects like Blockless and others come up with and the exciting things they're doing on the AVS side.

Closing thoughts

We're immensely grateful to everyone who participated in today's X (Twitter) Space. A special thanks to the speakers: your presence and active engagement significantly contributed to the depth and richness of the discussion. Keep an eye out for more discussions like this!

ClayStack is a modular DVT-based Liquid Restaking Protocol currently live on the Ethereum mainnet. With ClayStack, you can now mint csETH by depositing native ETH and LSTs. csETH holders will automatically accumulate CLAY Points and EigenLayer Restaked Points, unrestricted by EigenLayer deposit caps. All while earning rewards on your staked assets.