Dev's Journal 5

Reviewing Audit Results, Learning from Formal Math, and Olympus Bonds

I spent a good chunk of the past week traveling with my dog up the east coast of the US. Thanks to everyone who suggested content for the 20+ hours of driving. That said, I did more thinking and less coding this week, so the newsletter is going to be somewhat abridged. Today, I'm going to cover reviewing audit results, learning from formal mathematics, and some thoughts about the future of Olympus bonds.

Reviewing Audit Results

In my book, there are two types of major vulnerabilities that show up during a contract audit.

The first are tiny slip-ups that would be major errors if missed but, in general, reflect a minor typo or oversight rather than a design flaw. These are fixed by modifying a single line of code or, at most, a particular function. Given enough time, you would probably stumble across these yourself, but an audit from a fresh set of eyes short-circuits that process and helps you get to production faster. I mostly make these when rushing to complete a project and shrug them off.

The second are gut punches: brutal errors in the core contract logic or design that you thought about for a while and still missed. You generally have to rework the entire contract logic, or at least a significant component of it, to resolve these, and they can leave you feeling like an idiot for a whole day until you do. Most of the time, auditors are able to provide a recommendation to fix the issue, but sometimes the problems are too gnarly for that. Even when there is a recommendation, the auditor has spent a fraction of the time you have with the codebase and may have missed some secondary or tertiary effects of the issue. While there are a lot of findings in an audit, this is where auditors really earn their keep. If you didn't see these issues in the design by this point, odds are you wouldn't have caught them prior to production.

Although it might bring my technical self-esteem down a rung each time one of these is found, I'm grateful in the long term because it's much better to be wrong and fix it than to be wrong and get rekt. This is also where you learn the most. For example, I've spent a lot of time avoiding re-entrancy bugs and ensuring the appropriate access control is in place to avoid loss of funds on contracts, but I hadn't thought much about denial-of-service attacks. I was aware of them, but they weren't usually top of mind. That won't be the case anymore 😊.

Learning from Formal Mathematics

Mathematics is a rigorous discipline based on deductive reasoning and developed over a base set of axioms. However, it wasn't always treated this way. Originally, it was viewed as an extension of other natural sciences, like physics, but as more abstract concepts began to be developed, mathematicians began formalizing it as a self-contained discipline. Several individuals worked on developing a solid foundation for mathematics by rederiving existing knowledge from a base set of axioms using formal logic, but the pinnacle work in this area was published in 1910: Principia Mathematica by Alfred North Whitehead and Bertrand Russell (not to be confused with the physics work by Isaac Newton).

There are minutiae and different schools of thought about how the theory should be developed (PM is based on a logicist approach), but the point is that mathematicians have spent a long time developing logical arguments. As a result, they are taught to construct arguments systematically, so that their claims show a consistent application of deductive logic from known results to the new results they develop. Developers have a lot they can learn from this, namely in how they present and articulate their designs. Mathematical claims are assumed to be false until proven true; therefore, it is the researcher's job to lay out their argument so that critical readers are convinced.

Specifically, a researcher seeking to prove a new result will set up their argument by describing its structure, then provide the definitions and assumptions used to derive the result, establish intermediary or subsidiary findings as lemmas, state the result(s) as a theorem, and finally prove the result. They may additionally provide connections to practical problems, results of computations, and references to other results in the field.
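
To make that structure concrete, here is a minimal LaTeX skeleton of the kind of write-up I have in mind; the environment names and statements are placeholders, not excerpts from the actual paper.

```latex
\documentclass{article}
\usepackage{amsmath, amsthm}

% Standard theorem environments mirroring the argument structure described above
\newtheorem{definition}{Definition}
\newtheorem{assumption}{Assumption}
\newtheorem{lemma}{Lemma}
\newtheorem{theorem}{Theorem}

\begin{document}

\begin{definition}[Market price]
Placeholder definition of the pricing function and its parameters.
\end{definition}

\begin{assumption}[Efficient market]
Placeholder assumption used to derive the result.
\end{assumption}

\begin{lemma}
A subsidiary finding that the main theorem depends on.
\end{lemma}
\begin{proof}
Deductive steps from the definitions and assumptions.
\end{proof}

\begin{theorem}[Main result]
The claim about the pricing model's behavior.
\end{theorem}
\begin{proof}
Argument built from the lemma and assumptions above.
\end{proof}

\end{document}
```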

I talked about this some when discussing formal specification and verification of systems in Dev's Journal 2, and I've continued diving deeper into this area with the goal of adding more rigor to the way I design and develop systems. As such, I've been working on a paper that formally defines the pricing model used in the OlympusDAO bond markets and makes some claims about its behavior. Part of my goal in sharing this is to put some pressure on myself to finish it. Another is to motivate others to challenge themselves to do the same. Since I haven't done this type of writing in a while, I've found this advice helpful for structuring the arguments.

The Future of Olympus Pro and Bonds at OlympusDAO

A Request For Comment (RFC) was posted on the OlympusDAO forum this week around spinning off the new permissionless Olympus Pro system (aka Bonds V3) as its own protocol. As one of the developers of the new system and having thought about this a bit, I believe this is a good idea for both Olympus and the OP team moving forward. Here are the points that most resonate with me:

  1. The new system is essentially a permissionless OTC market creator. At its core, it enables anyone to create a market to exchange one asset for another without requiring external liquidity. Viewed through this lens, there are a number of different products that can be built on top of it, not just the "Olympus-style" Bonds as we know them (I call these Repeating Dutch Auctions or Continuous Dutch Auctions; CDA was previously used by Paradigm to refer to a different mechanism, so that name may be less clear). Examples include a Dutch auction NFT mint, token launches, collateralized bonds, etc. However, a number of these products are not related to Olympus' core mission of creating a decentralized reserve currency; therefore, it doesn't make sense for Olympus to spend cycles (or pay) to develop these kinds of solutions.
  2. As its own protocol, the Bond system could operate like a "hyperstructure", with very low or zero fees at the platform level, and allow front-end operators to charge referral fees on markets they route traffic to, similar to Liquity. A neutral protocol providing infrastructure for this type of financial primitive allows tokenized payouts for equivalent products to concentrate in the same asset. Put differently, letting anyone build on the protocol with no fees removes the risk of forks/vampire attacks and creates network effects for tokenized payouts, such as deeper liquidity. Similar to UniswapV2, the permissionless nature of the system makes it a potential target for forking; instantiating it as a neutral protocol with no/low protocol-level fees removes two motivating factors to fork it and focuses on network growth. The current 3.3% fee model is not likely to be competitive in the future. Other offerings that achieve similar outcomes to OP have come out since that structure was put in place, e.g. Porter Finance and Solv Protocol, and others are being built, e.g. Concave Finance. Therefore, past/existing revenue streams from OP should not be taken as a given for Olympus. Significant work will need to be put into continued innovation on front-end applications (as one of several in the larger ecosystem), business development, and ongoing differentiated services (such as white-glove operations and marketing) for partners to continue justifying that type of fee in the market. Given market conditions, this will likely require more near-term investment than the revenue generated by the current service (not including development of new products). A separate Bond Protocol allows the team to raise funding to support these efforts instead of Olympus paying for them.
  3. The protocol has been developed with Olympus' requirements in mind. All of the types of bonds that Olympus issues (reserve, inverse, and OHM-OHM bonds, with both fixed-term and fixed-expiration vesting) can be handled by the system. Additionally, the "callback" functionality of the Bond system would allow Olympus to update custom payout-handling logic as needed without changes to the core protocol. Since Olympus operates its own interface for bonds and the protocol will not charge a fee at the infrastructure level, Olympus would pay nothing to use the system. Essentially, Olympus has no operational disadvantage from spinning off the system and retains ownership in the new protocol, so it benefits from the upside of growth in existing or new market offerings.

Bond Pricing Parameter Optimization

Olympus-style bonds have a number of parameters that dictate how the pricing mechanism behaves. For Olympus Pro, the team has worked with many different types of contracts and developed intuitions about the best way to parameterize markets based on expected participation by users and simulations run for each market. Recently, I've been thinking about how to analyze and determine optimal parameters for market pricing. While the values the team has observed in practice over time are a great starting point, they are probably not optimal. So I've been trying to come up with methods for deriving better parameter values from the inputs for a market, especially since the new bond system will be permissionless (though that doesn't mean support won't be available to assist with setting up markets).

My basic intuition is that external data, such as the trading volume, volatility, and liquidity of a token or token pair, may be useful for estimating parameters for some markets. More specifically, it may help with setting appropriate capacities over a duration and with determining how efficient the market is likely to be. These types of analyses could be built into the off-chain components of the system to augment a specific front-end application, providing intelligent market suggestions and a better market-creator experience.

One example is appropriately configuring and bounding the speed at which price decays. The debt decay interval in an OTC market determines how fast price decreases in the Dutch auction-style pricing system: the shorter the debt decay interval, the faster price decreases, and vice versa (I've started thinking about this concept as the velocity of price decay). See the chart below for reference. It shows a market decaying from its starting price to 0 over 3 days (blue), 5 days (red), and 7 days (green). The purple line is a notional value for the minimum price of the market, which acts as a hard floor.
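
To make the velocity idea concrete, here's a rough sketch in Python. It assumes a simple linear decay from the starting price toward zero over the debt decay interval with a hard minimum price, which is a simplification for illustration rather than the exact on-chain pricing formula.

```python
# Rough sketch of price-decay "velocity" for different debt decay intervals.
# Assumes simple linear decay toward zero with a hard minimum price;
# the actual on-chain pricing formula is more involved.

def decayed_price(start_price: float, min_price: float,
                  decay_interval_days: float, t_days: float) -> float:
    """Price after t_days, decaying linearly to 0 over decay_interval_days,
    but never below min_price (the hard floor)."""
    raw = start_price * max(0.0, 1.0 - t_days / decay_interval_days)
    return max(raw, min_price)

start, floor = 100.0, 30.0
for interval in (3, 5, 7):  # days, like the blue/red/green curves
    prices = [decayed_price(start, floor, interval, t) for t in range(0, 8)]
    print(f"{interval}-day interval:", [round(p, 1) for p in prices])
```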

Assuming there is sufficient external liquidity and an efficient market, an OTC market set up for instant swaps (no vesting) would be purchased at a discount that offsets the gas fees for making the purchase. The premise is that different actors (likely bots) will compete to arbitrage the market against another liquidity source and will compete away the arbitrage profit.
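
As a back-of-the-envelope illustration of that equilibrium (all numbers here are made-up assumptions, not protocol values): an arbitrageur only buys once the discount covers their round-trip costs, so in an efficient market the discount should settle near that breakeven.

```python
# Breakeven discount for arbitraging an instant-swap (no vesting) market.
# All numbers are illustrative assumptions.

def breakeven_discount(trade_size_usd: float, gas_cost_usd: float,
                       sell_slippage: float) -> float:
    """Minimum discount (as a fraction) at which a buy-and-immediately-sell
    arbitrage breaks even: it must cover gas plus slippage paid on the exit."""
    return gas_cost_usd / trade_size_usd + sell_slippage

# e.g. a $10k purchase with $15 of gas and 0.3% slippage on the exit swap
print(f"{breakeven_discount(10_000, 15, 0.003):.2%}")  # 0.45%
```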

Looking at the two assumptions I made, the first is that there is external liquidity for the asset being purchased. If it has a lot of liquidity, you could set a faster decay speed with confidence that market actors will not let the price drop below the target, which, all else being equal, lets you sell more capacity over a given period of time. If there is little liquidity, you may need a slower decay interval or lower OTC market capacities, so you don't flood the market with more than the external liquidity can absorb. Ideally, not everyone buying from a market will sell immediately, but in a worst-case, bot-vs.-bot scenario, that's what we need to consider.
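
One way I've been thinking about sizing capacity against that worst case (a heuristic of my own, not something built into the system): cap the amount sold per interval so that, even if every buyer immediately dumps into the external pool, the price impact stays within a tolerance. A sketch assuming a constant-product pool:

```python
# Heuristic for sizing per-interval capacity against external liquidity.
# Assumes a constant-product (x*y=k) pool and worst-case immediate selling;
# this is my own sketch, not part of the bond system itself.

def max_sell_for_impact(pool_token_reserve: float, max_price_impact: float) -> float:
    """Largest token amount that can be sold into an x*y=k pool while keeping
    the spot-price impact at or below max_price_impact (e.g. 0.02 for 2%)."""
    # Selling dx tokens moves spot price by a factor of (x / (x + dx))^2,
    # so require (x / (x + dx))^2 >= 1 - max_price_impact.
    x = pool_token_reserve
    return x * (1.0 / (1.0 - max_price_impact) ** 0.5 - 1.0)

# e.g. 1,000,000 tokens of payout-asset liquidity and a 2% impact tolerance
print(round(max_sell_for_impact(1_000_000, 0.02)))  # roughly 10,150 tokens per interval
```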

In the second assumption, we require an efficient market. Here, that means there are actors who will ensure price parity across different liquidity sources for an asset, since they are incentivized to do so by arbitrage profits. It could also mean there are active traders who want good deals and will compete with each other for discounts (essentially the same result).

The current practical estimate for the debt decay interval is 5 times the deposit interval. In other words, if you have a deposit interval of 1 day, the debt decay interval will be 5 days. Effectively, this means the market price will decay by 20% over the course of a deposit interval. However, we introduce a global minimum of 3 days for the debt decay interval to avoid decaying too quickly (no more than roughly 33% in a day), on the theory that normal users probably can't react to or efficiently utilize a market decaying faster than that, whereas bots can react much more quickly. The debt decay interval needs to be analyzed in the context of how efficient the market is: if it is not efficient, steep price drops may happen without market action and the issuer will get worse execution.
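
Sketching that rule of thumb in code (the 5x multiple and 3-day floor are as described above; the decay percentages assume the same simplified linear decay as earlier):

```python
# Rule of thumb: debt decay interval = 5 * deposit interval,
# with a global floor of 3 days so price never decays faster than ~33%/day.
# Decay fractions below assume the simplified linear decay model.

MIN_DEBT_DECAY_DAYS = 3.0
DECAY_MULTIPLE = 5.0

def debt_decay_interval(deposit_interval_days: float) -> float:
    return max(DECAY_MULTIPLE * deposit_interval_days, MIN_DEBT_DECAY_DAYS)

for deposit in (0.25, 0.5, 1.0):  # 6h, 12h, and 1-day deposit intervals
    decay = debt_decay_interval(deposit)
    per_deposit = deposit / decay  # fraction of price decayed per deposit interval
    per_day = 1.0 / decay          # fraction of price decayed per day
    print(f"deposit {deposit:>4} d -> decay interval {decay} d, "
          f"{per_deposit:.0%} per deposit interval, {per_day:.0%} per day")
```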

One way to increase the liquidity directed at OTC markets, especially instant-swap variants, would be to integrate them into a DEX aggregator or an off-chain solver like CoWSwap (likely better than a regular aggregator). The same concepts can be applied to markets with a vesting period, where the set of actors willing to participate changes based on the period of illiquidity.

Currently Reading