Oracles in DeFi

15 Aug 2023, 16:01
[photo: /LiquityProtocol/status/1691480138708324352/photo/1]

In the DeFi universe, smart contracts make the rules, executing transactions based on data. But where does this data come from? How do smart contracts gather real-world, off-chain information? This is where oracles flex their muscles, connecting the on-chain and off-chain worlds. A post on some of the approaches taken in the industry, and how they tie into choosing the right oracle for Liquity v2 🧵

The three pillars of modern oracles: push-based, low-latency, and TWAP.

1) Push-based oracles (e.g. @chainlink): push-based oracles follow a simple mechanism: they regularly "push" price data on-chain at fixed intervals, or when the price crosses a deviation threshold. @chainlink is a well-known example. At predetermined time intervals, or if the price of a given asset deviates by more than 0.5% since the last update, Chainlink pushes the latest market price to the blockchain. Chainlink uses a decentralized network of nodes to collect and feed data to smart contracts; its proxy architecture also ensures smooth upgrades.

However, push-based oracles come with tradeoffs. Chainlink's reputation-based system, where only whitelisted nodes contribute data, supports the accuracy and security of the feed, but also raises questions about decentralization. There can also be delays in price updates, leading to latency issues (e.g. if the ETH price moves by 0.49%, no update is triggered, and the next one may only arrive after an hour).

2) Low-latency / pull-based oracles (e.g. low-latency Chainlink, @redstone_defi, @PythNetwork): as the name suggests, low-latency oracles promise near-instantaneous data updates. They require keepers or users to first pull the price, and then send the price data to the dApp as part of the transaction the user wants to perform. For products requiring real-time data, low-latency oracles are a godsend. Sub-second updates can be crucial in volatile markets, or for specific financial instruments (e.g. derivatives, margin trading). The reduced lag also shrinks arbitrage opportunities within the protocol.

While offering quick data retrieval, low-latency oracles can complicate the user experience due to their pull-based approach (especially the front-end experience and composability). Furthermore, elements of centralization in these oracles raise concerns about their 'true' decentralization.

3) Time-Weighted Average Price (TWAP) oracles (e.g. @Uniswap v3): TWAP oracles, like Uniswap v3's, offer a moving average of an asset's price over a specified period. Because TWAP oracles rely on liquidity depth, their prices are smoothed and resistant to short-term manipulation and extreme volatility.

However, price lag and liquidity guarantees are a concern. By their very nature, TWAPs don't give real-time prices, but an average; this time lag, especially in rapidly moving markets, can lead to suboptimal executions. There is also the question of liquidity guarantees: what happens if LPs pull their liquidity from Uniswap v3 to v4, while an immutable protocol has opted for a TWAP oracle based on Uniswap v3?

So, with all this considered, what are our criteria for choosing the right oracle for v2? In the next post, we'll delve into Liquity v2 and its oracle objectives, outlining the challenges that come with choosing the right oracle.
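To make the TWAP model concrete, here is a minimal sketch of deriving a time-weighted average price from a Uniswap v3 pool's tick accumulator. It assumes web3.py and an already-configured `pool` contract handle (not shown); the one-hour window is an arbitrary example, and token decimal scaling is ignored.

```python
# Sketch: deriving a TWAP from a Uniswap v3 pool's tick accumulator.
# Assumes `pool` is a web3.py contract handle for some v3 pool;
# this is illustrative only, not Liquity code.

TWAP_WINDOW = 3600  # seconds; a 1-hour window, arbitrary example choice

def uniswap_v3_twap(pool, window: int = TWAP_WINDOW) -> float:
    """Return the time-weighted average price (token1 per token0, raw units)."""
    # observe() returns the cumulative tick at each requested secondsAgo.
    tick_cums, _ = pool.functions.observe([window, 0]).call()
    # Average tick over the window is the difference divided by elapsed time.
    avg_tick = (tick_cums[1] - tick_cums[0]) / window
    # Uniswap v3 defines price = 1.0001 ** tick (before decimal scaling).
    return 1.0001 ** avg_tick
```

Because the accumulator can only be moved by trades that persist across blocks against real liquidity, manipulating the average over a long window is expensive; that is the manipulation resistance described above.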

Same news in other sources (8)
Liquity USD
Twitter
18 Aug 2023, 16:23
📢 Low-latency oracles & planning around contingencies for v2 🔵

When on-chain prices reflect real-world prices near-instantaneously, the window of opportunity for front-runners shrinks dramatically. Let's dive into the two main approaches low-latency oracles take to mitigate front-running 👇

[photo: /LiquityProtocol/status/1692572917773615185/photo/1]

Self-serve: users pull the price themselves. This approach is especially interesting as prices can be published with sub-second frequency; in the worst case, the latency is one block time (12 seconds). Self-serve oracles use the same set of node operators and the same multi-layered data aggregation mechanism as the existing providers' reference feeds (e.g. @chainlink, @redstone_defi, @PythNetwork), but with low latency!

Next up is the deferred settlement approach 👇

[photo: /LiquityProtocol/status/1692572917773615185/photo/2]

Deferred settlement: a keeper pulls the price data from the oracle after the user commits to an operation. This introduces a delay between the user's commitment to the operation and its finalization by the keeper. The delay essentially eliminates front-running, as it becomes challenging for malicious actors to anticipate price movements within this timeframe.

While this approach counters front-running, it comes with a downside: it reduces composability, the seamless interoperability between different applications within DeFi. The need for two transactions (user commitment and keeper finalization) breaks composability on the stablecoin side of v2.

Both approaches offer solutions to front-running, but what happens if the primary oracle fails or freezes? 👇

v2 needs to handle potential oracle failures automatically, since we plan to make the protocol as immutable as it can be. Having designed quite complex fallback logic around our backup @WeAreTellor oracle in Liquity v1, we have learned some lessons. If we do decide to have a fallback oracle in case the primary one fails or freezes, we want to favor simplicity! What does this mean exactly?

- Simple fallback logic: if the primary oracle fails, fall back once (and don't return)
- Simple conditions for fallback: e.g. if the oracle has a frozen price for >12 hours, or returns bad data, treat it as failed
- Protect against technical failures, not manipulation

Despite this, determining usable signals of malfunction in low-latency oracles is challenging. With low-latency oracles, where a data blob is pulled on-chain and verified by the oracle's contract, detecting an oracle failure becomes tricky, because a data fault can look similar to an oracle malfunction.

What about circuit breakers? 🤔

[photo: /LiquityProtocol/status/1692572917773615185/photo/3]

@GyroStable's innovative approach uses @chainlink as a primary oracle. The protocol pauses minting & redemptions if Chainlink prices jump too much, or deviate from the median of signed prices. The problem here is the need for human intervention to set a new oracle and unpause the system, which doesn't align with the design we have in mind for v2.

In essence, our guiding light on contingency will be simplicity, paired with robust automation. As our quest for the ideal oracle solution continues, we're prioritizing scalability, decentralization, and unwavering reliability.
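As an illustration of the "fall back once, and don't return" rule above, here is a minimal sketch. It assumes each feed exposes a `latest()` returning `(price, updated_at)`; the names, the freshness check, and the error handling are illustrative assumptions, not v2 code.

```python
# Minimal sketch of a one-way oracle fallback, per the rules above.
# `primary` and `fallback` are assumed to expose latest() -> (price, updated_at);
# names and thresholds mirror the post, not actual v2 code.
import time

MAX_STALENESS = 12 * 60 * 60  # seconds; "frozen price for >12 hours"

class PriceFeed:
    def __init__(self, primary, fallback):
        self.primary = primary
        self.fallback = fallback
        self.using_fallback = False  # one-way switch: never flips back

    def _is_good(self, price: float, updated_at: float) -> bool:
        fresh = time.time() - updated_at <= MAX_STALENESS
        return fresh and price > 0  # "bad data" check kept deliberately simple

    def fetch_price(self) -> float:
        if not self.using_fallback:
            price, updated_at = self.primary.latest()
            if self._is_good(price, updated_at):
                return price
            self.using_fallback = True  # fall back once, and don't return
        price, updated_at = self.fallback.latest()
        if not self._is_good(price, updated_at):
            raise RuntimeError("both oracles failed")  # last resort: revert
        return price
```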
We're eager for your insights and suggestions; engage with us on our Discord's #v2channel, and don't hesitate to shill us some innovative oracles! #StablecoinRevolution
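For intuition on the deferred settlement flow described in the post above, here is a hypothetical sketch of the two-transaction pattern. The `commit`/`finalize` names, the in-memory store, and the minimum delay are all illustrative assumptions, not v2 code.

```python
# Hypothetical sketch of deferred settlement: the user commits first,
# a keeper finalizes later with a price pulled *after* the commitment.
import time

MIN_DELAY = 60  # seconds between commit and finalize; arbitrary example

commitments = {}  # request id -> (user, operation, commit timestamp)

def commit(request_id: str, user: str, operation: dict) -> None:
    """Tx 1 (user): record the intent without knowing the final price."""
    commitments[request_id] = (user, operation, time.time())

def finalize(request_id: str, oracle_price: float) -> dict:
    """Tx 2 (keeper): settle at a price the user could not foresee."""
    user, operation, committed_at = commitments.pop(request_id)
    if time.time() - committed_at < MIN_DELAY:
        raise RuntimeError("too early to settle")
    # Settlement uses a price pulled after the commitment; this is what
    # closes the front-running window, at the cost of composability.
    return {"user": user, **operation, "price": oracle_price}
```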
Liquity USD
Twitter
18 Aug 2023, 02:29
Front-running solutions for v2
Liquity USD
Twitter
17 Aug 2023, 17:42
📢 The challenges of front-running, and solutions for v2 🔵

Front-running occurs when a trader sees the market price move and exploits that information with an on-chain operation, before the on-chain price has caught up with the market price. For v2, front-running is an essential challenge to solve, as it can be an issue for both the leveraged operations and the stablecoin operations. It can work in a couple of ways:

[photo: /LiquityProtocol/status/1692230479611646409/photo/1]

Leverage position front-running: a user can open a position as a front-run to an anticipated price rise, then quickly close the leveraged position after the price updates on-chain, and pocket leverage-amplified profits.

Leverage loss evasion: if a user already has a position open, they can close it as a front-run to an expected price drop, immediately reopen it after the price updates on-chain, and avoid a leverage loss.

On the stablecoin side, the issue persists: users can front-run price updates with stablecoin mint and redeem operations, extracting ETH from the reserve (upon minting) or extra stablecoins (upon redeeming). However, the extractable value here is smaller than on the leveraged operations, as it is not amplified.

So what are some solutions? 👇

Addressing front-running challenges

[photo: /LiquityProtocol/status/1692230479611646409/photo/3]

Though numerous strategies exist to tackle front-running, given our criteria for v2 we've identified four primary methods to address these concerns in our system. Let's dive into each 👇

1) Minimum delay: this introduces uncertainty for attackers attempting to front-run by sandwiching operations around price updates. With a delay in place, attackers won't know the exact price they will be executing at when they close their position, so their front-running is no longer risk-free. However, this solution is only effective in one direction: attackers could still evade a leverage loss by quickly closing their position ahead of an anticipated price drop and reopening it from a different account.

To mitigate loss evasion, a two-step commit & confirm process would work. For example, to close their position, a user must send one transaction committing to the close, and then, after some minimum delay, a second transaction to actually close it. They could be penalized for not closing in time, to remove excess optionality. The small delay between commit and confirm introduces price uncertainty and makes loss evasion risky. However, this breaks the composability of closing positions by requiring two transactions. 👇

2) Pausing during high volatility: by pausing operations during periods of high volatility, the worst front-running opportunities can be prevented without interrupting regular system functionality. Pausing the system during extreme volatility, even for only a few seconds per week, can prevent the worst front-run attempts. In v2's case, pausing would work while the system is in a healthy range (i.e. when the backing ratio is not low). 👇

3) The 'worst' price approach: an interesting strategy to counter front-running, pioneered by @synthetix_io. It involves two oracle prices: the current price and a lagged price. The key idea is to pick whichever price is worse for the user in each operation. When a user opens a position, the system uses the maximum of the current and lagged prices. Conversely, when they close their position, the system uses the minimum of the two.

By using the 'worst' price for both opening and closing positions, an 'effective fee' is established, reducing the profits front-runners could make by exploiting price differences. This approach has a nice property: the effective fee is proportional to volatility. Let's look at the graph below to see how this works:

[photo: /LiquityProtocol/status/1692230479611646409/photo/2]

During low volatility (first half of the graph), the gap (delta) between the ETH price and the lagging ETH price (constant lag) remains relatively small. When the market gets more volatile (second half), this gap widens significantly!

Why is this important? 🤔 Front-runners thrive on volatility: the more prices swing, the larger their potential profits. The effective fee created by the worst-price approach follows the same pattern. It grows as volatility increases, hitting front-runners where it hurts the most: their potential profits. 👇

4) Low-latency oracles: the last approach we see for mitigating front-running is low-latency oracles, through either the self-serve approach or the deferred settlement approach.

Tune in tomorrow as we conclude our oracle series, delving into both of these approaches and our oracle contingency plans 👀
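Here is a minimal sketch of the worst-price rule described above. The function name and the example numbers are made up for illustration; this is neither Synthetix nor v2 code.

```python
# Sketch of the "worst price" rule: quote two prices and always pick
# the one less favourable to the user; the gap between them acts as a
# volatility-scaled effective fee.

def execution_price(current: float, lagged: float, opening: bool) -> float:
    """Pick the worse of the two oracle prices for the user.

    opening=True  -> user gains exposure, so charge the higher price.
    opening=False -> user exits, so pay out at the lower price.
    """
    return max(current, lagged) if opening else min(current, lagged)

# Example: ETH at 1850 now, 1840 a short lag ago.
open_px  = execution_price(1850.0, 1840.0, opening=True)   # 1850
close_px = execution_price(1850.0, 1840.0, opening=False)  # 1840
# The 10-point spread is the "effective fee"; the more volatile the
# market, the wider the current/lagged gap, and the larger the fee.
```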
Liquity USD
Twitter
17 Aug 2023, 12:46
The oracle criteria for Liquity v2
Liquity USD
Twitter
16 Aug 2023, 16:11
📢 The oracle criteria for Liquity v2 🔵

Given their pivotal role in providing reliable, timely, and tamper-proof external data, choosing the right oracle for Liquity v2 is of paramount importance. Let's break down the key aspects to consider when choosing the right oracle for v2, and contextualize them with the incumbent providers in the space. 👇

Decentralization and immutability: a decentralized oracle minimizes single points of failure and reduces the risk of data manipulation. Immutability ensures that once data is added, it cannot be changed, guaranteeing its integrity. For the level of decentralization we aim for in v2, an oracle with strong decentralization is preferred, i.e. with as little admin control as possible.

Compatibility: Liquity v2's ideally immutable nature necessitates oracles with guaranteed endpoints that do not change.

Latency: in DeFi protocols that have an element of leverage (like v2), real-time or near-real-time data is crucial. High latency can lead to exploitable inefficiencies, like front-running, which can erode trust and financial stability (more on this later).

Track record: a reliable and battle-tested oracle is preferable, to ensure accuracy and prevent potential vulnerabilities.

Crypto-economic guarantees: the oracle's consensus mechanism needs guarantees in place to deter potential attacks.

Attack costs and trust assumptions: the oracle's security is vital, along with the trust assumptions of the network where the system will be deployed (e.g. Mainnet).

Node decentralization & presence on Ethereum mainnet: it is essential that the nodes are as decentralized as possible. The oracle should also have a presence on Ethereum mainnet.

So, taking all of this into account, is there one oracle that can solve it all?

[photo: /LiquityProtocol/status/1691845067198112218/photo/1]

As the infographic above shows, no oracle is perfect; they all come with different trade-offs. In our initial research, two stood out on the decentralization and immutability criteria: Tellor & the Uniswap v3 TWAP. Both, however, have 'deal breakers' that wouldn't work for Liquity v2.

In Uniswap's case, the concern stems from liquidity guarantees: once v4 launches, will the liquidity on Uniswap v3 stick around? Considering our goal is to build a protocol that stands the test of time, that is a serious concern.

In Tellor's case, there is the 'price dispute' period. Tellor's price feeds have a dispute policy under which prices can be disputed for a period of 10-20 minutes, to allow time for any 'fake' prices to be weeded out. As a reminder, v2 will have a component of leverage built into it, which requires low-latency price feeds; waiting 10-20 minutes unfortunately becomes a deal-breaker.

So, considering all this, why are a low-latency oracle & a solution to front-running critical to v2? Join us tomorrow to find out!
Liquity USD
Twitter
16 Aug 2023, 14:51
Oracles in DeFi
Liquity USD
Twitter
16 Aug 2023, 07:36
The stable $LUSD from @LiquityProtocol is now collateral on Aave V3. Have fun.
Liquity USD
Twitter
15 Aug 2023, 22:06
another one 📢 @BaoCommunity will be swapping the DAI in their PSM for LUSD 🔵 They also have a proposal open to swap the USDC in their treasury to LUSD ✨ Check out their thread to learn more 👇