Big O Notation and Time Complexity Analysis for Uniswap V3 Smart Contracts

Based on content by Gayle Laakmann McDowell

Big O time is the language and metric we use to describe the efficiency of algorithms. In the context of Uniswap V3, Big O helps us reason about how factory operations, pool lookups, pool creation, and fee configuration scale as the protocol grows. Not understanding it thoroughly can hurt you when designing smart contracts or analyzing gas costs.

An Analogy (Uniswap V3 Factory)

Imagine the following scenario: You have a decentralized exchange protocol, and you need to let users trade between two ERC‑20 tokens. To do this, you must either find an existing Uniswap V3 pool or create a new one using the factory.

Your first instinct might be to scan through all existing pools and check whether one already matches the token pair and fee tier. That thought is reasonable—but only half correct.

If the number of pools were very small, this approach would be fine. But Uniswap V3 is designed to support many tokens, many fee tiers, and many pools.

Instead, the factory uses direct mappings (for example, token pair + fee → pool address). With this design, finding a pool takes essentially the same amount of time regardless of how many pools already exist.
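
The factory's lookup is backed by a nested mapping keyed by the two token addresses and the fee. The sketch below mirrors the shape of that storage (the getPool name matches the public Uniswap V3 interface, but the contract itself is a simplified illustration, not the real factory):

pragma solidity ^0.8.0;

contract FactoryLookupSketch {
    // token0 => token1 => fee => pool address (the zero address means "no pool yet")
    // The real factory stores each pool under both token orderings, so lookup order does not matter.
    mapping(address => mapping(address => mapping(uint24 => address))) public getPool;

    // O(1): a single storage read, independent of how many pools exist
    function lookup(address tokenA, address tokenB, uint24 fee) external view returns (address) {
        return getPool[tokenA][tokenB][fee];
    }
}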

In other words, as the protocol grows, the structure of the solution matters more than raw speed for small inputs.

Time Complexity

This is what asymptotic runtime, or Big O time, means in the context of Uniswap V3.

  • Direct pool lookup (getPool): O(1). Given tokenA, tokenB, and fee, the factory performs a constant-time lookup. The time does not grow with the number of pools.
  • Naive pool search (hypothetical): O(N), where N is the number of pools. The more pools that exist, the longer the lookup would take.

No matter how small the constant cost of an O(N) scan might be, a constant-time lookup will eventually outperform it as the protocol scales.
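
To make the contrast concrete, here is a sketch of what the naive O(N) search would look like if pools were kept in an array instead of a mapping. The PoolInfo struct, allPools array, and findPool function are hypothetical names invented for this illustration; nothing like this exists in the real factory:

pragma solidity ^0.8.0;

contract NaivePoolSearchSketch {
    struct PoolInfo {
        address token0;
        address token1;
        uint24 fee;
        address pool;
    }

    // Hypothetical: every pool ever created, stored in an array
    PoolInfo[] public allPools;

    // O(N): runtime and gas grow linearly with the number of pools
    function findPool(address token0, address token1, uint24 fee) external view returns (address) {
        for (uint256 i = 0; i < allPools.length; i++) {
            PoolInfo storage p = allPools[i];
            if (p.token0 == token0 && p.token1 == token1 && p.fee == fee) {
                return p.pool;
            }
        }
        return address(0);
    }
}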

Common runtimes you’ll encounter when reasoning about protocol logic include O(1), O(log N), and sometimes O(N). Exponential runtimes are almost always unacceptable on-chain.

You can also have multiple variables in a runtime. For example, iterating over A fee tiers and B token pairs would be O(A × B).

Recursive Runtimes (Protocol Design Perspective)

Consider a hypothetical function that recursively initializes state for a pool, where each step triggers two additional initialization steps until some base condition is reached.

// Hypothetical illustration (written as a contract member); nothing like this exists in Uniswap V3
function init(uint256 n) internal pure {
    // Base case: stop recursing
    if (n <= 1) return;
    // Each call spawns two more calls, forming a binary call tree of depth N
    init(n - 1);
    init(n - 1);
}

Many people see two recursive calls and jump to O(N²). This is incorrect.

Instead, imagine the call structure as a tree. Each call creates two more calls, and this continues until the depth reaches N.

The tree has:

  • Depth: N
  • Branching factor: 2

The total number of calls is:

2⁰ + 2¹ + 2² + … + 2ᴺ = 2ᴺ⁺¹ − 1

This gives a runtime of O(2ᴺ).

In smart contract development, this kind of growth is completely impractical; even small values of N would exceed the block gas limit. This is why recursive designs with branching behavior are avoided in Uniswap V3 contracts. If you browse the Uniswap V3 NFT position manager at https://vscode.blockscan.com/ethereum/0xC36442b4a4522E871399CD717aBDD847Ab11FE88, you will find no O(N) calls in the core logic; the only for loop I found was in Multicall.sol.

The space complexity here would be O(N), because only one branch of the recursion exists on the call stack at any given time.
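
For contrast, the loop mentioned above in Multicall.sol is a simple linear pass: O(N) in the number of encoded calls the user submits, with no branching recursion. The sketch below paraphrases that pattern in simplified form; see the linked contract for the real implementation:

pragma solidity ^0.8.0;

contract MulticallSketch {
    // O(N): one delegatecall per encoded call; gas grows linearly with data.length
    function multicall(bytes[] calldata data) external payable returns (bytes[] memory results) {
        results = new bytes[](data.length);
        for (uint256 i = 0; i < data.length; i++) {
            (bool success, bytes memory result) = address(this).delegatecall(data[i]);
            require(success, "sub-call failed");
            results[i] = result;
        }
    }
}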

Amortized Time (Factory Configuration)

The Uniswap V3 Factory supports enabling new fee amounts via enableFeeAmount. This is not something that happens frequently, but when it does, it may involve additional storage writes.
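
A minimal sketch of that administrative path, loosely modeled on the factory's enableFeeAmount (simplified; the real function has stricter bounds checks on the fee and tick spacing):

pragma solidity ^0.8.0;

contract FeeConfigSketch {
    address public owner;

    // fee => tick spacing (zero means the fee tier is not enabled)
    mapping(uint24 => int24) public feeAmountTickSpacing;

    event FeeAmountEnabled(uint24 indexed fee, int24 indexed tickSpacing);

    constructor() {
        owner = msg.sender;
    }

    // Rare administrative write: a few checks plus one storage slot, still O(1)
    function enableFeeAmount(uint24 fee, int24 tickSpacing) external {
        require(msg.sender == owner, "not owner");
        require(tickSpacing > 0, "invalid tick spacing");
        require(feeAmountTickSpacing[fee] == 0, "fee already enabled");
        feeAmountTickSpacing[fee] = tickSpacing;
        emit FeeAmountEnabled(fee, tickSpacing);
    }
}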

Most interactions—such as querying the owner, fetching tick spacing, or looking up pools—are constant time.

Amortized analysis allows us to say: although some operations are more expensive when they occur, they happen rarely enough that the average cost per operation remains small.

This is similar to how dynamic arrays resize: occasionally expensive, but cheap on average.
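
Concretely: if an array doubles its capacity as it grows, then by the time X elements have been inserted, the total copying work is about X/2 + X/4 + … + 2 + 1, which is less than X. Adding the X insertions themselves gives roughly 2X work in total, so each insertion costs O(1) amortized, even though a single unlucky insertion costs O(X).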

In protocol terms, administrative operations may be costly, but user-facing operations remain efficient.

Log N Runtimes (Price and Tick Reasoning)

O(log N) commonly appears in Uniswap V3 when reasoning about ticks and price ranges.

Ticks represent discretized price points. Searching for the next initialized tick or navigating price ranges conceptually resembles repeatedly halving a search space.

Each step narrows the possible range of prices until the correct tick is found.

Whenever the problem space is reduced by a constant factor at each step, the runtime is likely O(log N).
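
As a deliberately simplified illustration of this halving pattern, here is a binary search over a hypothetical sorted array of initialized ticks. This is not how Uniswap V3 actually finds the next tick (the core contracts use a tick bitmap), but it shows why halving the search space at each step yields O(log N):

pragma solidity ^0.8.0;

library TickSearchSketch {
    // O(log N): each iteration halves the candidate range [lo, hi)
    // `ticks` must be sorted ascending; returns the index of `target`,
    // or type(uint256).max if it is not present
    function binarySearch(int24[] memory ticks, int24 target) internal pure returns (uint256) {
        uint256 lo = 0;
        uint256 hi = ticks.length;
        while (lo < hi) {
            uint256 mid = lo + (hi - lo) / 2;
            if (ticks[mid] == target) {
                return mid;
            } else if (ticks[mid] < target) {
                lo = mid + 1;
            } else {
                hi = mid;
            }
        }
        return type(uint256).max;
    }
}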

This is a valuable mental model when reasoning about why Uniswap V3 can efficiently handle very fine-grained price ranges.

Drop the Non-Dominant Terms

Suppose a factory-related operation takes O(N + 1) time, where N is the number of pools and the constant work includes validation and event emission.

We drop constants and non-dominant terms, leaving O(N).

Examples:

  • O(N + 1) → O(N)
  • O(N + log N) → O(N)
  • O(2ᴺ + N) → O(2ᴺ)

However, if two variables are independent—such as tokens (A) and fee tiers (B)—an expression like O(A + B) cannot be reduced further.

Multi-Part Algorithms: Add vs. Multiply (Factory Operations)

Suppose you perform two independent factory operations:

  • Iterate over all fee tiers
  • Then iterate over all pools

The runtime is O(F + P), where F is the number of fee tiers and P is the number of pools.

If instead, for each fee tier, you iterate over all pools, the runtime becomes O(F × P).

In other words:

  • “Do this, then do that” → add runtimes
  • “Do this for each time you do that” → multiply runtimes

This distinction is critical when estimating gas usage and avoiding nested loops in smart contracts.
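
A sketch of the two patterns, using hypothetical feeTiers and pools arrays purely to make the counting visible (the real factory deliberately avoids loops like these):

pragma solidity ^0.8.0;

contract AddVsMultiplySketch {
    uint24[] public feeTiers;   // F entries
    address[] public pools;     // P entries

    // "Do this, then do that": F iterations followed by P iterations = O(F + P)
    function sequentialWork() external view returns (uint256 touched) {
        for (uint256 i = 0; i < feeTiers.length; i++) touched++;
        for (uint256 j = 0; j < pools.length; j++) touched++;
    }

    // "Do this for each time you do that": F × P iterations = O(F × P)
    function nestedWork() external view returns (uint256 touched) {
        for (uint256 i = 0; i < feeTiers.length; i++) {
            for (uint256 j = 0; j < pools.length; j++) {
                touched++;
            }
        }
    }
}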

Big O, Big Theta, and Big Omega (Protocol Analysis)

In academic terms:

  • Big O gives an upper bound on runtime.
  • Big Omega gives a lower bound.
  • Big Theta gives a tight bound.

For example, getPool can be described as O(1), Ω(1), and Θ(1).

In industry—and in protocol discussions—we usually say O(1) and mean the tight bound. This is the convention followed here.

Best Case, Worst Case, and Expected Case

Consider a hypothetical pool-creation flow (a simplified code sketch follows the list):

  • Best case: The pool already exists. The factory performs a lookup and reverts immediately. O(1).
  • Worst case: Additional validation or state checks are required before creation. Still bounded and predictable.
  • Expected case: Most calls are simple lookups or straightforward creations with constant complexity.
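
A minimal sketch of that flow, loosely modeled on the factory's createPool (the real function also sorts the token pair, rejects the zero address, and deploys the pool deterministically; this version keeps only the parts relevant to the constant-time argument):

pragma solidity ^0.8.0;

contract PoolCreationSketch {
    mapping(address => mapping(address => mapping(uint24 => address))) public getPool;
    mapping(uint24 => int24) public feeAmountTickSpacing;

    event PoolCreated(address token0, address token1, uint24 fee, address pool);

    constructor() {
        // Pre-enable one fee tier so the sketch is usable on its own
        feeAmountTickSpacing[3000] = 60;
    }

    function createPool(address token0, address token1, uint24 fee) external returns (address pool) {
        require(token0 != token1, "identical tokens");
        // O(1) validation: the fee tier must have been enabled
        require(feeAmountTickSpacing[fee] != 0, "fee not enabled");
        // O(1) "best case" revert: the pool already exists
        require(getPool[token0][token1][fee] == address(0), "pool exists");

        // Hypothetical deployment; the real factory deploys a UniswapV3Pool
        pool = address(new StubPool());

        // Two O(1) storage writes so the lookup works in either token order
        getPool[token0][token1][fee] = pool;
        getPool[token1][token0][fee] = pool;
        emit PoolCreated(token0, token1, fee, pool);
    }
}

contract StubPool {}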

Best, worst, and expected cases describe how runtime behaves under different scenarios.

Big O, Big Omega, and Big Theta describe bounds on that runtime.

Space Complexity

Time is not the only concern in Uniswap V3. Storage usage is equally important.

Factory functions are designed to use a constant amount of additional storage per operation. There is no unbounded growth tied to user input size within a single call.

This careful control of space complexity is essential for keeping gas costs predictable and the protocol scalable.