Fair Play or Foul Play? Ordering a Better Blockchain Game
Secret random oracle technology enforces fairness in transaction ordering, reduces hidden costs, and builds trust in decentralized systems.
When we talk about blockchain systems, much of the conversation is about transparency, decentralization, and immutability. But one of the most overlooked aspects is order: the sequence in which transactions are confirmed and written onto the ledger. At first glance, this may sound like a technical detail. In practice, however, transaction ordering has become one of the most consequential (and controversial) features of blockchain networks.
Most blockchains today rely on deterministic processes to decide which transactions come first. On paper, this seems neutral: the network picks an order and applies it consistently. But in reality, this “neutrality” masks a systemic bias. The order in which transactions are confirmed is often dictated by factors that are irrelevant to the business or user submitting the transaction (for example, whether they happen to sit physically closer to well-connected network nodes, or whether their internet connection is slightly faster).
The result is a situation where some participants consistently get better outcomes, not because they are earlier or more legitimate, but simply because of geography or network latency. This creates a form of structural unfairness.
Worse, it opens the door to highly profitable manipulations. In financial markets, front-running and arbitrage strategies have long been recognized risks. In blockchain networks, these behaviors fall under what is now called Maximal Extractable Value (MEV). Here, specialized actors manipulate transaction ordering to “sandwich” user trades (buying just before a victim’s trade and selling just after it), capture liquidation profits in lending protocols, or arbitrage across decentralized exchanges. Even if every player in the system is acting “honestly” by protocol rules, the system design itself unintentionally enables unfairness.
For businesses and policymakers, this is more than a technicality. It threatens trust in the infrastructure, creates hidden costs for end-users, and exposes the entire ecosystem to reputational and regulatory risks. If fairness is the foundation of market integrity, then transaction ordering in blockchains represents a crack in that foundation.
The research at hand approaches this problem by doing something deceptively simple yet powerful: it defines what “fair ordering” actually means in formal, measurable terms.
Two specific guarantees are introduced:
- ϵ-Ordering equality: Imagine two transactions that are identical in all the relevant ways—they were submitted at the same time and contain the same type of request. In a fair system, neither transaction should have a structural advantage. The research formalizes this as ϵ-ordering equality, meaning the probability of either transaction being placed ahead of the other should differ by no more than a very small margin (ϵ). In plain terms, it ensures a “coin-flip level of fairness” among equals.
- Δ-Ordering linearizability: Now imagine two transactions submitted at different times. If one is clearly earlier, fairness requires it to appear first in the final order—provided the gap is meaningful enough. This is Δ-ordering linearizability: if two transactions are separated by at least Δ in real time, the earlier one must always precede the later one.
Together, these two guarantees strike a balance: randomization where it’s fair to randomize, determinism where time clearly matters.
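For readers who prefer symbols, the two guarantees can be written compactly. The notation below is ours rather than the paper’s: a ≺ b means transaction a is placed before transaction b in the final order, and t(x) is x’s submission time.

```latex
% One plausible formalization (notation ours, not necessarily the paper's).

% epsilon-ordering equality: among equally qualified transactions,
% neither side wins the placement more often than a near-coin-flip allows.
\bigl|\Pr[a \prec b] - \Pr[b \prec a]\bigr| \le \epsilon

% Delta-ordering linearizability: a sufficiently earlier transaction
% always comes first in the final order.
t(b) - t(a) \ge \Delta \;\Longrightarrow\; a \prec b
```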
How do you enforce these guarantees in a distributed system? The answer lies in carefully introducing randomness, but doing so in a way that is both auditable and tamper-proof.
The researchers, from Microsoft, Cornell, UW, and other institutions, propose a mechanism called the Secret Random Oracle (SRO). It works by generating random values in secret, revealing them only after commitments are locked in, and making them publicly verifiable. This ensures no participant can game the randomization process.
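To make that lifecycle concrete, here is a minimal Python sketch of the commit-then-reveal pattern the SRO follows. The class and method names are ours, and the local random generator is a stand-in: real deployments would source the secret from a hardware enclave or a threshold VRF, as described below.

```python
import hashlib
import secrets

class SecretRandomOracle:
    """Toy commit-then-reveal randomness source (illustrative only)."""

    def commit(self) -> bytes:
        # 1. Generate the random seed in secret (a real SRO would draw it
        # from a hardware enclave or a threshold VRF, not a local CSPRNG)...
        self._seed = secrets.token_bytes(32)
        # ...and publish only a binding commitment (a hash) to it.
        return hashlib.sha256(self._seed).digest()

    def reveal(self) -> bytes:
        # 2. Reveal the seed only after the transaction batch is locked in,
        # so nobody could have tailored submissions to a known seed.
        return self._seed

    @staticmethod
    def verify(commitment: bytes, seed: bytes) -> bool:
        # 3. Anyone can audit that the revealed seed matches the commitment.
        return hashlib.sha256(seed).digest() == commitment

# Commit before ordering, reveal after, verify publicly.
sro = SecretRandomOracle()
commitment = sro.commit()   # published before transactions are fixed
seed = sro.reveal()         # published after the batch is sealed
assert SecretRandomOracle.verify(commitment, seed)
```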
Two technical implementations are suggested:
- One using trusted hardware enclaves (like Intel SGX) to generate randomness securely.
- Another using cryptographic tools (threshold verifiable random functions) that require no hardware trust, only distributed agreement.
Finally, they integrate this into a new consensus protocol called Bercow, which builds on widely used blockchain consensus designs. Bercow essentially inserts randomness at the ordering stage, achieving the fairness guarantees without rewriting the entire consensus mechanism.
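A rough sketch of what “randomness at the ordering stage” can look like appears below. All names and the Δ value are invented for illustration, and the grouping rule is deliberately simplified (Bercow’s actual mechanism is more careful, particularly with chains of near-simultaneous timestamps): transactions separated by at least Δ keep their time order, while transactions inside the same window are shuffled with the revealed SRO seed.

```python
import random
from dataclasses import dataclass

DELTA = 0.050  # fairness window in seconds (illustrative value)

@dataclass
class Tx:
    tx_id: str
    timestamp: float  # submission time, in seconds

def fair_order(txs: list[Tx], sro_seed: bytes) -> list[Tx]:
    """Deterministic across Delta-separated groups; randomized within them."""
    txs = sorted(txs, key=lambda tx: tx.timestamp)
    # Greedy grouping: a transaction joins the previous group when its gap
    # to that group's latest member is under DELTA. (A real protocol must
    # handle chained near-ties more carefully than this.)
    groups: list[list[Tx]] = []
    for tx in txs:
        if groups and tx.timestamp - groups[-1][-1].timestamp < DELTA:
            groups[-1].append(tx)
        else:
            groups.append([tx])
    # Seed the shuffle with the SRO's revealed value: the permutation is
    # unpredictable before the reveal, yet reproducible by any auditor after.
    rng = random.Random(sro_seed)
    ordered: list[Tx] = []
    for group in groups:
        rng.shuffle(group)  # coin-flip fairness among "equally qualified" txs
        ordered.extend(group)
    return ordered
```

Because the seed is fixed by the SRO’s public reveal, anyone can re-run the shuffle and confirm the published order, which is what makes the randomness auditable rather than a trust-me coin flip.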
The result is a framework that formalizes fairness, enforces it through auditable randomness, and packages it into a protocol design that existing blockchain systems can adopt with minimal disruption.
Once the research team had defined fairness and proposed a method to enforce it, the next logical step was to test whether the idea could work under real-world conditions. Rather than relying on theory alone, they built and deployed their protocol in a distributed network designed to mimic the complexities of global blockchain infrastructure.
The experiments were set up to capture two realities of today’s blockchain networks. First, that participants are not neatly clustered in one data center but are spread across continents. Second, that different ordering rules can favor certain geographies, creating a de facto advantage for some users and a disadvantage for others.
To test this, the researchers created a network of dozens of nodes placed in locations that mirrored the actual geographic distribution of Ethereum participants. This ensured the testbed reflected the same latency differences that occur in practice—for example, the time it takes for a transaction from New York to reach Asia versus one sent locally within Europe.
With this environment in place, they compared their fairness-enhanced protocol against existing consensus approaches that are widely used today. The focus was not on throughput—the number of transactions the system could process—but on the subtler question of who wins and who loses in the ordering race.
The results were telling. Baseline systems showed clear geographic bias: transactions originating closer to key nodes consistently ended up earlier in the ledger. In contrast, when randomness was introduced through the new protocol, those structural advantages were sharply reduced. The distribution of transaction ordering began to look much closer to a level playing field.
The experiments also showed that fairness improvements did not come at the cost of breaking the system’s core functions. Transaction processing remained stable, and the ability to handle normal volumes was preserved. While the injection of randomness did add measurable overhead, it was modest—an incremental delay rather than a wholesale slowdown.
What these findings highlight is that by narrowing the gap in ordering advantages, the protocol changes the incentive structure for participants. Strategies like front-running or geographic positioning become less reliable as profit centers, since they no longer guarantee consistent priority. The result is not the elimination of all competitive behavior, but a shift: instead of exploiting the system’s mechanics, participants must compete on fundamentals such as liquidity provision, risk management, or trading strategy.
This is crucial for long-term trust. By reducing the payoff from gamesmanship, the system lowers the hidden tax that ordinary users pay when their transactions are routinely disadvantaged. At the same time, it signals to regulators and institutional stakeholders that fairness is not just a talking point but something that can be operationalized and verified.
The researchers approached evaluation with the mindset of operators and risk managers, not just engineers. They asked: how would one know, in practice, whether fairness was being achieved?
Three criteria emerged as benchmarks:
- Fairness metrics: Did the ordering probabilities between “equally qualified” transactions converge toward parity? In other words, were structural advantages reduced to an acceptable margin?
- Fidelity to time: When transactions were separated by a meaningful real-world interval, did the system consistently respect the earlier one? This ensured randomness did not blur legitimate time-based priority.
- System performance: Could the fairness enhancements be integrated without undermining the baseline requirements of consensus—namely safety, reliability, and throughput?
These criteria reflect a pragmatic balance: fairness must be more than a theoretical aspiration, but it cannot come at the expense of the operational stability that underpins trust in a financial system.
In essence, success was defined not simply as building a more “just” system in the abstract, but as proving that fairness could coexist with the economic and technical realities of blockchain networks. The experiments demonstrated that this balance is achievable, and that it can be measured in ways meaningful to both engineers and decision-makers.
The way this solution was evaluated underscores a broader lesson: fairness in transaction ordering is not an abstract goal but a measurable performance standard. By grounding success in observable outcomes, the researchers provide a roadmap for how industry actors—whether protocols, exchanges, or infrastructure providers—can benchmark themselves against meaningful fairness criteria.
Unlike many academic exercises that stop at “proof of concept,” this work emphasized evaluative clarity. Fairness was assessed along lines that could matter to operators and stakeholders alike:
- Probability gaps: How much better off one transaction was compared to another when they were equally qualified. Narrowing these gaps meant the system was achieving its fairness mandate.
- Ordering fidelity: Ensuring that when a transaction clearly came first in time, the protocol respected that fact, regardless of network noise.
- Operational resilience: Measuring whether these fairness gains were secured without undermining consensus reliability or overburdening system latency.
This triad of metrics provides a template for others. It turns fairness from a subjective aspiration into something quantifiable, testable, and, critically, comparable across competing platforms.
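As a concrete illustration of the first metric, the probability gap between two equally qualified transactions can be estimated by replaying many ordering rounds and comparing win rates. The helper below is a minimal sketch with hypothetical data; the ϵ tolerance is an operator’s choice, not a value from the paper.

```python
import random

def probability_gap(outcomes: list[str]) -> float:
    """Estimate |Pr[a before b] - Pr[b before a]| from repeated trials.

    `outcomes` records, per round, which of two equally qualified
    transactions ("a" or "b") was ordered first.
    """
    p_a = sum(1 for o in outcomes if o == "a") / len(outcomes)
    return abs(p_a - (1 - p_a))  # 0.0 is a perfect coin flip

# Hypothetical replay of 10,000 ordering rounds under a fair protocol.
trials = [random.choice(["a", "b"]) for _ in range(10_000)]
epsilon = 0.02  # illustrative tolerance chosen by the operator
gap = probability_gap(trials)
print(f"empirical ordering gap: {gap:.4f} (target: <= {epsilon})")
```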
Of course, no solution is perfect, and the research is transparent about where compromises must be made.
The most important trade-off is between fairness and timeliness. Introducing randomness reduces structural bias, but it also means that the system sometimes waits longer to ensure fairness. In practical terms, this translates to additional fractions of a second in latency. For most users, this is imperceptible; but in markets where milliseconds can shape strategy, this trade-off is real and must be acknowledged.
Another limitation is trust in the randomness mechanism. The version using trusted hardware relies on vendors like Intel, raising questions of centralization and supply-chain trust. The cryptographic version avoids that dependency but comes with higher computational overhead. Industry participants will need to decide which model aligns better with their governance philosophy and risk tolerance.
Finally, the framework has so far been tested primarily around time as the relevant ordering feature. While time is foundational, other features—such as fee levels or transaction type—could be integrated into the fairness model. This represents both a limitation and an opportunity for expansion.
Looking forward, several directions emerge. One is broader integration into existing blockchain stacks. Because the proposed solution is modular—focused on the ordering layer rather than consensus itself—it can, in principle, be layered onto many existing systems without a full rebuild. This lowers barriers to adoption.
Another direction is generalization to multi-chain and cross-chain environments. As activity fragments across rollups, sidechains, and bridges, fairness at the level of a single ledger will not be enough. Applying the same principles across domains will be an essential next step.
Finally, there is room for policy and standards alignment. By formalizing fairness in measurable terms, this research provides a foundation for regulators and industry consortia to establish benchmarks. Instead of vague commitments to “equitable treatment,” protocols could report on quantitative fairness metrics the way banks report on capital adequacy or liquidity ratios.
At its core, the impact of this work is to show that fairness in blockchain transaction ordering is not a lost cause or a philosophical aspiration. It can be engineered, audited, and continuously improved.
For businesses, this offers a pathway to reduce hidden costs, improve user experience, and preempt regulatory criticism. For policymakers, it demonstrates that decentralized systems can evolve toward standards of fairness recognizable from traditional markets. And for the broader ecosystem, it strengthens the legitimacy of blockchain as a financial and transactional infrastructure.
In sum, by translating fairness into something that can be defined, enforced, and measured, this research reframes a long-standing vulnerability into an addressable design choice. The implications go beyond blockchains: they suggest that any distributed system where order matters can—and perhaps must—be held to a fairness standard that is both operational and strategic.
Further Reading
- Mallari, M. (2025, September 13). Shuffling the deck to deal fair. AI-First Product Management by Michael Mallari. https://michaelmallari.bitbucket.io/case-study/shuffling-the-deck-to-deal-fair/
- Zhang, Y., Ni, H., Basu, S., Cohen, S., Yin, M., Alvisi, L., van Renesse, R., Chen, Q., & Zhou, L. (2025, September 11). Ordered consensus with equal opportunity. arXiv. https://arxiv.org/abs/2509.09868