The EVM is designed to run safe, deterministic, synchronous code for smart contracts. Its foundation has changed little since it was originally conceived in 2014, and it now powers millions of smart contracts written mostly in Solidity.

The core features are:

  • Safety: every operation is executed in a controlled environment to prevent access to the underlying host machine. Memory and stack usage are strictly limited to avoid exploits.
  • Determinism: the EVM ensures that, given the same input and blockchain state, every node in the network will execute the smart contract and produce exactly the same result. This determinism is critical to consensus across the decentralized network.
  • Synchronicity: unlike traditional multithreaded or asynchronous systems, the EVM runs instructions in a single-threaded, step-by-step fashion. This eliminates race conditions and makes it easier to reason about the contract’s state.

This setup has worked well for the most part. The EVM isn’t suited for high-performance or compute-heavy tasks, not only because it’s not optimized for them, but also because such operations are expensive in gas.

To handle these cases, Ethereum uses precompiles: native functions written in system-level languages like C or Rust. Though they run outside the EVM, they can be called from smart contracts using a special pseudo-address via staticcall. To the developer, they behave like smart contracts, but they execute far more efficiently.
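To make the dispatch mechanism concrete, here is a minimal Python sketch of how a client maps precompile pseudo-addresses to native functions. Addresses 0x02 (SHA-256) and 0x04 (identity) are real mainnet precompile addresses, but the dispatch code itself is illustrative, not Reth’s actual implementation.

```python
import hashlib

# A fixed table maps reserved low pseudo-addresses to native functions.
PRECOMPILES = {
    0x02: lambda data: hashlib.sha256(data).digest(),  # SHA-256 precompile
    0x04: lambda data: data,                           # identity ("datacopy")
}

def static_call(address: int, calldata: bytes) -> bytes:
    """Model of a staticcall hitting a precompile pseudo-address."""
    native_fn = PRECOMPILES.get(address)
    if native_fn is None:
        raise ValueError(f"no precompile at address {address:#04x}")
    return native_fn(calldata)

digest = static_call(0x02, b"hello")
```

From the contract’s point of view the call looks like any other external call; the client simply intercepts the address and runs native code instead of EVM bytecode.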

Because precompiles run natively, they can be designed to leverage full access to the CPU—or even the GPU—making them ideal for running high-compute workloads. Imagine precompiles executing neural networks using libraries like Transformers; this would effectively bring high-performance AI models into the EVM ecosystem, enabling complex computations to run seamlessly within smart contract logic.

The problem

The main issue with the current precompile model is that it can violate block time constraints, leading to transaction timeouts. For example, on Load Network the block time is 1 second with single-slot finality, meaning each transaction must be fully processed within that window. While this works for most standard operations, a precompile performing intensive or repetitive computations may not return a result fast enough, causing the transaction to fail.

The solution

As part of our research program, we’ve been exploring the concept of Asynchronous Precompiles. Unlike traditional precompiles, these would execute asynchronously, returning immediately with a promise-like acknowledgment. The actual response would arrive later via a hook mechanism, allowing the system to continue processing while the high-compute task runs in the background.
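As a rough model of the pattern, the flow can be sketched in Python: the call returns immediately with a promise-like task id, the computation runs in the background, and a hook receives the result once it is ready. All names here are hypothetical; this is a sketch of the mechanism, not Load Network’s implementation.

```python
import threading
import uuid

class AsyncPrecompile:
    """Sketch of a precompile that acknowledges immediately and responds later."""

    def call(self, payload: bytes, hook) -> str:
        task_id = uuid.uuid4().hex  # promise-like acknowledgment, returned at once

        def worker():
            # Stand-in for a heavy computation (e.g. model inference).
            result = payload.upper()
            hook(task_id, result)  # the response arrives later via the hook

        threading.Thread(target=worker, daemon=True).start()
        return task_id

# Usage: the caller keeps processing and is notified when the task completes.
done = threading.Event()
responses = {}

def on_response(task_id: str, result: bytes):
    responses[task_id] = result
    done.set()

precompile = AsyncPrecompile()
tid = precompile.call(b"some heavy input", on_response)
done.wait(timeout=5)
```

The key property is that `call` never blocks on the heavy work, so a 1-second block window is never at risk.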

The Response

From the user’s perspective, the precompile would behave like a contract with its own address. Once triggered, its response would be sent as a transaction back to the original caller (msg.sender). This response is internally injected into the transaction pool—bypassing gas fees and external validation, since it’s part of the system’s internal communication.

The response data would be included in the transaction’s calldata, encoded in raw bytes. Whether the result is a string, number, JSON, or another format, it would be delivered as a byte array, leaving the smart contract to decode and process it as needed. For the moment, Load Network caps responses at 10 MB, and they map to Solidity’s dynamic bytes type.

For example, the following hex-encoded byte array:

0x74657374696e67206461746120666f722064796e616d6963206279746573

decodes to the string “testing data for dynamic bytes”.
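The decoding step can be reproduced in a couple of lines of Python:

```python
# Decode the raw calldata bytes from the example above.
raw = bytes.fromhex("74657374696e67206461746120666f722064796e616d6963206279746573")
text = raw.decode("utf-8")
print(text)  # testing data for dynamic bytes
```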

Current Obstacles

Smart contracts can’t initiate transactions
One of the core issues is that smart contracts aren’t able to trigger transactions themselves. Working around this by making the contract behave like an EOA means distributing a private key to operators, which is obviously a serious security risk. The alternative, multi-sig signatures, is possible but complex in decentralized networks. So in practice, some kind of off-chain component becomes necessary. That breaks the assumption of full on-chain execution and adds operational overhead.

Handling async on-chain is non-trivial
Lazy execution means you need to track the state of computations that don’t finish immediately. There’s no built-in way to do that cleanly on-chain. In systems programming, you’d have structures like callbacks or something like epoll to manage pending tasks. On-chain, you'd need to implement that from scratch: store the state, track completion, manage retries, etc. It’s doable, but it’s complicated and comes with cost and coordination trade-offs.
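A sketch of that bookkeeping in Python: a registry stores each pending computation, records completion, and caps retries. Every name here is illustrative; an on-chain version would implement the same state machine in contract storage.

```python
from enum import Enum

class Status(Enum):
    PENDING = 0
    COMPLETED = 1
    FAILED = 2

class TaskRegistry:
    """Illustrative registry for computations that don't finish immediately."""
    MAX_RETRIES = 3

    def __init__(self):
        # task_id -> {"status": Status, "retries": int, "result": bytes | None}
        self.tasks = {}

    def open(self, task_id: str):
        self.tasks[task_id] = {"status": Status.PENDING, "retries": 0, "result": None}

    def complete(self, task_id: str, result: bytes):
        self.tasks[task_id].update(status=Status.COMPLETED, result=result)

    def retry(self, task_id: str):
        task = self.tasks[task_id]
        task["retries"] += 1
        if task["retries"] >= self.MAX_RETRIES:
            task["status"] = Status.FAILED  # give up after too many attempts
```

Even this toy version shows the trade-off: every pending task consumes storage, and every state transition is an extra operation the contract must pay for.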

No consistent way to validate external compute
If you rely on off-chain computation, you eventually need to prove that it was done correctly. But the ecosystem today is fragmented: some projects use ZK proofs, others rely on trusted hardware (TEEs), and some on optimistic verification. Each of those has different assumptions and interfaces. Without a standardized mechanism for compute verification, plugging these into a general-purpose system like Reth becomes messy and fragile.

Asynchronous Precompiles & HyperBEAM

Given the HyperBEAM milestone 3 beta 1 release (Milestone-3-BETA-1), it's very possible to leverage the distributed network of HyperBEAM nodes to scale the offchain implementation of these asynchronous precompiles, adding an additional layer of multi-threading and parallelization.

M3 introduces the ~greenzone@1.0 device: A confidential, trust-minimized execution environment that allows nodes operating inside an AMD TEE to securely share a private key. A potential solution to Open Questions #1.

Additionally, the modularity of HyperBEAM's design with the devices will allow us to custom-encapsulate each async precompile in a device that is interoperable with the rest of the HyperBEAM device stack, most importantly, the AO network (AO-Core Protocol - hb_ao device).

Zooming out, asynchronous precompiles could execute inside TEEs, interoperate with both the EVM and the AO Network, and move computation off-chain while preserving on-chain verifiability, interoperability, and receipt permanence and provenance via Arweave.

For example, a general-purpose EVM Asynchronous Precompiles network leveraging these characteristics, with additional financial security (e.g. staking, subledger consensus on HyperBEAM), could be seen as a new generation of EigenLayer Actively Validated Services (AVS).

Open questions

  1. If smart contracts and precompiles can’t initiate transactions by themselves, could we modify the Reth engine so that at least precompiles are treated as transaction originators?
  2. How could we verify the responses of precompiles across nodes if they have to run on multiple nodes?

Conclusion

This has been a quick peek into async precompiles with block-time guarantees. Our primary motivation is to improve the excellent Reth engine so we can extend its functionality and ultimately evolve the paradigm of how the EVM works, or should work.