...well, that's if Spotify were artist-controlled, transparent, and built for participation, not passive streaming.
In an industry where art means monkey jpgs and web3 means DEXs, it's exciting to see a project with a huge real-world vision that uses Arweave and AO to do what would be impossible in web2:
Global-scale, real-time interactive experiences that last for hundreds of years.
Event spaces, music venues and galleries have a dead screen problem. Even art-focused digital culture events struggle to fill the hundreds of high-end screens at their disposal with compelling content, often resorting to static images or looped adverts that don’t engage attendees.
Running high-quality experiences is usually technically demanding and expensive. And for artists, who find it hard to collect royalties when their art is displayed in the wild, the incentive to cooperate isn't there.
There was no “Spotify for interactive art” that venues could easily plug into their existing setups to provide memorable experiences, but with OLTA that changes.
The state of OLTA today
After a hiatus while migrating, OLTA's persistent state artworks are now back online and ready for exhibition.
The OLTA app wraps all deployed artworks in a gallery-friendly playlist UI so that venues can remotely control which experiences go out to which screens, and track interactions.
OLTA has worked with 15 creators on 26 interactive art projects. Artists and creative studios with portfolios spanning Sotheby’s, Unit London, Samsung and Art Blocks have used OLTA to power events in Bristol, Lisbon, Paris and Berlin.
The OLTA app has an 89% engagement rate and an average session time of 35 minutes – numbers almost unheard of in the digital realm, and very rare in physical spaces.
When we first encountered OLTA in the wild, it was powering an interactive artwork (Connect) on a large screen at an Arweave event.
Connect is a canvas of vertices, each representing an interaction from the audience - or anyone in the world. Visitors could scan a QR code to load the artwork in parallel on their phones and add a new vertex to the piece in real time. Every interaction saved the whole evolving state of the artwork to Arweave, powered by the Warp smart contract layer.
When Warp closed down in 2024, OLTA needed to rebuild the interaction layer on different tech. We met Terence, OLTA's founder, at Arweave Day Berlin earlier this year and spent hours talking about how HyperBEAM could not only be an exact replacement for Warp but also add new alien functionality on top. With HyperBEAM, OLTA can evolve to provide GPU compute and IoT integrations, and handle intensive workloads at near-instant speeds rather than the ~5 seconds of latency we found with Warp.
In September, we partnered with OLTA to provide HyperBEAM infrastructure, AO processes, Load S3 integrations, and a brand new real-time sync engine: Olta VM.
The Olta VM
When we first started talking with Terence about the bottlenecks Olta was facing while building on SmartWeave’s Warp, and about the design that Warp’s constraints had forced on Olta’s backend engine, we immediately saw both the problem and the solution.
For example, Olta was bound by Warp’s finality and throughput: reading a collective’s contract state took ~3-10s, writes added another 1-2s of latency, and there was no way to implement real-time update channels or deliver a real-time PvP game experience.
Yes, Olta’s engine should be as fast as a AAA PvP game: real-time data streaming, real-time state updates, parallelism, and the ability to scale to hundreds of thousands of concurrent users. With those performance goals, we built (and are actively building) the Olta VM on ao and HyperBEAM, with its engine exposed as devices.
We will not go deep into olta-vm’s implementation details in this blog post - stay tuned for vol. 2 - but let’s do a quick dive. olta-vm is built on 3 principles:
- Performance: AAA PvP game
- Compute & verifiability: ao network
- Modularity: HyperBEAM device’ification
olta-vm is a Rust workspace, split into crates, each with its own responsibility. The two main reasons for writing olta-vm in Rust are the language’s strengths (type safety, concurrency, performance, etc.) and NIF alignment with HyperBEAM’s Erlang/BEAM stack (Native Implemented Functions).
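To make the NIF point concrete, here is a minimal, hypothetical sketch (not olta-vm’s actual code) of how a Rust function could be exposed to a BEAM runtime like HyperBEAM’s using the `rustler` crate; the module name and function below are illustrative only:

```rust
// Hypothetical example of exposing Rust logic to the Erlang VM as a NIF via rustler.
// `olta_vm_nif` and `clamp_vertex` are made-up names for illustration.

#[rustler::nif]
fn clamp_vertex(x: f64, y: f64) -> (f64, f64) {
    // Keep an interaction's vertex inside normalised canvas bounds.
    (x.clamp(0.0, 1.0), y.clamp(0.0, 1.0))
}

// Registers the NIF module with the BEAM (recent rustler versions auto-collect #[nif] functions).
rustler::init!("olta_vm_nif");
```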
If you dive into the workspace, you will see that the `vm` crate handles the Lobby and Document actions (CRUD); it is the instruction-calculation core of Olta’s engine. Then there is the `server` crate, the I/O channel of the vm (think of JSON-RPC for the EVM); it handles websocket connections, message propagation, the message lifecycle (server -> vm -> storage -> server) and more.
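As a rough illustration of that lifecycle, here is a hedged sketch; the types and function names below are assumptions for clarity, not olta-vm’s real API:

```rust
// A hedged sketch of the server -> vm -> storage -> server lifecycle.
// All names (Action, Delta, evaluate, handle_message, ...) are illustrative assumptions.

/// Actions the `vm` crate evaluates; in olta-vm these cover Lobby and Document CRUD.
enum Action {
    CreateDocument { lobby: String, body: String },
    UpdateDocument { lobby: String, id: u64, body: String },
    DeleteDocument { lobby: String, id: u64 },
}

/// The result of pure instruction calculation: what changed, ready to persist and broadcast.
struct Delta {
    lobby: String,
    id: u64,
    body: Option<String>, // None means the document was deleted
}

/// `vm` crate responsibility: turn an action into a delta, with no I/O involved.
fn evaluate(action: Action, next_id: u64) -> Delta {
    match action {
        Action::CreateDocument { lobby, body } => Delta { lobby, id: next_id, body: Some(body) },
        Action::UpdateDocument { lobby, id, body } => Delta { lobby, id, body: Some(body) },
        Action::DeleteDocument { lobby, id } => Delta { lobby, id, body: None },
    }
}

/// `server` crate responsibility: receive over websocket, evaluate, persist, then fan out.
fn handle_message(action: Action, next_id: u64) {
    let delta = evaluate(action, next_id); // server -> vm
    persist(&delta);                       // vm -> storage
    broadcast(&delta);                     // storage -> server (push to subscribed clients)
}

fn persist(_delta: &Delta) { /* the storage crate would write to Postgres here */ }
fn broadcast(_delta: &Delta) { /* push the delta over open websocket connections */ }
```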
The third crate is `storage`, a fast optimistic persistent-state provider for olta-vm built on Postgres. This crate is the multiplexer for the upcoming compute & settlement crate: `ao`.
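To show how such a crate could double as both an optimistic store and a multiplexer, here is a small sketch under assumed names; a real implementation would talk to Postgres through a driver such as sqlx or tokio-postgres rather than the in-memory stand-in below:

```rust
// A hedged sketch of the storage role: optimistic writes plus a queue the future
// `ao` crate can drain for settlement. Trait and type names are assumptions.
use std::collections::HashMap;

trait StateStore {
    /// Apply an instruction optimistically so reads reflect it immediately.
    fn apply_optimistic(&mut self, lobby: &str, instruction: &str);
    /// Drain instructions in arrival order for later settlement on ao.
    fn drain_pending(&mut self, lobby: &str) -> Vec<String>;
}

/// In-memory stand-in for a Postgres-backed store.
struct PgStore {
    pending: HashMap<String, Vec<String>>,
}

impl StateStore for PgStore {
    fn apply_optimistic(&mut self, lobby: &str, instruction: &str) {
        self.pending
            .entry(lobby.to_string())
            .or_default()
            .push(instruction.to_string());
    }

    fn drain_pending(&mut self, lobby: &str) -> Vec<String> {
        self.pending.remove(lobby).unwrap_or_default()
    }
}
```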
We decided to build the `server` and `storage` (optimistic) crates first in order to achieve near-instant updates (sub 100-200ms). Once we add the `ao` crate, those two crates will be hoisted above `ao`, acting as a near-instant optimistic compute-simulation layer, while the `ao` crate reads the Lobbies’ instructions from the optimistic DB and settles them chronologically on ao (creating and submitting ao messages). This positions the ao network as the source of truth in the system, running in a non-blocking thread and preserving the latencies Olta needs to bring interactive art, museums and ao together physically.
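As a rough sketch of that split, the loop below runs off the hot path and settles instructions in order; `read_pending_instructions` and `send_ao_message` are hypothetical stand-ins, since the `ao` crate is still being built:

```rust
// Hedged sketch: the optimistic path stays synchronous and fast, while settlement to ao
// happens chronologically on a background thread. Helper names are assumptions.
use std::{thread, time::Duration};

fn spawn_settlement_loop() {
    thread::spawn(|| loop {
        // Read instructions from the optimistic DB in arrival order...
        for instruction in read_pending_instructions() {
            // ...and settle each one on ao, keeping the ao network as the source of truth.
            send_ao_message(&instruction);
        }
        thread::sleep(Duration::from_millis(250));
    });
}

fn read_pending_instructions() -> Vec<String> { Vec::new() } // stand-in for the storage crate
fn send_ao_message(_instruction: &str) { /* create and submit an ao message */ }
```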
Last but not least, olta-vm will operate under HyperBEAM’s micro-modularity concept, embracing Sam Williams’ “HyperBEAM is an AppStore” metaphor (watch Sam’s latest appearance with Drew on X) - Olta will be the first interactive arts platform running on HyperBEAM & ao.
The future of OLTA
Built on the HyperBEAM stack, OLTA can expand its tech in modules. OLTA’s vision is to become the general-purpose experience engine that artists can tap into to build advanced art, and that venues can use to deploy next-gen experiences enabled by VR and IoT.
OLTA is exploring partnerships with hardware manufacturers, from screens to wearables, and integrating Decent Land Labs’ GPU compute device as a way for artists to access resources for intensive graphics remotely.
We’re proud to partner with OLTA and push the permaweb towards a weirder, more interactive future.