Crypto Market Maker -- Prove us wrong

Delays in a data-intensive process put the firm's ability to respond to market changes at risk

When one of the largest crypto market makers approached Prospective about their analytics platform’s performance challenges, we were excited to work with them. With over ten years of experience in the space, they have built an impressive technology stack in-house and are embedded in every major sector of the digital asset ecosystem.

The market maker was using Perspective to visualize real-time trading data, but ran into difficulty with one large, risk-related trading dataset. The dataset, refreshed every 15 minutes, consisted of 800,000 rows by 200 columns, and each refresh took over two minutes to apply. This data-intensive process caused delays, raising concerns about data staleness and putting at risk the firm's ability to respond promptly to market changes.
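For a rough sense of the scale involved, the sketch below approximates the original full-refresh pattern: rebuild the whole 800,000 x 200 frame and push it wholesale on every cycle. It assumes the perspective-python `Table` API, and `generate_risk_frame` is a hypothetical stand-in for the firm's real risk computation.

```python
# Illustrative sketch only: approximates the original full-refresh pattern.
# Assumes the perspective-python Table API; generate_risk_frame() is a
# hypothetical placeholder for the firm's actual risk pipeline.
import numpy as np
import pandas as pd
from perspective import Table

N_ROWS, N_COLS = 800_000, 200
COLUMNS = [f"col_{i}" for i in range(N_COLS)]

def generate_risk_frame() -> pd.DataFrame:
    # Placeholder data; in production this would be the risk pipeline's output.
    return pd.DataFrame(np.random.rand(N_ROWS, N_COLS), columns=COLUMNS)

table = Table({name: float for name in COLUMNS})

# Every 15 minutes the entire frame was regenerated and pushed wholesale,
# so each refresh moved all 800,000 x 200 cells through the update path.
table.update(generate_risk_frame())
```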

Prospective's journey began in Python, a language known for its ease of use but often criticized for its speed limitations, particularly in data-intensive tasks. Over the past 25 years, Python has become a standard in the data science community, thanks to its 'glue language' properties that let it integrate efficiently with libraries written primarily in C or C++. This made data science accessible to Python's vast user base, but it brought its own set of complexities, especially when dealing with parallelism.

We discovered that the firm's application had saturated the main Python thread, creating a bottleneck where updates queued and throttling became inevitable. The application was hampered by Python’s Global Interpreter Lock (GIL), which serializes the execution of threads, limiting the system to a single CPU core at a time. This situation was untenable for a trading platform where speed and responsiveness are paramount.
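The small, self-contained illustration below (not the firm's code) shows why a saturated main thread becomes a hard ceiling: CPU-bound work spread across threads still executes one bytecode stream at a time under the GIL.

```python
# On CPython, four threads doing CPU-bound work take roughly as long as
# running the same work serially, because the GIL allows only one thread
# to execute Python bytecode at a time.
import time
from concurrent.futures import ThreadPoolExecutor

def cpu_bound(n: int = 5_000_000) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
for _ in range(4):
    cpu_bound()
serial = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(lambda _: cpu_bound(), range(4)))
threaded = time.perf_counter() - start

print(f"serial:   {serial:.2f}s")
print(f"threaded: {threaded:.2f}s")  # roughly the same, despite 4 workers
```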

First, we identified that each update was processing all 200 columns, even when only a few values had changed, adding unnecessary delay. Prospective optimized the data update mechanism to modify only the columns that had changed, drastically reducing the computational load and streamlining the update process.
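The sketch below shows the shape of this idea, not Perspective's actual implementation: diff the new frame against the previous refresh and send only the columns that changed, keyed by an index column (the `id` column and the diffing helper are assumptions for illustration).

```python
# Sketch of column-level diffing against an indexed Perspective table.
# The "id" index column and the diffing logic are illustrative only.
import pandas as pd

def changed_columns(prev: pd.DataFrame, new: pd.DataFrame) -> list[str]:
    # Columns in which any cell differs between the two refreshes.
    return [c for c in new.columns if not new[c].equals(prev[c])]

def partial_update(table, prev: pd.DataFrame, new: pd.DataFrame, index: str = "id") -> None:
    cols = [c for c in changed_columns(prev, new) if c != index]
    if not cols:
        return  # nothing changed this cycle; skip the update entirely
    # Send only the changed columns plus the index; unchanged columns on the
    # server are left as they are.
    table.update(new[[index, *cols]])
```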

Second, the Perspective team updated the internal architecture to allow multiple read operations (queries) to execute in parallel, while maintaining a controlled environment for write operations (updates) to prevent data inconsistency. This change enabled the firm to leverage the full potential of its multi-core server, allowing true parallel processing of multiple data requests and improving throughput.
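As a sketch of that policy (not Perspective's C++ internals), a classic readers-writer lock captures the behavior: any number of queries may hold the lock for reading at once, while an update takes exclusive access so no query ever observes a half-applied refresh.

```python
# Illustrative readers-writer lock: concurrent reads, exclusive writes.
import threading

class ReadWriteLock:
    def __init__(self) -> None:
        self._readers = 0
        self._readers_lock = threading.Lock()  # guards the reader count
        self._writer_lock = threading.Lock()   # held by a writer, or by the first reader

    def acquire_read(self) -> None:
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:
                self._writer_lock.acquire()

    def release_read(self) -> None:
        with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:
                self._writer_lock.release()

    def acquire_write(self) -> None:
        self._writer_lock.acquire()

    def release_write(self) -> None:
        self._writer_lock.release()

lock = ReadWriteLock()

def run_query(query) -> None:
    lock.acquire_read()
    try:
        query()   # many queries may run concurrently
    finally:
        lock.release_read()

def apply_update(update) -> None:
    lock.acquire_write()
    try:
        update()  # updates run one at a time, with no readers active
    finally:
        lock.release_write()
```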

Perspective's core engine, built in C++, avoids Python's GIL entirely, enabling significant parallelism. This approach both improves performance and underpins Perspective’s versatility as a platform. Although initially designed to operate within a single browser, the codebase extends to server-hosted environments; with minimal modifications, it supports a vast network of users, all interacting with the same application originally intended for individual browser use.
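Under the assumption that the engine releases the GIL while a view is evaluated and serialized in C++, queries fanned out across a thread pool can genuinely occupy separate cores. The sketch below uses the perspective-python `view()` and `to_arrow()` calls on a deliberately tiny table, purely for illustration.

```python
# Sketch only: fan out several snapshot queries across worker threads. The
# parallel speedup assumes the C++ engine releases the GIL while it evaluates
# and serializes each view; the tiny table here is purely illustrative.
from concurrent.futures import ThreadPoolExecutor
from perspective import Table

table = Table({"price": float, "size": float})
table.update([{"price": 100.0 + i, "size": float(i)} for i in range(1000)])

def snapshot(_):
    view = table.view()          # heavy lifting happens in the C++ engine
    try:
        return view.to_arrow()   # Arrow serialization also runs natively
    finally:
        view.delete()

# Eight concurrent reads of the same table; with the GIL released in the
# engine, these can execute on separate cores.
with ThreadPoolExecutor(max_workers=8) as pool:
    snapshots = list(pool.map(snapshot, range(8)))
```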

The enhancements accelerated the system's performance and expanded its scalability. With the parallelism improvements, the market maker can use its server's full potential, efficiently distributing tasks across all available resources, regardless of the number of cores or the amount of RAM. This adaptability ensures that as its user base expands or data demands increase, the system can scale, effectively using additional hardware resources. Performance no longer stagnates when the server is upgraded; it remains consistent and optimal irrespective of system size.

Prospective Challenge

Is your problem harder than this? 

If you’re enabling customers and teammates to access, reason about, and visualize their data at similar scale and with similar performance expectations, we’d love to talk about how we could simplify and enhance your user experience. We’re always happy to chat @ https://prospective.co/meet-eric