Based in Chicago, IL, the Traffic Content team ingests tens of thousands of data points every second from sources around the globe, including fixed sensors, toll tag systems, trucking logistics feeds, incident reports (entered manually through a call center), and eventually weather. This data is run through HERE's proprietary algorithms and traffic models to deliver accurate, real-time traffic data and transit-time products to customers.
Traffic is just one example of the mobility analytics HERE conducts, and the volume of data involved is far from trivial. That data also has to be aggregated in real time: traffic information from two hours ago is of no use on your GPS. The service has become a tool that hundreds of millions of people have come to rely on, so it is critical that the information be consistent and reliable. Emergency services personnel, among others, depend on it to reach locations quickly when lives may be at risk. Over time, HERE will be crunching all of this location-related data to make predictions that help people not just navigate traffic but manage every aspect of how they move through their day.
With an ever-increasing number of data sources, and an incentive to stay ahead of the competition, the Traffic Content team realized they would have to redesign their systems to be more scalable and distributed. In effect, they would have to move to an event-driven architecture (EDA).
At the heart of every EDA is a messaging system that queues and routes high volumes of messages to various application services.
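To make the pattern concrete, here is a minimal in-memory stand-in for such a messaging system in Python. The class and routing keys are invented for illustration; a production EDA would of course use a real broker such as RabbitMQ rather than this toy.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class InMemoryBroker:
    """Toy message broker: routes each published event, by routing key,
    to every handler subscribed to that key. Illustrative only."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, routing_key: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[routing_key].append(handler)

    def publish(self, routing_key: str, event: dict) -> None:
        # Deliver to all subscribers; events with no subscriber are dropped.
        for handler in self._subscribers[routing_key]:
            handler(event)

broker = InMemoryBroker()
received = []
broker.subscribe("traffic.incident", received.append)
broker.publish("traffic.incident", {"road": "I-90", "severity": "major"})
broker.publish("traffic.flow", {"road": "I-94", "speed_mph": 23})  # no subscriber
print(received)  # [{'road': 'I-90', 'severity': 'major'}]
```

The key property this sketch shares with a real broker is decoupling: producers only name a routing key, and services can be added or removed by changing subscriptions, not producer code.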
For HERE (still referred to as NAVTEQ in this early architecture diagram), the traffic models push 800,000 1 KB messages to vFabric RabbitMQ every minute, or roughly 13,300 messages per second. The diagram to the left shows a simplified view of how the data flows:
Predicting that their data needs would only grow, the Traffic Content team designed the new system with headroom: message sizes ranging from 1 KB to 1.5 MB and throughput of up to 60,000 messages per second, all with minimal latency.
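As a rough sanity check on those figures, a back-of-envelope calculation shows what the baseline and target rates imply in bandwidth terms. The variable names are illustrative, and the bandwidth lines assume the smallest 1 KB message size (larger messages scale the numbers up proportionally):

```python
# Baseline load stated above: 800,000 messages of ~1 KB per minute.
baseline_msgs_per_min = 800_000
msg_bytes = 1_024                                   # 1 KB payload assumed
baseline_rate = baseline_msgs_per_min / 60          # messages per second
baseline_mb_s = baseline_rate * msg_bytes / 1e6     # MB per second

# Design target: 60,000 messages per second with headroom.
target_rate = 60_000
target_mb_s = target_rate * msg_bytes / 1e6

print(f"baseline: {baseline_rate:,.0f} msg/s, ~{baseline_mb_s:.1f} MB/s at 1 KB")
print(f"target:   {target_rate:,} msg/s, ~{target_mb_s:.1f} MB/s at 1 KB")
```

At 1 KB the target works out to roughly 61 MB/s sustained; at the 1.5 MB upper message size, even a far lower message rate dominates the bandwidth budget, which is why both dimensions had to be tested.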
vFabric RabbitMQ has a heritage of scaling to meet the most demanding messaging needs of any application and has been deployed at HuffingtonPost Live, Indeed.com, First National Bank, Mercado Libre, 15below.com, and Roblox.com, as well as with ocean data and the world’s largest biometric database.
After trying a couple of products that failed to deliver the robustness and reliability their requirements demanded, HERE turned to RabbitMQ. They found RabbitMQ easy to configure and deploy in their virtual environment, and it handled the larger message sizes without issue.
The development team responsible for product delivery used Spring AMQP to quickly build a simple messaging wrapper library that plugs into the modeling algorithms. Initially, the team ran into an issue with spikes in message volume, but the VMware RabbitMQ engineering team provided a timely patch that resolved the problem before it impacted the HERE project schedule. VMware Professional Services also worked closely with the Traffic Content team on recommendations for the design of message consumers, producers, routing, and disaster recovery.
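A thin wrapper of that kind might look like the following Python sketch. The `ModelPublisher` class, exchange name, and injected-transport interface are all hypothetical, not HERE's actual Spring AMQP code; the point is only that model code hands over a result and stays ignorant of the broker:

```python
import json
from typing import Callable

class ModelPublisher:
    """Hypothetical messaging wrapper: serializes a model result and sends
    it through an injected transport. In production the transport would be
    an AMQP client's publish call; here it is any (destination, bytes)
    callable, which keeps the modeling code broker-agnostic and testable."""

    def __init__(self, send: Callable[[str, bytes], None],
                 exchange: str = "traffic.models"):
        self._send = send            # transport injected by the application
        self._exchange = exchange    # exchange name is an invented example

    def publish_result(self, routing_key: str, result: dict) -> None:
        body = json.dumps(result).encode("utf-8")
        self._send(f"{self._exchange}/{routing_key}", body)

# A list stands in for the broker, as a unit test might do.
sent = []
publisher = ModelPublisher(lambda dest, body: sent.append((dest, body)))
publisher.publish_result("flow.update", {"segment": 42, "speed_kph": 37})
```

Injecting the transport is also what makes it easy to swap brokers or add retry logic later without touching the modeling algorithms themselves.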
The Traffic Content team is continuing to build out the new architecture and will perform a phased roll-out of the RabbitMQ-based EDA over the coming months, scaling to meet their needs for the next several years. Next time you look at traffic on your phone or GPS, think of all the real-time data it takes to make that application work.
For more information on vFabric RabbitMQ: