GitHub is the home of the world’s software developers, with more than 100 million developers and 420 million total repositories across the platform. To keep everything running smoothly and securely, GitHub collects a tremendous amount of data through an in-house pipeline made up of several components. But though it was built for fault tolerance and scalability, the ongoing growth of GitHub led the company to reevaluate the pipeline to ensure it meets both current and future demands.
“We had a scalability problem. Today, we collect about 700 terabytes a day of data, which is heavily used for detecting malicious behavior against our infrastructure and for troubleshooting. This internal system was limiting our growth.”
—Stephan Miehe, GitHub Senior Director of Platform Security
GitHub worked with its parent company, Microsoft, to find a solution. To process the event stream at scale, the GitHub team built a function app that runs on Azure Functions Flex Consumption, a plan recently released for public preview. Flex Consumption delivers fast and large scale-out on a serverless model and supports long function execution times, private networking, instance size selection, and concurrency control.
In a recent test, GitHub sustained 1.6 million events per second using one Flex Consumption app triggered from a network-restricted event hub.
“What really matters to us is that the app scales up and down based on demand. Azure Functions Flex Consumption is very appealing to us because of how it dynamically scales based on the number of messages that are queued up in Azure Event Hubs.”
—Stephan Miehe, GitHub Senior Director of Platform Security

A look back
GitHub’s problem lay in an internal messaging app that orchestrated the flow between the telemetry producers and consumers. The app was originally deployed using Java-based binaries and Azure Event Hubs. But as it began handling up to 460 gigabytes (GB) of events per day, the app was reaching its design limits, and its availability began to degrade.
For best performance, each consumer of the old platform required its own environment and time-consuming manual tuning. In addition, the Java codebase was prone to breakage and hard to troubleshoot, and those environments were getting expensive to maintain as the compute overhead grew.
“We couldn’t accept the risk and scalability challenges of the current solution,” Miehe says. He and his team began to weigh the alternatives. “We were already using Azure Event Hubs, so it made sense to explore other Azure services. Given the simple nature of our need, an HTTP POST request, we wanted something serverless that carries minimal overhead.”
Accustomed to serverless code development, the team focused on comparable Azure-native solutions and arrived at Azure Functions.
“Both platforms are well known for being good at simple data crunching at large scale, but we don’t want to migrate to another product in six months because we’ve reached a ceiling.”
—Stephan Miehe, GitHub Senior Director of Platform Security
A function app can automatically scale the queue based on the volume of logging traffic. The question was how far it could scale. At the time GitHub began working with the Azure Functions team, the Flex Consumption plan had just entered private preview. Built on a new underlying architecture, Flex Consumption supports up to 1,000 partitions and provides a faster target-based scaling experience. The product team built a proof of concept that scaled to more than double the legacy platform’s largest topic at the time, showing that Flex Consumption could handle the pipeline.
“Azure Functions Flex Consumption gives us a serverless solution with 100% of the capacity we need now, plus all the headroom we need as we grow.”
—Stephan Miehe, GitHub Senior Director of Platform Security
Making a good solution great
GitHub joined the private preview and worked closely with the Azure Functions product team to see what else Flex Consumption could do. The new function app is written in Python and consumes events from Event Hubs. It consolidates large batches of messages into one large message and sends it on to the consumers for processing.
Finding the right size for each batch took some experimentation, because every function execution carries at least a small percentage of overhead. At peak usage times, the platform processes more than 1 million events per second. Knowing this, the GitHub team needed to find the sweet spot for function execution. Too large a batch and there’s not enough memory to process it. Too small a batch and it takes too many executions to work through the backlog, which slows performance.
The right number proved to be 5,000 messages per batch. “Our execution times are already extremely low, in the 100 to 200 millisecond range,” Miehe reports.
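The consolidation pattern is straightforward to reproduce. Below is a minimal sketch, not GitHub’s actual code, of a batch-consuming function in the Azure Functions Python v2 programming model; the hub name `telemetry`, the `EVENTHUB_CONNECTION` app setting, and the `forward_to_consumers` helper are placeholders, and the per-execution batch size itself is set through the Event Hubs extension settings in host.json (for example, `maxEventBatchSize`) rather than in code.

```python
# Minimal sketch (not GitHub's actual code) of an Event Hubs-triggered function
# that receives a batch of events per execution and consolidates them into one
# message. Hub name, connection setting, and downstream sender are placeholders.
import json
from typing import List

import azure.functions as func

app = func.FunctionApp()

@app.event_hub_message_trigger(
    arg_name="events",
    event_hub_name="telemetry",         # placeholder event hub name
    connection="EVENTHUB_CONNECTION",   # app setting holding the connection string
    cardinality=func.Cardinality.MANY,  # deliver a batch per execution
)
def consolidate(events: List[func.EventHubEvent]) -> None:
    # Merge the batch into a single payload before handing it to consumers.
    merged = [json.loads(e.get_body().decode("utf-8")) for e in events]
    payload = json.dumps(merged).encode("utf-8")
    forward_to_consumers(payload)  # hypothetical downstream delivery step

def forward_to_consumers(payload: bytes) -> None:
    # Stand-in for the real delivery step (e.g., an HTTP POST to consumers).
    pass
```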
The solution has built-in flexibility. The team can vary the number of messages per batch for different use cases and can trust that the target-based scaling capabilities will scale out to the ideal number of instances. In this scaling model, Azure Functions determines the number of unprocessed messages on the event hub and then immediately scales to an appropriate instance count based on the batch size and partition count. At the upper bound, the function app scales up to one instance per event hub partition, which can work out to 1,000 instances for very large event hub deployments.
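As a rough illustration of that arithmetic (a simplification under stated assumptions, not the service’s actual algorithm), the target instance count can be thought of as the unprocessed backlog divided by the batch size, capped at one instance per partition:

```python
import math

def target_instances(unprocessed: int, batch_size: int, partitions: int) -> int:
    # Simplified model of target-based scaling: aim for enough instances to
    # drain the backlog in one round of executions, never exceeding one
    # instance per event hub partition.
    return min(math.ceil(unprocessed / batch_size), partitions)

# For example: a backlog of 1,000,000 events in 5,000-message batches on a
# 1,000-partition hub targets 200 instances.
print(target_instances(1_000_000, 5_000, 1_000))  # 200
```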
“If other customers want to do something similar and trigger a function app from Event Hubs, they need to be very deliberate about the number of partitions to use based on the size of their workload; if you don’t have enough, you’ll constrain consumption.”
—Stephan Miehe, GitHub Senior Director of Platform Security
Azure Functions supports several event sources in addition to Event Hubs, including Apache Kafka, Azure Cosmos DB, Azure Service Bus queues and topics, and Azure Queue Storage.
Reaching behind the virtual network
The function as a service model frees developers from the overhead of managing many infrastructure-related tasks. But even serverless code can be constrained by the limitations of the networks where it runs. Flex Consumption addresses the issue with improved virtual network (VNet) support. Function apps can be secured behind a VNet and can reach other services secured behind a VNet, without degrading performance.
As an early adopter of Flex Consumption, GitHub benefited from improvements being made behind the scenes to the Azure Functions platform. Flex Consumption runs on Legion, a newly architected, internal platform as a service (PaaS) backbone that improves network capabilities and performance for high-demand scenarios. For example, Legion can inject compute into an existing VNet in milliseconds: when a function app scales up, each new compute instance that’s allocated starts up and is ready for execution, including outbound VNet connectivity, within 624 milliseconds (ms) at the 50th percentile and 1,022 ms at the 90th percentile. That’s how GitHub’s message processing app can reach Event Hubs secured behind a virtual network without incurring significant delays. In the past 18 months, the Azure Functions platform has reduced cold start latency by approximately 53% across all regions and for all supported languages and platforms.
Working through challenges
This project pushed the boundaries for both the GitHub and Azure Functions engineering teams. Together, they worked through several challenges to achieve this level of throughput:
- In the first test run, GitHub had so many messages pending for processing that it caused an integer overflow in the Azure Functions scaling logic, which was immediately fixed.
- In the second run, throughput was severely limited due to a lack of connection pooling. The team rewrote the function code to properly reuse connections from one execution to the next (the pattern is sketched after this list).
- At about 800,000 events per second, the system appeared to be throttled at the network level, but the cause was unclear. After weeks of investigation, the Azure Functions team found a bug in the receive buffer configuration of the Azure SDK Advanced Message Queuing Protocol (AMQP) transport implementation. The Azure SDK team promptly fixed it, allowing GitHub to push past 1 million events per second.
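The connection-pooling fix boils down to a standard serverless practice: create outbound clients once at module scope so a warm instance reuses them across executions. Here is a minimal sketch (illustrative only; the client choice and the `post_batch` helper are assumptions, not GitHub’s code):

```python
# Sketch of the connection-reuse pattern: build the outbound HTTP client once
# at module scope so a warm instance reuses pooled TCP/TLS connections across
# executions instead of reconnecting on every call.
import urllib3

# Initialized once per worker process; shared by all executions on the instance.
_http = urllib3.PoolManager(maxsize=64)

def post_batch(url: str, payload: bytes) -> int:
    # Creating a new client inside the function body on every execution was the
    # anti-pattern that starved throughput; this call reuses pooled connections.
    response = _http.request(
        "POST", url, body=payload, headers={"Content-Type": "application/json"}
    )
    return response.status
```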
Best practices in meeting a throughput milestone
With more power comes more responsibility, and Miehe acknowledges that Flex Consumption gave his team “a lot of knobs to turn,” as he put it. “There’s a balance between flexibility and the effort you have to put in to set it up right.”
To that end, he recommends testing early and often, a familiar part of the GitHub pull request culture. The following best practices helped GitHub meet its milestones:
- Batch it if you can: Receiving messages in batches boosts performance. Processing thousands of event hub messages in a single function execution significantly improves system throughput.
- Experiment with batch size: Miehe’s team tested batches as large as 100,000 events and as small as 100 before landing on 5,000 as the maximum batch size for the fastest execution.
- Automate your pipelines: GitHub uses Terraform to build the function app and the Event Hubs instances. Provisioning both components together reduces the amount of manual intervention needed to manage the ingestion pipeline. Plus, Miehe’s team could iterate very quickly in response to feedback from the product team.
The GitHub team continues to run the new platform in parallel with the legacy solution while it monitors performance and determines a cutover date.
“We’ve been running them side by side deliberately to find where the ceiling is,” Miehe explains.
The team was delighted. As Miehe says, “We’re pleased with the results and will soon be sunsetting all the operational overhead of the old solution.”
Find solutions with Azure Functions
