Apache Kafka Reaches 1.1 Trillion Messages Per Day


LinkedIn’s deployment of Apache Kafka has surpassed 1.1 trillion messages per day, a milestone that offers yet another example of how Internet-scale services are changing perspectives on enterprise-level message brokers.

Apache Kafka is a low-latency distributed messaging system. Earlier this year, LinkedIn’s Navina Ramesh wrote for InApps about Apache Samza, LinkedIn’s framework for stream processing. She described how, back in 2006, LinkedIn started using Hadoop for batch processing. She wrote that unlike Hadoop, which is optimized for throughput, Kafka is optimized for low-latency messaging:

We built a processing system on top of Kafka, allowing us to react to the messages — to join, filter, and count the messages. The new processing system, Apache Samza, solved our batch processing latency problem and has allowed us to process data in near real time.
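To make the join/filter/count idea concrete, here is a minimal sketch of a processing loop built with the standard Apache Kafka Java consumer rather than Samza itself; the bootstrap address, the "user-activity" topic, and the profile-update predicate are assumptions for illustration, not details of LinkedIn’s actual pipeline.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ActivityCounter {
    public static void main(String[] args) {
        // Hypothetical connection and topic details -- adjust for a real cluster.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "activity-counter");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        Map<String, Long> countsByUser = new HashMap<>();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("user-activity"));
            while (true) {
                // Poll is the low-latency path: records arrive as they are produced.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    // "Filter": keep only profile-update events (illustrative predicate).
                    if (record.value().contains("PROFILE_UPDATE")) {
                        // "Count": tally events per user key.
                        countsByUser.merge(record.key(), 1L, Long::sum);
                    }
                }
            }
        }
    }
}
```

Samza takes this same consume-and-transform pattern and adds managed partitioning, local state, and checkpointing so that such tasks can run reliably at LinkedIn’s scale.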

The Origin of Apache Kafka

The story behind Kafka and Samza also speaks to the roots of Confluent, one of the fastest-growing startups in the business. In a post yesterday, Confluent co-founder Neha Narkhede wrote about how she, Jay Kreps and Jun Rao created the system that is now Kafka. Prior to Kafka, LinkedIn ran on legacy technology that made data collection and research difficult. By observing the limitations of those older systems, Narkhede, Rao and Kreps were able to design Kafka to meet today’s technology demands.


Today, Kafka is used in data centers across the world, including at Netflix, where the ability to sync and access data worldwide is crucial. When Kafka went into production at LinkedIn in July 2010, it originally carried user activity data. By 2011, Kafka was handling 1 billion messages per day. Over time, Kafka came to capture all of LinkedIn’s data: it began with small changes, such as a user updating a profile or adding a work connection, and eventually grew to encompass search graphs, LinkedIn’s Hadoop clusters, and its data warehouse.

The Future of Kafka

Eventually, Narkhede, Kreps and Rao left LinkedIn to found Confluent. According to Narkhede’s post, Kafka downloads have increased sevenfold over the last year. Confluent offers developers a tiered subscription for deploying Kafka at scale for enterprise-level tasks, though the project itself is open source.


Both LinkedIn and Confluent are active in the open source community and committed to supporting and developing Kafka over the long term. LinkedIn’s use of Kafka has far surpassed its creators’ expectations: at its inception it was the largest project of its kind to offer companies a way to harness data reliably at scale.

LinkedIn’s original legacy platform did not allow for the collection of user-activity data or log data, which is crucial to the long-term success of any modern business. Since implementing Kafka, these data sources can be collected, researched and analyzed. Kafka makes data available in real time, allowing stream processing and analytics to be built with ease. LinkedIn continues to develop Kafka to suit its needs, writing custom libraries that layer on top of the core Kafka libraries. Understanding how users navigate, consume, populate and curate content on a platform such as LinkedIn is crucial.
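To show how activity data of this kind might flow into Kafka in the first place, here is a hedged sketch using the standard Java producer client; the topic name, record key, and JSON payload are illustrative assumptions, not LinkedIn’s internal publishing libraries.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ActivityPublisher {
    public static void main(String[] args) {
        // Hypothetical cluster address and settings -- adjust for a real deployment.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all"); // wait for replicas, trading a little latency for durability
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by user ID keeps all of one user's events in the same partition,
            // which preserves their order for downstream per-user analytics.
            String payload = "{\"type\":\"PROFILE_UPDATE\",\"field\":\"headline\"}";
            producer.send(new ProducerRecord<>("user-activity", "user-42", payload));
            producer.flush();
        }
    }
}
```

In practice, the custom libraries mentioned above would wrap a producer like this with company-specific schemas and conventions.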


Running Kafka as an alternative to a standard message broker allows businesses to handle their data infrastructure with a focus on scalability and security. As more applications move to the cloud or into containers, harnessing logs and user data is vital to retaining users over the long term. Kafka presents itself as a standout choice for enterprises and small businesses alike to manage their streaming data.


Feature image: “LinkedIn Chocolates” by Nan Palmero is licensed under CC BY 2.0.



Source: InApps.net
