► TRY THIS YOURSELF: https://cnfl.io/flink-java-apps-module-1

What is serialization in Flink, and what are the different types? Serialization is a critical topic when working with Flink because the framework often serializes things you might not expect. Flink uses two types of serialization: internal and external, and within each type there are multiple formats that might be used. Your choices can affect message size, flexibility, schema evolution, and more. This video outlines the different ways Flink uses serializers and shows how to implement a few of the basics.

For a complete immersive, hands-on experience: https://cnfl.io/flink-java-apps-module-1
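To make the internal side concrete, here is a minimal sketch. The OrderEvent class and its fields are hypothetical names chosen for illustration, and the code assumes Flink's core APIs are on the classpath. A type that follows Flink's POJO rules is handled by the efficient internal POJO serializer, while types that break the rules fall back to the slower, less evolvable Kryo serializer; printing the TypeInformation reveals which one Flink will pick.

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;

public class SerializationCheck {

    // Follows Flink's POJO rules: public (and static, since it is nested),
    // a public no-arg constructor, and fields that are public or exposed
    // via getters/setters. Flink's internal POJO serializer can handle
    // this type and supports schema evolution.
    public static class OrderEvent {
        public String orderId;
        public double amount;

        public OrderEvent() {}  // required no-arg constructor
    }

    public static void main(String[] args) {
        // For a valid POJO this prints a PojoType; a type that breaks the
        // rules shows up as a GenericType, meaning Kryo will be used.
        TypeInformation<OrderEvent> info = TypeInformation.of(OrderEvent.class);
        System.out.println(info);
    }
}
```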
Apache Flink is a powerful engine built for processing streaming data in a distributed environment. Rather than accumulating data into batches to be processed later, Apache Flink lets us process data as it arrives, applying stateful transformations along the way. This makes it an invaluable tool for today's streaming needs.

So why Flink? Peel away the surface of a modern system and you'll often find a mountain of data being processed. It wasn't always this way. Not long ago, applications were smaller and the data tended to be static. Processing was performed on demand whenever a query was made, and if an application required upfront computation, it was done with a batch job running against a relatively small data set.

Today, data sets have grown to staggering sizes, too large for a simple batch job to handle. Meanwhile, users are no longer content to wait hours or even minutes for a batch job to process their data; they want results now. As a result, developers are increasingly turning to distributed streaming solutions to process data in real time.

Learn how to use Flink and build Flink applications: this course will introduce students to Apache Flink through a series of hands-on exercises. Students will build a basic Java application that consumes a collection of Apache Kafka data streams, transforms the data with Flink, and pushes it back into new Kafka topics. This simple use case will give students many of the tools they need to start building production-grade Apache Flink applications. Learn more: https://cnfl.io/flink-java-apps-module-1
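As a rough sketch of the kind of pipeline the course builds toward, the job below reads string records from one Kafka topic, applies a trivial transformation, and writes the results to another topic. The topic names, broker address, group id, and class name are placeholders rather than the course's exact exercise code, and it assumes the flink-streaming-java and flink-connector-kafka dependencies are available.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaToKafkaJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Consume string records from an input topic (names are placeholders).
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("input-topic")
                .setGroupId("flink-course-demo")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> stream = env.fromSource(
                source, WatermarkStrategy.noWatermarks(), "kafka-source");

        // A trivial stateless transformation; real jobs would apply
        // stateful operators here.
        DataStream<String> transformed = stream.map(String::toUpperCase);

        // Produce the transformed records to a new topic.
        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("output-topic")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                .build();

        transformed.sinkTo(sink);
        env.execute("kafka-to-kafka-demo");
    }
}
```

The SimpleStringSchema used on both ends is the external serialization choice here; swapping in an Avro or JSON schema is where the serialization decisions discussed above come into play.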