There’s a revolution underway in how people work with data. Streaming data is no longer seen as a special use case – and that’s a good thing because streaming is a better fit to the way life happens. Innovative technologies for robust stream processing are changing what you can reasonably expect to do with stream-based applications, particularly when low latency is required. Apache Flink is one such emerging technology, and its popularity is growing.
Innovation by those who design and develop new technologies is, however, only half of an effective data revolution. For significant change to occur, it’s not enough for the builders of disruptive technologies to have vision: the users of those technologies must have vision as well if the revolution is to have real impact.
Using specific examples from a variety of projects involving streaming data, this talk focuses on how innovation with real impact can happen, including the shift in thinking and in engineering culture that underlies successful change in how we work with data at scale. For instance, people are beginning to recognize that stream-first architectures are useful well beyond real-time processing. Another big idea has to do with where data lives: the big data revolution showed us that data structures spanning more than one machine are a good thing – now a new revolution involves data structures that span more than one continent (geo-distribution) and that extend from on-premises to cloud.
We’ll also look at how streaming data supports flexible practices, such as a microservices style of design, that have huge implications for IoT, A/B testing, deployment of machine learning models, and other large-scale analytical workflows. Finally, we will see how the right technologies and the right design can make life easier for developers and for system administrators by creating a separation of concerns.
This talk should be useful for audiences at all levels of experience.