
Distributed Aggregation with Counting Window

Wang Yue

Chair for Data Engineering Systems

Hasso Plattner Institute

Office: F-1.06
Tel.: +49 (0)331 5509-280
Email: Wang.Yue (at) hpi.de
Links: Homepage

Supervisor: Prof. Dr. Tilmann Rabl

We focus on efficient window aggregation for stream processing in distributed networks. In particular, we are interested in approaches that share multiple queries among different nodes and that slice windows across multiple queries effectively and efficiently.

Motivation

In the past decade, we have observed an explosion of Internet of Things (IoT) applications in diverse domains [1], from environmental condition monitoring [2] to industrial process control [3] and health care [4]. These applications are usually organized in a master-worker architecture: streams of data are produced by IoT devices and sent over a network to a data center, where they are processed by a stream processing engine (SPE).

By 2020, the number of IoT devices had grown beyond 31 billion, and the networks they form produce large amounts of data. The volume of data transferred between the devices and the SPE can cause high network costs or high processing latency due to network bottlenecks. To address this issue, fog and edge computing techniques can be used, which off-load part of the processing that would otherwise run in the data center to machines and devices closer to the data sources.

Research Problem

Current SPEs such as Apache Flink [5], Apache Spark Streaming [6], and Apache Storm [7] are designed for processing large-scale, high-velocity data streams. They achieve this through efficient data aggregation, handling millions of events per hour [8, 9]. Although they are widely used in industry and research, they do not exploit the benefits of edge and fog computing: all data is first sent through the network to a data center and only then processed by the distributed SPE.

An example of distributed stream processing is a smart driving system that collects data from cars to support traffic management and traffic estimation [10]. With such a system, we can alert drivers to dangerous situations or help them avoid traffic congestion [11].

Solution

To process data from IoT networks, current SPEs commonly centralize the raw data from distributed sensors in a single data center and process it in their compute cluster. All raw data produced by the sensors must therefore be transferred over the network to the data center, which leads to unnecessary network cost and high latency. The most common way for SPEs to process an unbounded stream is to divide the input into finite windows and process each window separately [19]. Depending on the query, SPEs define windows with different window measures, window types, and window aggregation functions, which require different processing approaches. All time-based windows, including tumbling, sliding, and session windows, can already be processed efficiently with distributed aggregation, because every node can determine window boundaries locally, pre-aggregate, and send only partial results to the center. The counting window, which is based on a count measure, does not work well with distributed aggregation: local nodes cannot create and end counting windows on their own, because the count is a global property of the stream, so systems fall back to centralized aggregation. Our idea is to create counting windows on the local nodes and predict their window ends, which enables distributed aggregation for counting windows, reduces computational resource consumption, and improves performance.
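
To illustrate why time-based windows decompose so well, consider a decomposable aggregate such as the average: each local node only needs to maintain a small, mergeable partial state and ship that state, rather than the raw tuples, to the center when the window closes. The following minimal Java sketch shows such a partial state; all names are our illustration, not part of any SPE:

    // A decomposable (mergeable) aggregate: the property that lets time-based
    // windows be pre-aggregated locally. Each node folds raw tuples into this
    // small state and ships only the state, not the tuples, to the center.
    public final class PartialAvg {
        private double sum;  // running sum of the values seen so far
        private long count;  // number of values folded in so far

        // Fold one raw tuple into the local partial state.
        public void add(double value) {
            sum += value;
            count++;
        }

        // Merge a partial state received from another node (associative).
        public void merge(PartialAvg other) {
            sum += other.sum;
            count += other.count;
        }

        // Finalize at the center node once all partials are merged.
        public double result() {
            return count == 0 ? 0.0 : sum / count;
        }
    }

The merge step itself works just as well for counting windows; what is missing is a way for a node to know when the global count has reached the window size, which is exactly the coordination problem our solution addresses.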

Related Work

We now present related work on stream processing in distributed networks and on window aggregation across multiple queries. Madden et al. [12] propose TAG, a tree-structured distributed data aggregation service, with a strong focus on building network routes between nodes. Cougar [13, 14] and LEACH [15] use a cluster-structured network in which a cluster head is elected among the nodes and aggregates the windows of the other nodes. PEGASIS [16] introduces a chain-structured approach in which every node is a priori aware of all other nodes. SlideSide [17] proposes an approach for incremental processing in multi-query scenarios that maintains a running prefix/suffix scan over the input stream. FlatFAT [18] uses a tree structure to store partial results.
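
To make the last point concrete: FlatFAT keeps per-slice partial aggregates in the leaves of a complete binary tree and caches combined aggregates in the inner nodes, so updating one slice costs O(log n) and the full-window aggregate can be read from the root in O(1). The toy Java version below is our illustration of the idea for a sum aggregate, not the authors' code:

    // Toy FlatFAT-style aggregate tree for a sum aggregate. The tree is kept
    // in a 1-based array in heap layout: node i has children 2i and 2i+1,
    // and the leaves start at index 'leaves'. Illustrative sketch only.
    public final class FlatFatSketch {
        private final double[] tree;
        private final int leaves; // number of slices, a power of two for simplicity

        public FlatFatSketch(int numLeaves) {
            this.leaves = numLeaves;
            this.tree = new double[2 * numLeaves];
        }

        // Replace the partial of one slice and repair the path to the root.
        public void update(int slice, double partial) {
            int i = leaves + slice;
            tree[i] = partial;
            for (i /= 2; i >= 1; i /= 2) {
                tree[i] = tree[2 * i] + tree[2 * i + 1];
            }
        }

        // The aggregate over the whole window is cached at the root.
        public double query() {
            return tree[1];
        }
    }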

In summary, many stream processing techniques focus on either sensor networks or multi-query processing, but none of them combine the two, even though this combination arises in many realistic scenarios. Moreover, they concentrate on sliding and tumbling windows and offer no solution for counting windows.

Methodology

Compared with traditional stream processing, the data sources in a sensor network are distributed, so we cannot simply centralize all data at the center node. State-of-the-art approaches therefore process queries locally to reduce network overhead and relieve the load on the center node. However, current window aggregation techniques do not fully fit sensor networks. To handle counting windows in distributed aggregation, we propose a two-round solution. We add a new node, the Synchronization Server (SYS), which is separate from the center node and synchronizes information between the local nodes. In the first round, each local node creates a local window whose end must be predicted: the local nodes send the timestamps and count IDs of the tuples in their local windows to the SYS, the SYS predicts the end of each window based on this information, and it then informs the local nodes to close their local windows. In the second round, the local nodes compute the partial result of each local window; all partial results are collected at the center node, which computes the final result from them.
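
The following Java sketch illustrates the first round from the SYS's perspective. The class and method names, and the simple rate-based predictor, are our assumptions for illustration, not the actual implementation:

    import java.util.HashMap;
    import java.util.Map;

    // Round one, seen from the Synchronization Server (SYS): local nodes report
    // how many tuples they have buffered and when, and the SYS extrapolates the
    // combined arrival rate to predict when the global count reaches the window
    // size. It then broadcasts that cutoff so every node can close its local
    // window. All names and the rate-based predictor are illustrative assumptions.
    public final class SyncServerSketch {
        private final long windowSize; // e.g. 1000 tuples
        private final Map<String, long[]> reports = new HashMap<>(); // nodeId -> {timestampMs, localCount}

        public SyncServerSketch(long windowSize) {
            this.windowSize = windowSize;
        }

        // A local node reports the timestamp and count of its buffered tuples.
        public void report(String nodeId, long timestampMs, long localCount) {
            reports.put(nodeId, new long[] {timestampMs, localCount});
        }

        // Predict the wall-clock time at which the global count hits windowSize.
        public long predictWindowEndMs(long windowStartMs) {
            long total = 0;
            long latest = windowStartMs;
            for (long[] r : reports.values()) {
                latest = Math.max(latest, r[0]);
                total += r[1];
            }
            long elapsedMs = Math.max(1, latest - windowStartMs);
            double ratePerMs = (double) total / elapsedMs; // combined tuples per ms
            long remaining = Math.max(0, windowSize - total);
            if (ratePerMs <= 0) return Long.MAX_VALUE;     // no traffic observed yet
            return latest + (long) Math.ceil(remaining / ratePerMs);
        }
    }

In the second round, the local nodes close their windows at the predicted cutoff, compute partial aggregates (for instance with the mergeable state sketched earlier), and ship them to the center node for the final merge.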

Evaluation

We compare our solution with the centralized system discussed above, using the same network structure. The test query is 'output the average speed of every 1000 tuples'. We measure three metrics for both systems: latency (the time the system takes to output a result), throughput (the number of events processed per second), and network overhead (the network traffic each system generates). The experiments show that our system outperforms the baseline by a factor of 10 while reducing network overhead by 75%.
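
For reference, the test query expressed against a centralized SPE reads roughly like the following Flink [5] snippet. This is a sketch: the source is a placeholder, and in this formulation every raw tuple must cross the network before the window can close, which is the cost our distributed solution avoids:

    import org.apache.flink.api.common.functions.AggregateFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    // Baseline formulation of the test query: a global counting window of
    // 1000 tuples over the speed readings, evaluated centrally by the SPE.
    public final class CountingWindowBaseline {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Placeholder source; in the experiments this is the sensor stream.
            DataStream<Double> speeds = env.fromElements(50.0, 62.5, 48.0, 70.0);

            speeds.countWindowAll(1000) // 'every 1000 tuples'
                  .aggregate(new AggregateFunction<Double, double[], Double>() {
                      @Override public double[] createAccumulator() { return new double[2]; }
                      @Override public double[] add(Double v, double[] acc) {
                          acc[0] += v; acc[1] += 1; return acc;
                      }
                      @Override public Double getResult(double[] acc) { return acc[0] / acc[1]; }
                      @Override public double[] merge(double[] a, double[] b) {
                          a[0] += b[0]; a[1] += b[1]; return a;
                      }
                  })
                  .print();

            env.execute("average speed per 1000 tuples");
        }
    }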

References

  1. Prasanna, Srinivasa, and Srinivasa Rao. "An overview of wireless sensor networks applications and security." International Journal of Soft Computing and Engineering (IJSCE) 2.2 (2012): 2231-2307.
  2. Kelly, Sean Dieter Tebje, Nagender Kumar Suryadevara, and Subhas Chandra Mukhopadhyay. "Towards the implementation of IoT for environmental condition monitoring in homes." IEEE Sensors Journal 13.10 (2013): 3846-3853.
  3. Ungurean, Ioan, Nicoleta-Cristina Gaitan, and Vasile Gheorghita Gaitan. "An IoT architecture for things from industrial environment." 2014 10th International Conference on Communications (COMM). IEEE, 2014.
  4. Catarinucci, Luca, et al. "An IoT-aware architecture for smart healthcare systems." IEEE internet of things journal 2.6 (2015): 515-526.
  5. Carbone, Paris, et al. "Apache Flink: Stream and batch processing in a single engine." Bulletin of the IEEE Computer Society Technical Committee on Data Engineering 36.4 (2015).
  6. Zaharia, Matei, et al. "Discretized streams: Fault-tolerant streaming computation at scale." Proceedings of the twenty-fourth ACM symposium on operating systems principles. 2013.
  7. Toshniwal, Ankit, et al. "Storm@twitter." Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data. 2014.
  8. Nagmote, Snehal, and Pallavi Phadnis. "Massive scale data processing at Netflix using Flink." Flink Forward Conference. 2019.
  9. Ichinose, Ayae, et al. "A study of a video analysis framework using Kafka and spark streaming." 2017 IEEE International Conference on Big Data (Big Data). IEEE, 2017.
  10. Protschky, Valentin, Christian Ruhhammer, and Stefan Feit. "Learning traffic light parameters with floating car data." 2015 IEEE 18th International Conference on Intelligent Transportation Systems. IEEE, 2015.
  11. Birrell, A. Stewart, Mark Fowkes, and Paul A. Jennings. "Effect of using an in-vehicle smart driving aid on real-world driver performance." IEEE Transactions on Intelligent Transportation Systems 15.4 (2014): 1801-1810.
  12. Madden, Samuel, Michael Franklin, Joseph Hellerstein, and Wei Hong. "TAG: a Tiny AGgregation service for ad-hoc sensor networks." OSDI. ACM, 2002: 131-146.
  13. Gehrke, Johannes, and Samuel Madden. "Query processing in sensor networks." IEEE Pervasive Computing 3.1 (2004): 46-55.
  14. Yao, Yong, and Johannes Gehrke. "The cougar approach to in-network query processing in sensor networks." ACM SIGMOD Record 31.3 (2002): 9-18.
  15. Heinzelman, Wendi, Anantha Chandrakasan, and Hari Balakrishnan. "An application-specific protocol architecture for wireless microsensor networks." IEEE Transactions on Wireless Communications 1.4 (2002): 660-670.
  16. Lindsey, Stephanie, Cauligi Raghavendra, and Krishna Sivalingam. "Data gathering algorithms in sensor networks using energy metrics." IEEE Transactions on Parallel and Distributed Systems 13.9 (2002): 924-935.
  17. Theodorakis, Georgios, Peter Pietzuch, and Holger Pirk. "SlideSide: a fast incremental stream processing algorithm for multiple queries." Proceedings of the 23rd International Conference on Extending Database Technology (EDBT). 2020.
  18. Tangwongsan, Kanat, et al. "General incremental sliding-window aggregation." Proceedings of the VLDB Endowment 8.7 (2015): 702-713.
  19. Akidau, Tyler, Slava Chernyak, and Reuven Lax. Streaming Systems: The What, Where, When, and How of Large-Scale Data Processing. O'Reilly Media, Inc., 2018.