Seminar: DataFlow SuperComputing for BigData DeepAnalytics

Tuesday, 3 December 2019 - 14:00

Speaker: Prof. Veljko Milutinovic

Via Roma 56 (S. Niccolò building), Siena, at 14:00 sharp


This seminar analyses the essence of DataFlow SuperComputing, defines its advantages, and sheds light on the related programming model, which corresponds to a recent Intel patent about Intel's future dataflow processor.

According to Alibaba and Google, as well as the open literature, the DataFlow paradigm, compared to the ControlFlow paradigm, offers:

(a) speedups of at least 10x to 100x, and sometimes much more (depending on the algorithmic characteristics of the most essential loops and the spatial/temporal characteristics of the Big Data stream);
(b) potential for better precision (depending on the characteristics of the optimizing compiler and the operating system);
(c) power reduction of at least 10x (depending on the clock speed and the internal architecture); and
(d) size reduction of well over 10x (depending on the chip implementation and the packaging technology).

However, the programming paradigm is different and has to be mastered.

This presentation explains the programming paradigm, using Maxeler as an example, and sheds light on the ongoing research, which, in the case of the speaker, was highly influenced by four different Nobel Laureates:

(a) from Richard Feynman it was learned that future computing paradigms will be successful only if the amount of data communication is minimized;
(b) from Ilya Prigogine it was learned that the entropy of a computing system is minimized if spatial and temporal data are decoupled;
(c) from Daniel Kahneman it was learned that the system software should offer options related to approximate computing; and
(d) from Andre Geim it was learned that the system software should be able to trade latency against precision.

The approach that satisfies all of the above requirements is referred to as the Ultimate DataFlow. The existing Maxeler programming model is applicable to the Ultimate DataFlow as well.
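The core idea of the paradigm can be sketched without Maxeler's toolchain: a dataflow program describes a graph of operations that data streams through, rather than a loop that a control-flow CPU steps through. The following minimal Python sketch uses generators as stand-in pipeline stages; the stage names and values are illustrative only and are not Maxeler's actual MaxJ API.

```python
# Conceptual illustration of the dataflow idea (NOT Maxeler's MaxJ API):
# each stage transforms elements as they flow past, and the pipeline is
# described first, then executed by streaming data through it.

def scale(stream, factor):
    """Pipeline stage: multiply each element as it flows past."""
    for x in stream:
        yield x * factor

def offset(stream, delta):
    """Pipeline stage: add a constant as it flows past."""
    for x in stream:
        yield x + delta

# Build the dataflow graph once; nothing is computed at this point.
source = range(5)
pipeline = offset(scale(source, 10), 1)

# Streaming the data through the graph produces the results.
print(list(pipeline))  # -> [1, 11, 21, 31, 41]
```

In a real dataflow engine the stages are laid out spatially in hardware and all operate concurrently, which is where the speedups and power savings described above come from; the sketch only conveys the programming model of describing a computation as a stream-transforming graph.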

The presentation concludes with the latest achievements of Maxeler Technologies over the current and previous year, such as the emulation of quark-related processes, available through Amazon AWS and endorsed by Nobel Laureate Jerome Friedman (awarded the Nobel Prize for the discovery of the quark), and tensor calculus applicable to the emulation of processes related to quasicrystals (discovered by Nobel Laureate Dan Shechtman). It also includes examples related to finance (JPMorgan and Citibank) and trading (the Chicago Mercantile Exchange (CME) and NASDAQ), as well as examples related to math algorithms, image processing, machine learning, and artificial intelligence. All of these examples prepare attendees for the utilization of the future DataFlow engine of Intel, announced through a recent Intel patent, which was accompanied by an Intel press release stating that DataFlow represents the major paradigm shift in computing in the century after von Neumann (available on request).

This seminar also offers plenty of hands-on opportunities for attendees, related to all of the subjects mentioned above. The first 45 minutes of the seminar correspond to the invited keynote talk at the International SuperComputing Conference, ExaScale Track, in Frankfurt, Germany, in June 2018.

The opening presentation is followed by two half-day hands-on workshops for students who would like to become fluent in the MaxJ dataflow programming language.