Keynote Speakers

17/05 at 09.00

Rajiv Ranjan

The Osmotic Computing Approach: Integrating Internet of Things and Distributed Learning

Internet of Things devices, along with the large data volumes that such devices can potentially generate, can have a significant impact on our lives, fuelling the development of critical next-generation services and applications in a variety of domains (e.g., health care, smart grids, finance, disaster management, agriculture, transportation, and water management). Artificial Intelligence technologies, such as Distributed Learning and Training, are finding application in multiple IoT domains, driven by the availability of diverse and large datasets. One such example is the advances in medical diagnostics and prediction that use deep learning technology to improve human health. However, the timely and reliable transfer of large data streams (a requirement of deep learning technologies for achieving high accuracy) to centralized locations, such as cloud data centre environments, is seen as a key limitation to expanding the application horizons of such technologies.

To this end, various paradigms, including osmotic computing, have been proposed that promote the distribution of data analysis tasks across cloud and edge computing environments. However, these existing paradigms fail to provide a detailed account of how technologies such as distributed deep learning can be orchestrated to take advantage of cloud, edge, and mobile edge environments in a holistic manner. This keynote analyses the different algorithmic and programming research challenges involved in developing holistic and distributed learning algorithms that are resource- and data-aware and can account for the underlying heterogeneous data models, resource (cloud vs. edge vs. mobile edge) models, and data availability while executing, for example by trading accuracy for execution time.

The keynote will:

  1. Introduce the fundamental concepts of the Osmotic computing paradigm.
  2. Give an overview of the research and programming challenges involved in composing and orchestrating complex distributed learning algorithms and workflows in the (cloud-edge) Osmotic computing paradigm.
  3. Present a novel approach for training one Distributed Deep Learning (DDL) model on the hardware of thousands of mid-sized IoT and Edge devices across the world, rather than on GPU clusters available within a cloud data centre.
  4. Discuss our initial experimental validation using the United Kingdom’s largest IoT infrastructure, namely the Urban Observatory (http://www.urbanobservatory.ac.uk/).

Professor Rajiv Ranjan is an Australian-British computer scientist of Indian origin, known for his research in Distributed Systems (Cloud Computing, Big Data, and the Internet of Things). He is University Chair Professor for Internet of Things research in the School of Computing at Newcastle University, United Kingdom. He is an internationally established scientist in the area of Distributed Systems, having published about 250 scientific papers. He has secured more than $32 million AUD (£16+ million GBP) in competitive research grants from both public and private agencies. He is an innovator with strong and sustained academic and industrial impact and a globally recognized R&D leader with a proven track record. He serves on the editorial boards of top-quality international journals, including IEEE Transactions on Computers (2014-2016), IEEE Transactions on Cloud Computing, ACM Transactions on the Internet of Things, The Computer Journal (Oxford University Press), Computing (Springer), and Future Generation Computer Systems. He led the Blue Skies section (department, 2014-2019) of IEEE Cloud Computing, where his principal role was to identify and write about the most important, cutting-edge research issues at the intersection of multiple interdependent disciplines within the distributed systems research area, including the Internet of Things, Big Data Analytics, Cloud Computing, and Edge Computing. He is one of the most highly cited authors in computer science and software engineering worldwide (h-index 67, g-index 216, and 23,000+ Google Scholar citations; h-index 35 and 7,300+ Scopus citations).

17/05 at 15.00

Ilkay Altintas

Composable Systems for AI-Integrated Scientific Computing

Scientific computing increasingly involves machine learning and artificial intelligence driven methods that require specialized capabilities for distributed data, networking, and computing across the digital continuum. Such distributed architectures built around the composability of data-centric applications have led to the emergence of a new ecosystem for container coordination and integration across dynamic resource pools. New approaches for the dynamic composability of these emerging systems with traditional supercomputing systems are needed to further advance data-driven scientific applications. This talk will present our approach to using composable systems at the intersection of scientific computing, artificial intelligence, and sensor data integration. The architecture of a working example of a composable infrastructure that federates Expanse, an NSF-funded supercomputer, with Nautilus, a Kubernetes-based geo-distributed GPU cluster, and Sage, a cyberinfrastructure that enables AI at the edge, will be discussed with application case studies in hazards and the Internet of Things. Case studies will be presented as integrated scientific workflows that bridge the insights from collaborative, dynamic, data-driven team science with heterogeneous composable infrastructure.

Dr. İlkay Altıntaş is the Chief Data Science Officer of the San Diego Supercomputer Center as well as a Fellow of the Halıcıoğlu Data Science Institute at the University of California San Diego. She is the Founding Director of the Workflows for Data Science (WorDS) Center of Excellence and the WIFIRE Lab. The WorDS Center specializes in the development of methods, cyberinfrastructure, and workflows for computational data science and its translation to practical applications. The WIFIRE Lab focuses on methods for all-hazards knowledge, from data collection to modeling efforts, and has achieved significant success in helping to manage wildfires. Among the awards she has received are the 2015 IEEE TCSC Award for Excellence in Scalable Computing for Early Career Researchers and the 2017 ACM SIGHPC Emerging Woman Leader in Technical Computing Award. Altıntaş holds a Ph.D. degree from the University of Amsterdam in the Netherlands.

18/05 at 09.00

Frank Leymann

Quantum Software: Help from the Cloud

Quantum computers are becoming real, and quantum algorithms have strong potential to provide significant advantages in solving hard problems. But an engineering discipline for building software that makes use of the new hardware and these algorithms is missing. In this talk, we sketch the basics of quantum computing and show that quantum software is always hybrid, making use of both classical and quantum artifacts. Concepts and technologies known from cloud computing are shown to provide a first suitable environment for quantum software, but evolution is required. Finally, elements of a quantum software engineering discipline are discussed.

Frank Leymann is a full professor of computer science at the University of Stuttgart, Germany. His research interests include software architecture, robustness of highly distributed applications, pattern languages, and quantum computing. Frank is co-author of about 500 peer-reviewed papers, about 70 patents, and several industry standards. He is an elected member of the Academia Europaea, a Fellow of the Center for Integrated Quantum Science and Technology (IQST), a Fellow of the Asia-Pacific Artificial Intelligence Association (AAIA), and Kurt Gödel Visiting Professor for quantum computing at TU Vienna. Frank is the scientific director of one of the largest quantum software projects in Europe.

18/05 at 15.00

Manish Parashar

Harnessing the Computing Continuum for Urgent Science

Urgent science describes time-critical, data-driven scientific workflows that can leverage distributed data sources in a timely way to facilitate important decision making. In spite of the exponential growth of available digital data sources and the ubiquity of non-trivial computational power for processing this data, realizing such urgent science workflows remains challenging: while our capacity for generating data is expanding dramatically, our ability to manage, analyze, and transform this data into knowledge in a timely manner has not kept pace. In this talk, I will explore how the computing continuum, spanning resources at the edges, in the core, and in between, can be harnessed to support urgent science. Using an Early Earthquake Warning (EEW) workflow, which combines data streams from geo-distributed seismometers and GPS sensors to detect tsunamis, as a driver, I will explore a system stack that can enable the fluid integration of distributed analytics across a dynamic infrastructure spanning the computing continuum, and I will discuss associated research challenges. I will also describe recent research in programming abstractions that can express what data should be processed and when and where it should be processed, and in middleware services that automate the discovery of resources and the orchestration of computations across those resources. Finally, I will discuss open research challenges.

Manish Parashar is Director of the Scientific Computing and Imaging (SCI) Institute, Chair in Computational Science and Engineering, and Professor in the School of Computing at the University of Utah. He is currently on an IPA appointment at the National Science Foundation, where he is serving as Office Director of the NSF Office of Advanced Cyberinfrastructure. His research interests are in the broad areas of Parallel and Distributed Computing and Computational and Data-Enabled Science and Engineering. Manish is the founding chair of the IEEE Technical Consortium on High Performance Computing (TCHPC) and Editor-in-Chief of IEEE Transactions on Parallel and Distributed Systems. He is a Fellow of AAAS, ACM, and the IEEE/IEEE Computer Society. For more information, please visit http://manishparashar.org.

19/05 at 15.00

Luiz DeRose

The New Era of High-End Computing: Data Driven, Real Time, in the Cloud

The continuous increase in complexity and scale of high-end systems, together with the evolving diversity of processor options, is forcing computational scientists to face system characteristics that can significantly impact the performance and scalability of applications. Users need a system infrastructure that can adapt to their workload needs, rather than having to constantly redesign their applications to adapt to new systems. The initial cloud performance of high-end computing applications was inadequate due to network performance and virtualization of the OS. However, modern public clouds have addressed these issues, allowing users to run their workloads on the latest hardware, scaling their clusters to their needs, while paying only for the right hardware, at the right time and scale. In this talk, I will discuss the current trends in computer architecture and their implications for the development and use of high-end applications. I will discuss the state of the art of cloud computing for high-end computing, along with users' experiences, highlights, and success stories. I will finish the presentation with a discussion of some of the challenges, existing limitations, and open research problems that still need to be addressed.

Dr. Luiz DeRose is a Director of Cloud Engineering for HPC at Oracle. Before joining Oracle, he was a Sr. Science Manager at AWS, and a Senior Principal Engineer and the Programming Environments Director at Cray. Dr. DeRose has a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign. He has more than 25 years of high-performance computing experience and a deep knowledge of programming and middleware environments for HPC. Dr. DeRose has eight patents and has published more than 50 peer-reviewed articles in scientific journals, conferences, and book chapters, primarily on the topics of compilers and tools for high performance computing.