Volume 2019, Number December (2019), Pages 1-5
In this series of interviews with innovation leaders, Ubiquity Associate Editor and software engineer Dr. Bushra Anjum sits down with Jeronimo Castrillon, a professor in the Department of Computer Science at TU Dresden, to discuss the need for novel programming abstractions and methodologies for efficient computing on future systems.
Jeronimo Castrillon is a professor in the Department of Computer Science at TU Dresden, where he is also affiliated with the Center for Advancing Electronics Dresden (cfaed). He received his electronics engineering degree from Pontificia Bolivariana University in Colombia in 2004, a master's degree from the ALaRI Institute in Switzerland in 2006, and his Ph.D. (Dr.-Ing.) with honors from RWTH Aachen University in Germany in 2013. His research interests cover methodologies, languages, tools, and algorithms for programming complex computing systems. He has authored more than 70 international publications and has served on technical program and organization committees of international conferences and workshops (e.g., DAC, DATE, ESWEEK, CGO, LCTES, Computing Frontiers, ICCS, and FPL). He is also a regular reviewer for ACM and IEEE journals. In 2014 Prof. Castrillon co-founded Silexica GmbH/Inc., a company that provides programming tools for embedded multicore architectures. Prof. Castrillon is a senior member of IEEE, a member of ACM, and a founding member of the executive committee of the ACM "Future of Computing Academy" (FCA).
What is your big concern about the future of computing to which you are dedicating yourself?
In the dusk of Moore's law, there is a wave of disruptive hardware technologies and architectures that will lead to new computing paradigms, away from the von Neumann model that has served us for half a century. These include quantum, neuromorphic, DNA, and other forms of biology-inspired computing. Such systems will require novel programming abstractions and methodologies. Abstractions refer to interfaces, via programming languages or libraries, that allow programmers to comfortably write applications without having to care about the complexities of hardware and lower software layers. Imperative sequential programming, for instance, offers a convenient thin abstraction for conventional sequential von Neumann processors. For more than a decade, we have struggled to maintain this abstraction for parallel multicore processors with relative success. It is thus unlikely that such an abstraction will work properly for future systems combining traditional computing with inherently parallel and stochastic processes. Methodologies, on the other hand, refer to tools and algorithms for analysis and optimization that automatically convert a high-level program into an efficient machine-level implementation. A beautiful abstraction is of little use if it does not lend itself to automatic analysis, optimization, and code generation. A big part of my research is concerned with understanding novel computing paradigms in order to derive abstractions for increased productivity and methodologies for efficient execution on future systems. Efficiency subsumes both application performance and resource-aware, energy-efficient computing. Energy efficiency is key given current projections of the contribution of computing systems to global energy consumption (21 percent by 2030), while productivity and performance are of paramount importance to enable better products and accelerate scientific discovery.
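To make the abstraction/methodology distinction concrete, here is a minimal, purely illustrative Python sketch (an editorial addition, not code from any project discussed in the interview). The programmer writes a single declarative reduction; the "methodology" below it chooses how to lower that reduction, either sequentially for a von Neumann core or as a log-depth tree that a parallel target could exploit.

```python
from functools import reduce

def reduce_sequential(op, xs):
    """Lowering 1: straight-line sequential fold, natural for one core."""
    return reduce(op, xs)

def reduce_tree(op, xs):
    """Lowering 2: pairwise (log-depth) combination, parallelizable in
    principle -- each level's pairs are independent of one another."""
    xs = list(xs)
    while len(xs) > 1:
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

# Same program, two implementations: the abstraction states *what* to
# compute, the methodology decides *how*.
data = range(1, 9)
add = lambda a, b: a + b
assert reduce_sequential(add, data) == reduce_tree(add, data) == 36
```

The point is not the toy code itself but the separation it illustrates: once the program is expressed above the level of a particular execution strategy, tools are free to retarget it.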
How did this concern emerge? Precisely, how did your background, education, community, and past professional projects shape it?
I was educated as an electrical engineer, came to programming in a bottom-up fashion, and eventually became a computer science professor. I started with assembly programming and hardware design before coming into contact with compilers and the formal underpinnings of programming languages. During my doctorate, I worked on code optimization for bare-metal embedded systems, i.e., systems with little or no operating system support, with heterogeneous cores and custom memory systems. The tools we developed then were still close to the hardware, tailored to exploit its domain-specific features. Coming from the bottom, it fascinated me to move upwards through the levels of abstraction and the layers of the programming stack, so badly needed to manage the complexity of systems. By construction, however, abstractions generalize and hide details, sometimes making it extremely difficult to really leverage the flexibility and power of the hardware (a phenomenon often called the abstraction toll). As a byproduct, abstractions may also lead to vulnerabilities, as we have seen in the recent past with Spectre and Meltdown, where a gap between the conceptual computer model and micro-architectural implementation details enabled side-channel attacks.
I joined Technische Universität (TU) Dresden in 2014 to become part of a large-scale research project called the Center for Advancing Electronics Dresden (cfaed), an interdisciplinary effort that brings together researchers from chemistry, physics, mathematics, materials science, electrical engineering, mechanical engineering, and computer science. cfaed researches alternative technologies to augment or replace traditional silicon-based computing systems, including silicon nanowires, carbon nanotubes, and organic electronics, among others. It was through the enriching interdisciplinary exchanges within cfaed that I first realized the gap between existing programming methodologies and potential future systems, and the need for fundamentally new abstractions.
What project or initiative are you currently leading that has the potential to address the issue presented in response to the first question?
Within cfaed I had the honor of leading the Orchestration research area, with the goal of designing hardware and software architectures for emerging technologies that provide significant gains in application performance. Researchers within cfaed work on different promising technologies, including device-level hardware reconfiguration with silicon nanowires or carbon nanotubes, dense and fast racetrack memories based on spin-orbitronics, flexible board-to-board wireless interconnects in the 200 GHz range, and even wet computing with bio-molecular motors. In this context, we have worked on dataflow programming abstractions that allow methodologies to enable runtime adaptation of the hardware and the interconnect. We have also leveraged the information in domain-specific abstractions for tensors, used in physics simulation and machine learning, to optimize for systems with racetrack memories, demonstrating potential gains in both latency and energy consumption. Based on this work, and with the active network of scientific institutions of the Dresden Concept, I am working towards a lab centered on programming abstractions and methodologies for emerging systems, with support from the Department of Computer Science of TU Dresden. The main focus of the lab will be to match the domain-specific nature of emerging architectures (e.g., neuromorphic accelerators) with higher-level abstractions in domain-specific languages. Besides tensor abstractions, we work on abstractions for particle-based simulations, used in systems biology, and abstractions for dataflow parallel execution. The lab will have a strong emphasis on funding research exchanges to facilitate the interdisciplinary research required to bridge applications and emerging hardware.
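The dataflow style of programming mentioned above can be illustrated with a toy sketch (an editorial simplification, not code from cfaed or any tool named here): actors connected by FIFO channels fire whenever their inputs hold tokens, so parallelism and communication are explicit in the program's structure rather than hidden in control flow, which is what makes such programs amenable to analysis and runtime adaptation.

```python
from collections import deque

# Toy dataflow graph: actors connected by FIFO channels (deques).
# An actor may fire whenever every one of its input channels holds a
# token; the result is independent of the firing order, which is what
# makes such graphs analyzable and remappable at runtime.
src, mid, out = deque([1, 2, 3]), deque(), deque()

actors = [
    (lambda x: x * x, [src], mid),  # square each token
    (lambda x: x + 1, [mid], out),  # then increment it
]

def step(actors):
    """Fire every enabled actor once; report whether anything fired."""
    fired = False
    for func, ins, dst in actors:
        if all(ins):  # a token on every input channel
            dst.append(func(*(ch.popleft() for ch in ins)))
            fired = True
    return fired

while step(actors):  # run to quiescence
    pass
# out now holds the tokens 2, 5, 10
```

Because each actor touches only its own channels, a scheduler is free to place actors on different cores, or to remap them when the hardware reconfigures, without changing the program.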
Bushra Anjum is a software technical lead at Amazon in San Luis Obispo, CA. She has expertise in agile software development for large-scale distributed services, with special emphasis on scalability and fault tolerance. Originally a Fulbright scholar from Pakistan, Dr. Anjum has international teaching and mentoring experience and served in academia for over five years before joining industry. In 2016, she was selected as an inaugural member of the ACM Future of Computing Academy, a new initiative created by ACM to support and foster the next generation of computing professionals. Dr. Anjum is a keen enthusiast of promoting diversity in the STEM fields and a regular mentor and speaker on the topic. She received her Ph.D. in computer science from North Carolina State University (NCSU) in 2012 for her doctoral thesis on Bandwidth Allocation under End-to-End Percentile Delay Bounds. She can be found on Twitter @DrBushraAnjum.
©2019 ACM $15.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2019 ACM, Inc.