Josh Davis graduated from UD in 2021 with a double degree in computer science and philosophy and a minor in mathematics. During his time as an undergraduate, he was named a Eugene du Pont Memorial Scholar (2017) and received the Hatem M. Khalil Memorial Award (2018), the Outstanding Sophomore Award (2019), the Outstanding Junior Award (2020), and the Outstanding Senior Award (2021). He also took second place in the Association for Computing Machinery (ACM) Student Research Competition at the 2018 Supercomputing Conference (SC18) held in Dallas, Texas.

Davis is currently pursuing a Ph.D. in Computer Science at the University of Maryland in the Parallel Software and Systems Group (PSSG). He is advised by Abhinav Bhatele.

This interview was conducted by Stephen Siegel over email in November 2023.

Siegel: Congratulations! What is the Supercomputing conference and what was the award you won?

Davis: Thank you! The Supercomputing conference (SC) is an annual international research conference on high-performance computing and one of the most highly attended events in the field; this year’s total attendance topped 14,000, a new record for the conference. Research papers presented in SC’s Technical Program represent some of the latest advances in using the largest and fastest computers on the planet.

Our research poster at SC23 won the Best Research Poster Award, which is given to one of the more than 70 peer-reviewed posters presented at the conference, selected on the basis of the quality of the research and its presentation.

Siegel: Who did you work with on this project?

Davis: The authors are me, Pranav Sivaraman (a master’s student at UMD), Isaac Minn (an undergraduate student at UMD), and our advisor, Abhinav Bhatele. We have also been collaborating extensively on this project with Konstantinos Parasyris, Harshitha Menon, and Giorgis Georgakoudis, all of whom are affiliated with Lawrence Livermore National Laboratory.

Siegel: Tell us about the research that your poster reports on.

Davis: Our poster, “Evaluating Performance Portability of GPU Programming Models,” reports on our ongoing research on performance portability: the ability of a single computer program to achieve good performance across a range of hardware platforms, regardless of the underlying differences among those platforms. The newest supercomputers deployed at U.S. Department of Energy national labs are among the fastest in the world, but they rely on graphics processing units (GPUs) to provide most of their computing power.

Writing code for GPUs is challenging, and to complicate matters further, the newest supercomputers use GPUs produced by several different hardware vendors (such as NVIDIA, AMD, and Intel), with each vendor promoting its own programming interface, or programming model, for writing code for its devices. Since it is impractical for users, who are often not computer scientists by profession, to maintain multiple versions of their software written in different models, most would much prefer a single portable programming model that represents their application in an abstract manner and can then be run on any system as needed. However, there is a wide range of competing portable programming models for GPUs, and little guidance for developers on how to pick one.

Our ongoing work compares the performance portability achieved by each of seven programming models on leadership-class supercomputers using proxy applications: small benchmarks meant to represent the performance of larger, more complex scientific applications. By comparing the performance achieved on each system by each programming model for a particular proxy application with a fixed input problem, we give users comparative insight into which programming models should provide the best performance portability for applications similar to the proxy app under consideration. We have found significant variability among programming models in how well they enable performance portability, with the Kokkos and RAJA models generally achieving the best performance portability in our tests.
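The poster itself defines the exact methodology, but for readers curious how performance portability can be quantified, a widely used metric from the literature (due to Pennycook, Sewall, and Lee) takes the harmonic mean of an application’s efficiency on each platform of interest, dropping to zero if the application fails to run on any platform. The sketch below illustrates the idea in Python; the function name and the efficiency numbers are made up for illustration and are not taken from the poster.

```python
def performance_portability(efficiencies):
    """Harmonic-mean performance portability metric (Pennycook et al.).

    efficiencies: mapping of platform name -> efficiency in (0, 1].
    An efficiency of 0 means the code does not run on that platform,
    in which case portability is 0 by definition.
    """
    values = list(efficiencies.values())
    if not values or any(e <= 0 for e in values):
        return 0.0
    return len(values) / sum(1.0 / e for e in values)

# Hypothetical efficiencies for one proxy app under two approaches
portable_model = {"NVIDIA": 0.90, "AMD": 0.85, "Intel": 0.80}
vendor_only = {"NVIDIA": 0.98, "AMD": 0.0, "Intel": 0.0}  # runs on one vendor only

print(round(performance_portability(portable_model), 3))  # → 0.848
print(performance_portability(vendor_only))               # → 0.0
```

The harmonic mean rewards consistently good efficiency everywhere: a model that is excellent on one vendor’s GPU but fails on the others scores zero, which matches the intuition that such code is not portable at all.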

Our future efforts will focus on adding more proxy applications to make the comparison more robust and on providing deeper analysis of the reasons for the differences in performance portability we observe.

Siegel: Can you think of specific skills or experiences from your undergraduate experience at UD that helped prepare you for your graduate career and this kind of research?

Davis: I’ve relied on a wide range of experiences from my time at UD in my graduate career. First and foremost is the extensive undergraduate research experience I was able to gain at UD. This began with the Vertically Integrated Projects (VIP) courses I took in my first two years, which allowed me to participate in multiple research projects from my first semester as an undergrad and to establish connections with faculty I would continue to work with throughout my four years at UD. The VIP program also gave me my first experience presenting a poster, at a CIS department event, on the project I worked on with you.

I later submitted a poster to the ACM Student Research Competition at SC18 based on research with Michela Taufer (now at the University of Tennessee, Knoxville), winning 2nd place. I also presented a workshop paper at the 2020 Workshop on Accelerator Programming Using Directives (WACCPD, held at SC20) on research I completed during a summer internship at the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab, an opportunity I found through Sunita Chandrasekaran. While working on a related project led by Chandrasekaran, I was also able to take advantage of UD’s winter session to work at Oak Ridge National Lab for a month in January 2020.

All of these provided critical experience in creating effective posters, delivering clear research presentations, and generating research ideas and bringing them to completion. None of that would have been possible without the tremendous support and mentoring I received from UD’s CIS faculty and the VIP program.

UD’s computer science degree program also provided me with highly relevant coursework for my research interests, including both technical courses like parallel computing and compiler construction and experiences on the human side of computing through the education-focused courses I took during the 2019 winter study abroad program in Christchurch, New Zealand.

Siegel: Was there anything you learned (or were exposed to) at UD that you didn’t necessarily appreciate at the time, but later the importance became clear to you?

Davis: This is a really interesting question. The first thing that comes to mind is learning how to effectively plan and manage data collection when doing experimental work in computer science research and how to create compelling visualizations of the data collected. These are really foundational skills in research that I thought were pretty tedious and difficult when I first learned them in my early projects, but now I’m very comfortable with them. It was a great head start in graduate school to already have years of experience doing things like writing and running batch scripts on remote systems and turning raw data into nice Matplotlib plots.

I also think the collaborative environment in the curriculum’s software engineering courses (they were 275 and 475 in my day) provided a lot of value and helped me realize that I wanted to pursue graduate school in a group with a similarly collaborative approach to research. The skills and software engineering practices I learned in those courses, like managing tasks across team members and communicating across disciplines to understand and meet user expectations, apply directly to the kind of research I do when working with domain science application teams, even though those courses are officially oriented toward industry.