I wrote a few weeks ago that, post-pandemic, we in research computing teams are going to have to work to make our value clear to administrations, funders, researchers, and our team members.
A lot of the link roundup items I’ve pointed to over the past year have focussed on our team members, which is crucial - we can’t support research without an excellent, motivated, aligned team doing the work. But there’s a lot less material coming out on communicating research computing teams’ value to funders and administrators - what are we for, why should this team in particular continue to be here, why are we the right team to take on this new, strategically important, project?
These issues have been on my mind a lot lately. I need to do better on this for the work of our own team - our area is getting competitive for the first time in ages, and we need to better communicate our successes! But I’ve also been working with a decision maker for a different organization with a sizeable research computing team that just went through a leadership change. The decision maker wants some pretty clear medium-term goals and indicators to help the team get back on track.
Listing the right things to do is easy; actually doing them is hard. We all pretty much know the answer. Successful research support teams have to focus, and communicate the value of that focus.
And yet generally we don’t.
Focus is by far the harder of the two, because it means making difficult choices. Having a focus, a specialty, a particular expertise, makes starting every conversation with funders, administrators, and researchers easier. And it means that your team is the first that comes to mind when a new opportunity pops up.
When you’re “the geospatial software team” rather than “research software support”, you have a strong clientele of committed users, a very clear story to tell administrators, and a very compelling reason why your team should be the one to get the grant to build the mapping software project. When you’re the “accelerated computing system experts” instead of “a team that runs a cluster”, you’re the one the researchers talk to about new GPU needs. When you’re the “long term data preservation team” rather than just a team that also has a lot of tape storage, you’re the one that they talk to about research data plans, and the ones they write into their grants.
The downside? Everyone wants to have a focus, but no one wants to stop doing things.
Research teaches us to be, well, entrepreneurial when it comes to ferreting out new research opportunities and new projects. Those of us who trained in teams cobbled precariously together by a patchwork of grants tend to develop a sense that every funding opportunity should be chased down.
Successful faculty members know better. They quickly develop a good sense of where their efforts are the most valued, and pursue opportunities to make contributions there with laser focus, making occasional, considered, detours or pivots as the field evolves.
But in research support services we often don’t quite develop that knack. There’s a lot of different kinds of research out there, and we genuinely want to help them all. But we can’t. We can’t have five top priorities, and single teams can’t have five specialties. We have to choose, and that means not doing things. Passing up opportunities - pointing potential research clients to other teams - is scary, but it’s the only way we can develop a focus on the areas where we’re best and most needed.
Communication is a lot easier and more successful with a tight focus. For researchers, it’s much easier to say exactly what you can do for them when your team writes geospatial software than when you write “research software”. For administrators, it’s easier to get a sense from a particular community of how valuable your team is than when you’ve helped a number of different groups with quite different tasks.
And the communication you perform is vastly more effective. If you’re regularly posting geospatial software success stories - or contributions or even just helpful tips - on your communication channels, that looks vastly more compelling to researchers who need geospatial work done, or funders evaluating grants, than the same number of stories scattered over three or four specialties. And that’s assuming you could have the same number of success stories with three or four “focuses” as with one, which probably isn’t true.
There is some really good material out there on finding focus, and tightly defining and managing the services you provide to researchers, in the literature around core facilities - I’m going through this article by Turpen et al now - and there’s a lot we could learn from those groups.
I’ll write more about that shortly; for now, on to a focus- and communications-themed link roundup!
How to say “No” right now - Lara Hogan, Wherewithall
The year end is approaching, and with it deadlines and final pushes. But other stuff comes up.
The solution to dealing with too much work isn’t “time management” - managing time isn’t a power granted to us. There’s only task management, and the number one task management skill is declining tasks.
This is a good time of year to practice saying no or deferring a yes - everyone’s in the same boat and so understands. In her latest newsletter issue, Hogan walks through three steps for saying no over the next month.
Blogging your research: Tips for getting started - Alice Fleerackers, ScholCommBlog
Fleerackers’ post is aimed at researchers but works equally well for us in research computing. It starts off with an important point - not every blog post needs to be a 1500-word feature. It could be a summary of a paper or conference session, a Q&A/interview post; anything that makes the audience more informed about your group’s work than it was before.
The decision of where to blog is normally easier for our teams - we generally have a website (though getting posts placed elsewhere is good too, like the US-RSE blog aggregator or guest posts on relevant sites) - and then we set some kind of reasonable schedule. The most important thing about the schedule (IMHO) is that it’s realistic - routine posts, even spaced a couple of months apart, are better than a flurry of over-ambitious posting followed by exhaustion and abandonment.
Emulation as a Service for Heritage Institutions (PDF) - Dutch Digital Heritage Network
We tend to think of computing systems and software as tools for powering research and data preservation - but sometimes they’re the objects of research, or preservation, themselves. This is a status report on using emulation of older computing systems to make sure that items of digital heritage - software or content - aren’t lost as the hardware becomes extinct.
Focus enables strategy - not only what you’ll be doing, but how you’ll be doing it.
Developing a software development strategy for a team allows you to focus on the important parts of each project rather than bikeshedding the same decisions again and again. You can’t develop such a strategy for executing projects if each project is completely different.
Larson’s article argues for grounding such a strategy in the pragmatic (read: boring) and avoiding the siren call of “innovation”, building the strategy from the ground up - writing multiple decision documents for individual projects or components, and discovering the underlying strategy by synthesizing them.
SC20 Panel – OK, You Hate Storage Tiering. What’s Next Then? - John Russell, HPCWire
On to storage - tiering is hard to manage, and works poorly at modest scales. This summary of an SC20 panel discusses a number of vendors’ upcoming models for how to handle the diverse range of file sizes and access patterns needed in large computing centres.
Modern storage is plenty fast. It is the APIs that are bad. - Glauber Costa
How io_uring and eBPF Will Revolutionize Programming in Linux - Glauber Costa, Scylla
Our current userspace and kernel-level APIs for file I/O are based on storage hardware from decades ago, and are now frequently the bottleneck for high-performance persistent media like NVMe or even SSDs. Costa’s two articles cover this well: the first is more recent and higher-level, while the second is older and focussed on two recent technologies - io_uring and eBPF - that will enable future filesystems (or applications that take control of their own I/O, such as databases) to make full use of this hardware.
(Worth noting, too: there are people who argue that the division between filesystems and databases is a historical artifact which we should move past - as the Boomla project is trying to do.)
Efficient single-node graph mining with Peregrine.
You don’t always need machine learning. (See also: Figure 1 of https://www.biorxiv.org/content/10.1101/2020.01.10.897116v1 where least squares does very, very well)
Pretty important for CPU-intensive research computing - between cgroups and containers, your scripts should use nproc, not /proc/cpuinfo, to determine the number of cores available to them.
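The same distinction shows up inside scripts. Here’s a minimal Python sketch (the function name is mine): len(os.sched_getaffinity(0)) respects the affinity mask that container runtimes and cgroup cpusets restrict - like nproc - while os.cpu_count(), like parsing /proc/cpuinfo, reports every core on the host.

```python
import os

def usable_cpus() -> int:
    """Count only the CPUs this process may actually run on (like `nproc`)."""
    try:
        # Respects the affinity mask, which containers/cgroup cpusets restrict.
        return len(os.sched_getaffinity(0))
    except AttributeError:
        # sched_getaffinity is Linux-only; fall back to the host core count.
        return os.cpu_count() or 1

# On a restricted container, usable_cpus() can be much smaller than
# os.cpu_count() - size your worker pools with the former.
print(usable_cpus(), os.cpu_count())
```

Note this still won’t see cgroup CPU *quotas* (as opposed to cpusets) - no affinity-based method does - but it’s strictly better than counting /proc/cpuinfo entries.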
As you know, I think embedded databases are scandalously underused in research computing - here’s an article on SQLite as a document database.
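As a taste of the document-database pattern, here’s a minimal sketch using Python’s built-in sqlite3 module; it assumes your SQLite build includes the JSON1 functions, which most builds have shipped for years. Each record is stored as a JSON blob, and json_extract queries inside it without any fixed schema.

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")

# Store a schemaless "document" as JSON text...
doc = {"name": "survey-2020", "fields": ["lat", "lon"], "n_points": 124}
con.execute("INSERT INTO docs (body) VALUES (?)", (json.dumps(doc),))

# ...and query inside it with SQLite's JSON1 functions.
row = con.execute(
    "SELECT json_extract(body, '$.name') FROM docs "
    "WHERE json_extract(body, '$.n_points') > 100"
).fetchone()
print(row[0])  # survey-2020
```

When queries on a particular path get hot, SQLite can index the json_extract expression directly - which is much of what a dedicated document store gives you.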
“Reproducible”, “Replicable”, and “Repeatable” mean a lot of different things in research computing communities. Konrad Hinsen has a blog post laying out reproducibility as a spectrum. I’m not sure I like the terminology he’s chosen, but thinking of it as a spectrum seems a much more useful approach than using yes/no criteria.
And that’s it for another week. Let me know what you thought, or if you have anything you’d like to share about the newsletter or management. Just email me or reply to this newsletter if you get it in your inbox.
Have a great weekend, and good luck in the coming week with your research computing team,
A before-the-holidays push to get job ads out means we have quite a few listings this week. Highlights below; full listing available as always on the job board.
Assistant (HPC) Manager - Durham University Physics Department, Durham UK
This role will be engaged with the continued operation of the DiRAC Memory Intensive service, both routine maintenance procedures and also research and development projects leading to new capabilities and facilities for both users and support staff. The applicant will work with other support staff and with the other DiRAC technical support teams to deliver an effective HPC service for users, including responding to user queries. The applicant will be a member of the ICC Staff, and their primary role is to support the scientific research of the users of the DiRAC Memory Intensive service. There will be opportunities for training to support the career development of the successful applicant, both within Durham and in collaboration with other DiRAC and/or industry partners.
Research Computing Application Support Manager - Harvard Medical School, Boston MA USA
Reporting to the Director of Research IT Operations, the Research IT Application Support Manager will be responsible for supporting and maintaining Research Computing applications and platforms, particularly applications such as OMERO in support of imaging research, extending to other applications such as REDCap, Globus, Snapgene and other systems supported by Research Computing. This is a hands-on manager/team leader role responsible for managing the application support team, ensuring the applications meet the needs of the Harvard Medical School (HMS) research community, and overseeing operational support.
This position will be in Research Computing (RC) https://it.hms.harvard.edu/rc, which is part of the HMS Information Technology Department. RC supports the research community at HMS. This position will work closely with HMS Information Technology and HMS researchers who consume these services.
Manager - Research Computing Data Science - University of Alabama at Birmingham Medicine, Birmingham AL USA
In this position, you will be required to work closely with colleagues in the IT-Research Computing team as well as with faculty, student/postdoctoral researchers, and technical staff across the campus to enable and accelerate their research computing efforts. You will be an integral member of a team focused on providing cutting-edge cyberinfrastructure for research.
Project Manager - McGill University Health Centre Research Institute, Montreal QC CA
Under the supervision of the Scientific Director, Dr. Simon Ducharme, and in close collaboration with MUHC clinicians and McGill University affiliated investigators, the Project Manager is responsible for taking a leadership role in the creation of a new large-scale clinical and biological database at the RI-MUHC for the Department of Psychiatry. The incumbent works closely with various partners and entities for the coordination, management, strategic planning, and execution of this long-term database aimed at developing markers for predicting disease course and response to treatment in patients with psychiatric disorders. Depending on the research interests and skills of the incumbent there is also an opportunity to develop and play a scientific role in the research activities of the project. The incumbent will be fully immersed in the research environment of the BRaIN Repair and Integrative Neuroscience Program of the RI-MUHC.
Head of Computational X Support (CXS) - Leibniz Supercomputing Centre, Garching Bavaria DE
You will be part of LRZ’s senior management and lead a diverse, talented team of scientific domain and computer science experts who function as the bridge between our computing environment and our user communities of local to international academic researchers, educators, and students. This team represents a broad set of experience and foci, including the computational science areas of traditional HPC numerical simulations, data analytics and machine learning, system and application development, education and training, research science engineering and software development, and quantum computing. Our team also possesses broad domain expertise, including astrophysics, biology, bioinformatics, geophysics, engineering, and social sciences. Collectively, its work spans from supporting our users with LRZ system and software environments, assisting in code porting and performance optimization, tracking incidents, developing system capabilities (for example, containerization and workflows) to participating in large third-party funded research projects and evaluating architectures and software for future systems.
Research Community Engagement Manager - TEC Partners, Cambridge UK
The company wants to help research technology professionals to accelerate research outcomes on the cloud using Microsoft Azure and other Microsoft solutions. To achieve this, they want to provide skilling, training, and community opportunities to this audience through the creation of a Research Technology Professionals community.
HPC Software Product Manager - System Management - Hewlett Packard Enterprise, Bloomington MN USA
We are seeking an experienced product manager for HPE’s system management software product for High Performance Computing (HPC) systems. You will be the subject matter expert for software tools that enable system administrators to manage supercomputing-scale clustered topologies and fabric.
Designs, plans, develops and manages a product or portfolio of products throughout the solution portfolio lifecycle: from new product definition or enhancements to existing products; planning, design, forecasting, and production; to end of life.
Project Manager, Data & Implementation Science - University Health Network, Toronto ON CA
Apply breakthrough design ideation, integrating emerging technologies, and incorporating subject matter expertise and data-driven evidence to create holistic solutions that meet business requirements and establish best practices and standards; Apply rigorous project management, Lean and Agile methodologies across the project lifecycle to ensure milestone and on-budget completion that meets organizational requirements; Collaborate and facilitate requirements gathering within a multi-disciplinary team environment, including executives, managers, clinicians, business teams, technical stakeholders, and vendors; Drive business insight and demonstrate benefits of implemented initiatives to the organization by applying sound data analysis, data quality, data reporting & visualization;
Senior Engineer, R&D High Performance Computing - GSK, Brentford UK
This Senior Engineer, R&D High Performance Computing role is accountable for contributing within a product team for the solution design, implementation & delivery of Pharma R&D Tech systems using cutting-edge technologies. Senior Engineer, R&D High Performance Computing will be working in alignment with agile and DevOps principles, in close contact with a Manager of Engineering, Architects, Agile Facilitators and Product Owners.
Senior Manager, Advanced Computing Projects - University of Minnesota, Minneapolis MN USA
This position will be responsible for the daily operations of critical research computing systems serving multiple departments throughout the University of Minnesota system. Assures the operational efficiency of Research Computing by leading efforts to secure and update hardware, develop and review procedural documentation, and work with the Associate Director for Advanced Systems Operations to develop a sustainable budget. The incumbent is called upon to solve complex research computing challenges through original design of protocols, procedures, and policies aimed at fulfilling the research objectives. Is an integral member of the research team who is instrumental to the success of the project; consequently, may appear as co-author on peer-reviewed papers. This position normally is responsible for adherence to security standards and, in general, conformity with all other extant regulatory requirements. Operates independently with guidance and oversight from the Associate Director for Advanced Systems Operations.
Data Infrastructure Manager - Sanofi, Toronto ON CA
The Data Science team is led by the Head of Data Science and consists of Data Management, SPC and Process Modeling teams. The Data Infrastructure Manager will provide high quality data engineering solutions to support data scientists, data analysts and business users. This position will report to the Deputy Director, Data Management.
Technical Programme Manager – HPC/AI - Hartree Centre, Warrington UK
You will demonstrate the capability and potential to build upon a strong foundation of programme delivery expertise, existing technological knowledge relating to HPC and AI, and an external reputation developed through robust stakeholder relationships.
You will have a good technical understanding of the opportunities and challenges relating to the adoption of technologies such as AI, machine learning (ML) and deep learning (DL), and be able to work collectively with staff across STFC and our partners to translate these into meaningful programme deliverables and outcomes.
Director, Data & Analytics Centre - Johnson & Johnson, Toronto ON CA
The Director will lead the Data & Analytics Centre, which is one of the core organizations of the Customer Experience Excellence (CEE) division. This role reports directly into the Sr. Director of CEE and is part of the CEE and Sales & Marketing Leadership Teams.
Janssen Canada recently announced a major initiative to lead change in the Canadian Healthcare market. The vision behind this initiative is for Janssen to evolve from being the market leader that delivers valuable medicines and services to the market, to the leader that also delivers sustainable and measurable health outcomes for patients. The CEE division, the Data & Analytics Centre, and this role will play a critical part in realizing this vision for the Janssen Canada organization.
Modeling and Simulation Manager - Rockport Networks, Ottawa ON CA
As the Modeling and Simulation Manager, you will play a key role in Rockport’s success through your contributions managing the network team throughout the analysis and development of Rockport’s innovative software solutions using the latest technologies in a fast-paced and growing environment. Reporting to the Vice President of Systems Engineering, you will lead the development of models and simulation tools for Rockport’s various network designs.
Manage a small Networking team to advance modelling and simulation capabilities of a novel networking technology. Work closely with product owners, engineers, and architects to develop the best technical solutions. Evaluate and compare needed frameworks, libraries, and tools. Validate implementations and predict performance in large-scale deployments.
Senior Scientist/Manager - System Solutions - Qiagen, Manchester UK
QIAGEN is looking to hire highly motivated people who are passionate about improving healthcare. This role will be based in the Global System Solutions department, supporting life cycle management and system development of QIAGENs automated platforms and workflows, utilizing state of the art RNA and DNA technologies.
The team is made up of platform managers, scientists and engineers who support, as key experts, life cycle management of instruments, including managing component changes, second-level customer complaint support, and system development in areas such as instrument engineering, software, biochemistry, molecular biology, whole workflow testing and customer troubleshooting.