RCT #188 - Immediate next steps for managers worried cuts are coming
Plus: Influencing people; Who’s using LLMs (hint: our users); USRSE’25 CFP; Source Available Licenses; Many-Analyst Papers and what they say for our work; Training on many GPUs; SimOps
Well that escalated quickly
Last issue (#187) we talked about challenging times ahead. Since then, even in the UK things aren’t going great, but in the US...
There’s been a bit of a delay in follow-up; as some of you know, I’ve been talking individually with leads and managers, especially colleagues in the US, about all the stuff going on, and offering help when I can (and at least an understanding ear when I can’t).
There’s been turmoil: direct and indirect cuts to health research (of questionable legality), cuts to NSF staff, shrinking graduate programs, summer research undergraduate programs being cut dramatically… and now swingeing cuts to NASA, an agency so broadly popular at home and abroad that it does a decent line in selling merch.
And yes, there have been some successes in court rulings, but people who are determined to slash science budgets aren’t going to stop after a foiled attempt or two.
While all the advice in #187 is still relevant, I’ve had a number of conversations with US colleagues about more urgent near-term actions leads and managers can take. I’m going to share the distilled advice here. Not all of it will be relevant to everyone’s situation, but a lot of the moves you’d want to make are in the same direction.
Broadly, the advice falls into the following categories:
- Get a sense of the conversations as best you can
- Make sure you’re as well positioned as you can be
- Cut costs where you can as soon as you can
- Be seen as useful in securing new lines of funding
- Get ready for the worst, just in case
- Support your team during this whole time
Get a sense of the conversations as best you can
Your boss, or your grand-boss, or your great-grand-boss, is hearing about institutional scenario planning, potential pivots, and triaging priorities, probably in a pretty chaotic fashion.
They are not going to be allowed to loop you directly into that, or even directly share any of it. Everything’s too messy and half-formed, there are probably comments being made that shouldn’t be circulated, and there are unlikely worst-case scenarios being thought about. It would be wildly irresponsible to send random managers those half-baked, probably-not-going-to-need-it-anyway scenarios.
But if you’re going to be a helpful partner to your boss/grand-boss/etc, you need to know some of the priorities right now so you can move things in whatever direction you can. Is the priority finding new funding lines, bringing in external business? Are there particular areas of research (health & life sciences, social sciences) that are going to be hurt most, and is the desire to try to cushion the blow there with resources from elsewhere? Is it time to start cutting costs? [LJD: yes, it is]. What’s the best role you can play right now?
Talk to whoever you can, indicate your willingness to help, and try to get some sense of the directions and priorities. Those will shift, and rapidly. But the more information you can get, and the more frequently, the more specifically you and your team can prepare.
Be as well positioned as you can be
A lot of the suggestions last issue were about this, and they remain good and useful activities; but if you haven’t (say) already started a program of collecting and writing up testimonials/success stories, it may be too late to start, as people’s minds are going to be focussed elsewhere.
- Identify your best customers (#176), and make sure you know what you do for them and what that has in common. (This isn’t a bad time to reach out to those best clients and see how they’re doing with all this and if there is help they need)
- Identify the other teams you work with (who are colleagues, not competitors) (#164) - these could be local or national or other external teams. Make sure it’s clear in your mind how you work together and complement each other. It might not be a bad idea to also establish regular conversations with them during … everything … so you can act together and share information
- Make sure you know what your institution’s long term priorities were (it shocks me how few of us read our institution’s strategic plans and research strategic plans), and to the extent possible, be up to date on shifting priorities right now.
- Be very clear about the value you bring to the institution. Bringing in research funds is always a good one, but some of those lines of funds might not be as feasible as they once were. Supporting excellent science in the priorities above is important. Workforce development (which has the huge advantage here of tying into the teaching mission) through trainings will likely continue to be important. For any of these, have extremely clear and compelling stories about why your team is an essential and inexpensive way of generating those outcomes.
And I can’t say this enough - run these past your manager to make sure you’re not missing anything.
Cut costs where you can as soon as you can
This one is tough, because the way our budgets work there’s generally a fairly modest amount of discretionary spend - most of our funding comes to be spent a specific way for a specific effort.
But in the likely situation of cuts coming, having already cushioned the blow a little bit and maybe having a little free cash can be very helpful.
To the extent that there are things you can cut current or future spending on, now’s the time. Your institution probably already has a travel freeze and possibly other freezes on things like training spending. If there are licenses or cloud spend or support costs you could cut, even if it would cause inconvenience, start cutting them.
The one that hurts here - and it’s the only one that really moves any needles in our teams, given our cost structures - is hiring. If your institution hasn’t already frozen hiring, and you do have a req open, well, it’s almost certainly time to close it. Yes, even if you’ve put a lot of work into it, yes even if you’ve been doing a lot of interviewing, yes, even if you’ve been waiting for years to be able to hire this position. This bites, because there are a lot of good people out on the market right now. But if drastic cuts do come, it’s generally last in first out, and hiring someone just to lay them off in a couple months is a terrible, irresponsible thing to do to someone.
Here especially, run everything by your manager before doing anything that would be hard to undo, as there may be local context specific to your situation that changes the advice above. Most of it is going to be pretty close to universal, though; they are standard “batten down the hatches” moves when a storm seems to be coming.
Be seen as useful in securing new lines of funding
There are a lot of partnering efforts that can be done in the longer term which are extremely valuable - I have been meaning to call out the University of Michigan partnering with Los Alamos on AI research - and I do think these efforts can be useful for a lot of our teams.
But that’s not an answer for today.
Right now we need extant funding lines that we and our researchers can tap into. Some of this might mean making sure alternate calls for funding proposals are sent to our researchers and users; it might mean following up with our best clients and maybe the second tier of clients to see what they’re considering applying for.
It would be very useful to be seen making an effort to contribute to grantwriting efforts for these other funding sources. Go out of your way to volunteer to write text, generate some relevant paragraphs on your team’s capabilities in whatever format such proposals need, and share broadly.
If your team has ever done fee-for-service work with private sector or external clients, move mountains to see if you can start bringing in a bit more business there - maybe training, maybe direct services, anything in your portfolio. Start with clients you’ve already worked with, and ask for referrals if they don’t have anything for you. Any trend upwards in revenue from these sources would be almost as meaningful as the actual amount of money involved, if it seems like the trend could continue. Similarly if you can do anything to help some of your clients land some private sector contract work.
And you’re going to have to toot your own horn here, and have your boss and so on do the same. Yes, that’s uncomfortable for us, and well, too bad. In times when a lot is going on, simply doing something good and worthwhile isn’t enough - you have to be seen to do it for it to have the needed impact.
Share your successes, however minor, in the form of useful information and help. “We’ve had good success helping researchers X, Y, Z submit to this alternate funding stream - we’re happy to show others how”; “Here’s how we’re re-connecting with our industry clients and succeeding in bringing in some additional revenue that way - does anyone have any other approaches that have worked for them? Happy to put together a document where we can all share what’s working”.
Get ready for the worst, just in case
Some of our teams will suffer cuts. Others won't. We don’t necessarily know which one we’ll be, so we have to be prepared for the worst.
Part of the preparation involves having a short, clear presentation about the work we’re doing, how it ties into other parts of the ecosystem, how costs have already been cut, and (as above) how it is an essential, highly leveraged, cost-effective way to do the things the institution needs.
The goal of that work is to justify the existence of the team.
Even if the justification for the team as a whole is accepted, people are going to be poring over the books looking for anything that can be cut. Again, in our line of work, the big expenses are equipment that’s already been bought, and people’s salaries. That means individuals are going to be looked at for cuts.
As a manager or lead, it’s our responsibility to have ready information about each of our team members to make the case for them. This is sort of the same justification above for the team, but at an individual level:
- Their recent accomplishments, and how those contribute to the current immediate priorities
- The researchers and other teams they work with
- Quotes from researchers if you have them about this team member in particular
- How they’ve grown in the role (past growth is partial evidence for the potential of future growth)
- What they’re working on now
- What their skills are - what other priorities they could be contributing to
I really hope you don’t find yourself going through your team roster with someone who is seriously looking for cuts, but if you do, the best thing you could have done for them is to have put together the strongest possible case, in the language of the immediate priorities of the organization, for each of them. The time to do that is now, when you don’t and hopefully won’t need it. Because if you wait until you do need it, it will be too late to do a good job.
Again, your boss will be able to give good feedback with important local context that I don’t have.
Support your team during this whole time
This is a stressful time for your researcher clients, and for your team members. Being in contact with the researchers and helping where you can is good and important work, but our primary responsibility is to our team members.
There’s not much we can tell them - or anyone - about the future. But we can be there for them to hear their concerns, we can reassure them that we’re doing our best and that we’re looking out for the team, and we can direct them into the areas where the institution most needs them.
If you’re not already doing one-on-ones with team members, this is the time to start, so they have a regular check-in with you. Don’t promise anything you can’t guarantee, but be a steady presence, a calm voice, and a ready ear.
Focus efforts where the institution most needs them focussed right now, make sure the team (and ideally individuals on your team) are seen to be valuable and contributing during this challenging time, be there for your team members - and that’s about all anyone can ask of you.
If you need to ask some advice, or even just to vent, or have taken some action you think others would benefit from hearing about and want to share, you can email me at any time (jonathan@researchcomputingteams.org), or hit reply to this email, or schedule a short call with me. I know this is a tough time and I’m happy to talk.
And now the roundup (all on happier topics!)
Managing Teams
Over at Manager, Ph.D., I talked *last week* about influencing people, and how being right is just table stakes.
The roundup covered posts on:
- Setting clear expectations upfront to avoid getting into a micromanagement spiral
- Management is important, especially for innovative small enterprises - no real lessons there, just an affirmation that management practices really demonstrably matter
- The power of asking insightful questions and digging in
- Managing our own time, which in practice means putting important, valuable activities on the back burner for a time
Product Management and Working with Research Communities
The Anthropic Economic Index - Anthropic
I know I’ve brought this point up in the past - but whether we as technologists use LLMs in our work is not a super interesting question. The bigger question is: as people who support research, how are we adjusting to a situation where researchers and trainees are adopting these tools pretty enthusiastically?
Anthropic recently released a blog post, paper, and dataset on the high-level tasks Claude is being used for:
- 37.2%: computational + mathematical
- 9.3%: education + library
- 6.4%: life, physical, and social sciences (which seems like a small number except that only about 1% of people do life, physical, and social sciences!)
One can question the tasks and categories - they follow O*NET, which I’m sure is a perfectly sensible industrial classification but generates odd results for our purposes (bioinformatics is the top result under “office and administrative”, as is technical writing under “arts & media”).
But people, including presumably our clients, are using at least Claude for software development, data analysis, and scientific research quite a bit.
And this can be great! Maybe this lifts some of the pressure on first-tier tech support (“what does this error message mean? Why won’t this run? How do I do X again?”). But sometimes it will replace an easy issue with a complicated one if the fix isn’t right.
It will also change how people interact with the software we produce, and the systems we set up. And that changes some of our incentives:
- Across all research support
- Use of these tools increases the value of good documentation and examples, as users can copy those in and ask how to use them to do X
- Research software
- It makes it easier to write simple scripts around our tools, making our tools more valuable if they can be reasonably automated and scriptable
- It makes writing plugins (much) easier, if we want to allow users to change behaviours
- It makes teaching unit and functional testing much more important (which is fine, because it also makes writing those tests easier)
- Research computing:
- It makes more advanced use cases more accessible (easier to write complicated scripts), which is good to the extent that it enables workflows and bad to the extent that it pushes the systems in directions we’re not prepared for
- It increases the value of relatively standard defaults and tooling vs bespoke local tools - where before, users had to learn them from scratch either way
- Research data:
- It makes simple analyses easier, probably increasing the value of making data FAIR
- It also makes misinterpreting even correct analyses easier, making descriptions of assumptions and limitations essential
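On the testing point above: the kind of unit test worth teaching (and that these tools are now quite good at drafting) can be very small. A sketch, with a made-up helper - the function name and values here are purely illustrative, not from any real tool:

```python
# test_normalize.py -- a minimal pytest-style example for a hypothetical
# research helper. The function and the test values are invented.
import math

def normalize(values):
    """Scale a list of numbers so they sum to 1."""
    total = sum(values)
    if total == 0:
        raise ValueError("cannot normalize an all-zero input")
    return [v / total for v in values]

def test_normalize_sums_to_one():
    result = normalize([2.0, 3.0, 5.0])
    assert math.isclose(sum(result), 1.0)

def test_normalize_rejects_all_zero():
    try:
        normalize([0.0, 0.0])
    except ValueError:
        pass
    else:
        assert False, "expected ValueError"
```

Tests like these are cheap to write, document the function’s contract (including the failure mode), and give users something concrete to copy into an LLM session and extend.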
How are you seeing these tools being used, and how are you responding? Let me know - just hit reply, or email me at jonathan@researchcomputingteams.org
(One final note; apparently “general management” tasks are underrepresented in the Claude dataset, which disappoints me. One of the tasks that this sort of tool can absolutely and uncontroversially help us with is practicing the tricky conversations which often hold us back as managers and leads, or being a kind of “rubber duck debugging” partner (but the duck talks back!) for interpersonal/interdepartmental situations.)
Research Software Development
USRSE’25 Call for Submissions - 6-8 October, Philadelphia PA USA, Papers due 2 May
I stopped collecting conference information for the newsletter, but I’m happy to include a blurb for a worthwhile conference if asked! In this case it’s the US-RSE’s annual conference:
Whether you’re a data scientist, digital humanist, scientific programmer, software developer, or research software user, US-RSE is where people at the intersection of code and research come together. The USRSE'25 conference is your chance to connect with peers, mentors, and experts in the fast-growing world of research software. Don’t just take our word for it—100% of last year's post-conference survey respondents said they would return and recommend the conference to others.
Information is available at the conference website; you can volunteer as a reviewer by filling out this form before April 1st.
Keygen is now Fair Source - Zeke Gabrielse, Keygen
I’m fascinated by “fair source” and other “source-available” licences, that potentially open sustainability options for research software by making the source available but reserving some rights (such as running it as a commercial service, for SSPL, or distributing a licence key for some of the fair source licenses). I suspect people would absolutely hate this initially, but I like the idea of having the transparency of source available (which I do think is important for science) while still making the tool saleable in some sense.
In this article Gabrielse, the author of software licensing tool Keygen, talks about the reasons for choosing the Fair Core License for his product.
Research Data Management and Analysis
The Sources of Researcher Variation in Economics - Huntington-Klein et al, SSRN 5152665
Same data, different analysts: variation in effect sizes due to analytical decisions in ecology and evolutionary biology - Gould et al., BMC Biology
Even faced with the same data, ecologists sometimes come to opposite conclusions - Cathleen O’Grady, Science
There’s been a flurry of “many analyst” projects since the first one, in 2018, in Psychology. They’re fascinating!
Weirdly, especially given how intrinsically interesting they are, how they connect with our work in particular, and the importance of the questions they’re asking for science in general, I’ve seen little to no discussion of them in the research software or research data science communities. I think this is a mistake.
In many of these papers, we see huge differences in results from practitioners in the field, including seeing a strong positive effect, strong negative effect, and no statistically significant effect. That’s unsettling!
As O’Grady points out, it’s not necessarily shocking, but some of the explanations are actually worrying in their own right:
The first question asked how the growth of blue tit chicks is influenced by competition with siblings in the nest. […]. Yet expertise plays an important role in winnowing down the choice of analyses, he says. Scientists already familiar with blue tits, for instance, would have a better idea about what analyses to run.
Given the data set in question and the analysis, this sounds dangerously close to “people most likely to know the expected answer get closer to the expected answer”, which is actually not super reassuring!
And here’s a point which strikes right to the heart of the software side of our business: in the economics one, which sees similar variations:
Across researcher demographics, occupation, and professional experience, there was no strong relationship between researcher background and either the level of the effect estimate they reported […]. The only relevant difference we found is that the minority of researchers who used the R programming language were more likely to report outlier estimates than researchers who used Stata.
The reason inferred in the ensuing discussion on the econ internets is that Stata’s widely commented-on rigid defaults and economics focus provide guardrails here, which R lacks, being a more general-purpose framework with many contributed open-source packages where you can do anything you want multiple ways.
So - is that freedom bad, maybe? As tools and infrastructure providers, to what extent do we owe researchers enforced good defaults and a clear “best-practices” path for standard analyses (which, let’s be honest, is most of the analyses done)? Or at least mechanisms for user communities to define and share those for themselves? Or are our tools strictly caveat emptor, even if users get tripped up by bad defaults, because that freedom makes it easier to do different things?
I don’t know the answer to these questions, but they’re relevant, important, and fascinating, and I think they merit more discussions in our communities than I see them get.
And some other questions or discussion points immediately leap to mind:
- If any series of papers speaks to the importance of publishing data sets, these ones do - we have evidence here that many people hammering on the data set provides a consensus understanding much more securely than one group does
- We’ve always known at some level that any particular analysis could be flawed, but there’s only so many hours in the day. Can we use modern tools to analyze a data set multiple ways, using a variety of different plausible assumptions, and then generate in a semi-automated way an ensemble of analyses?
- Wet-lab work has very clear mechanisms for communicating protocols for running lab experiments, aimed at both transparency and helping others get results that should be apples-to-apples comparable, and there’s easy ways looking up protocols. How would we do something similar for analyses? Should we?
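On the ensemble-of-analyses point, the core of such a semi-automated “many-analyst” run is just a cross-product of defensible analytical choices applied to the same data. A toy sketch - the data and the two choice axes here are invented for illustration:

```python
# A sketch of a semi-automated analysis ensemble: run the same data
# through several plausible specifications and report the spread of
# estimates. Data and analytical choices are made up for illustration.
from itertools import product
from statistics import mean, median

data = [1.2, 3.4, 2.1, 15.0, 2.8, 3.1, 2.5]

# Each axis is one defensible analytical decision.
outlier_rules = {
    "keep_all": lambda xs: xs,
    "drop_gt_10": lambda xs: [x for x in xs if x <= 10],
}
estimators = {"mean": mean, "median": median}

results = {}
for (o_name, o_rule), (e_name, est) in product(
    outlier_rules.items(), estimators.items()
):
    results[f"{o_name}/{e_name}"] = est(o_rule(data))

for spec, estimate in sorted(results.items()):
    print(f"{spec}: {estimate:.2f}")
print(f"range across specifications: "
      f"{min(results.values()):.2f} .. {max(results.values()):.2f}")
```

Even this toy version makes the point: reporting the range across specifications, rather than a single number, shows how much of the “result” is the analyst’s choices rather than the data.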
Research Computing Systems
The Ultra-Scale Playbook: Training LLMs on GPU Clusters - Tazi, Mom, Zhao, Nguyen, Mekkouri, Werra, and Wolf, HuggingFace
A lot of research computing teams are wrestling with helping researchers train models on multiple GPUs or multiple nodes, when both the staff and researchers are new to it. There’s a lot of technology and it’s been moving very fast!
This extremely comprehensive resource by Tazi et al goes very deeply into not just the approaches but the problems they aim to address. Maybe most remarkably, it’s illustrated by a data set of 4000 performance results varying all the different parameters and methodologies they discuss.
The material is very dense, and many readers will benefit from following links when given to other discussions of the topic - but it’s the best all-in-one overview of larger-scale training I’ve seen so far.
I’m a little late to this, but I just noticed the SimOps framework and initial certification.
I think there’s both entirely practical reasons to want increasing professionalism and standardization around best practices for running simulation workloads the way we’ve been seeing with MLOps, and reasons to want to visibly be doing something in this space to try to shed the (unfair) image we have of being the more old fashioned, less sophisticated cousins of our colleagues in cloud and AI.
I like the framing and the general idea, but haven’t had the time to go through any of the materials. Does any of it seem useful/good? (Even if it’s obvious, it can be useful and good to have this stuff promulgated somewhere as best practices and standards).
Glenn Lockwood wrote a short bit about some of his work with SimPy for discrete event simulations for modelling, e.g., HPC system reliability.
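The core of a discrete-event simulation like that is just a priority queue of timestamped events. A stdlib-only toy in that spirit - note this is the underlying idea, not SimPy’s actual API, and the node count and failure rate are invented:

```python
# A toy discrete-event simulation of node failures on a small cluster,
# illustrating the event-queue idea behind tools like SimPy.
# Uses only the stdlib; all parameters are assumed for illustration.
import heapq
import random

random.seed(42)
SIM_HOURS = 1000
MTBF = 400  # assumed mean time between failures per node, in hours

events = []  # priority queue of (time, node) failure events
for node in range(8):
    heapq.heappush(events, (random.expovariate(1 / MTBF), node))

failures = 0
while events:
    t, node = heapq.heappop(events)
    if t > SIM_HOURS:
        break
    failures += 1
    # assume instant repair; schedule this node's next failure
    heapq.heappush(events, (t + random.expovariate(1 / MTBF), node))

print(f"{failures} failures in {SIM_HOURS} simulated hours")
```

What SimPy adds over this skeleton is the process/resource abstraction (repair crews, queues for shared resources, and so on), which is exactly what makes reliability modelling of real systems tractable.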
Random
For your colleagues or users who have now got a grasp on the basics of the terminal, job control might be a good first intermediate step.
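A minimal bash sketch of those basics (the interactive keystrokes are in comments, since they only apply at a terminal):

```shell
# Start a long-running command in the background; the shell reports its PID.
sleep 30 &
pid=$!            # $! holds the PID of the most recent background job

jobs              # list background jobs and their states

# At an interactive terminal you could also:
#   Ctrl-Z    suspend the current foreground job
#   bg %1     resume job 1 in the background
#   fg %1     bring job 1 back to the foreground

kill "$pid"       # stop the background job
wait              # block until all background jobs have been reaped
echo "all jobs done"
```

The `%1`-style jobspecs and `Ctrl-Z` are the pieces people usually haven’t met; `&`, `$!`, and `wait` alone are enough to run and collect simple parallel work in scripts.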
Kind of late to this, but the Google office suite (motto: “Eh, good enough, I guess”) of docs, slides, and drawings finally supports markdown. You have to turn it on (per document?) in Tools > Preferences, after which you can “copy as Markdown”, “paste as markdown”, import/export markdown…
Speaking of markdown, here’s using Mermaid to show traces in markdown/github/gitlab/etc.
Fun project, VeriNum, looking at formally verified numerical methods/primitives - cool that it’s being done, but also a demonstration of how hard it is for even simple things like ‘parallel dot product’.
Quick overview of loaders in the context of trying a different glibc. On a separate note, I’ve been looking for ages into material that goes super deep into the whole linker and loader subsystem, LMK if you have a favourite reference.
Fun computational discussion of an extension to Fermat’s Last Theorem, Beal’s Conjecture.
The UX of lego interface panels.
The Ross Ibarra lab’s very generous repository of successful grant, fellowship, and job applications, with links to some of the very very few other similar efforts out there.
A minecraft server written in COBOL.
Conway’s game of life in a manpage with groff.
Turning R scripts and Google sheets into mobile apps or mobile-friendly webapps, respectively.
That’s it…
And that’s it for another week. If any of the above was interesting or helpful, feel free to share it wherever you think it’d be useful! And let me know what you thought, or if you have anything you’d like to share about the newsletter or stewarding and leading our teams. Just email me, or reply to this newsletter if you get it in your inbox.
Have a great weekend, and good luck in the coming week with your research computing team,
Jonathan
About This Newsletter
Research computing - the intertwined streams of software development, systems, data management and analysis - is much more than technology. It’s teams, it’s communities, it’s product management - it’s people. It’s also one of the most important ways we can be supporting science, scholarship, and R&D today.
So research computing teams are too important to research to be managed poorly. But no one teaches us how to be effective managers and leaders in academia. We have an advantage, though - working in research collaborations has taught us the advanced management skills, but not the basics.
This newsletter focusses on providing new and experienced research computing and data managers the tools they need to be good managers without the stress, and to help their teams achieve great results and grow their careers. All original material shared in this newsletter is licensed under CC BY-NC 4.0. Others’ material referred to in the newsletter are copyright the respective owners.
Jobs Leading Research Computing Teams
This week’s new-listing highlights are below in the email edition; the full listing of 381 jobs is, as ever, available on the job board.
Senior Research Software Development Manager, Lewis-Sigler Institute for Integrative Genomics - Princeton University, Princeton NJ USA
The Bioinformatics/Research Software Development group in the Lewis-Sigler Institute for Integrative Genomics (LSI) at Princeton University is hiring a Senior Research Software Development Manager, reporting to the Assistant Director of the Lewis-Sigler Institute for Integrative Genomics. The Bioinformatics/Research Software Development group provides computational research expertise to the faculty, staff, and students in the Lewis-Sigler Institute for Integrative Genomics. The Senior Research Software Development Manager will lead a team of four Research Software Engineers, Bioinformatics specialists, and Database developers, who provide dedicated expertise to researchers to create efficient, scalable, and sustainable research software and services that enable advancements in research. The Senior Research Software Development Manager will have the opportunity to work directly with researchers and faculty to create tailored solutions for a diverse range of projects such as sequence assembly, single cell RNA-Seq, mass spectrometry, and microscopy image analysis
Program Manager, Precision Children's Health Sequencing - The Hospital for Sick Children, Toronto ON CA
SickKids is looking for a dynamic Program Manager with expertise in overseeing large-scale human genomics cohort studies. The program manager will provide day to day oversight of a new study called "Precision Child Health - comprehensive sequencing for childhood life-long disorders" (PCHSeq) that aims to generate short-read genome sequencing (GS) on DNA samples from ~10,000 participants, including family members, with a diversity of childhood-onset disorders, coupled with long-read sequencing (LRS) in a subset of ~1,000 "genotype-elusive" participants. Embedded within the Translational Genomics (TG) Node of PCH, this position will play an active role in coordinating participant recruitment activities, advancing knowledge exchange and data sharing initiatives and ensuring seamless integration of these initiatives into care pathways at SickKids. The successful candidate will have the opportunity to work with leading researchers in the area of genomics, and with multidisciplinary healthcare providers from across a broad range of specialties, to improve outcomes for children.
Lead High Performance Computing Engineer - George Washington University, Ashburn VA or Foggy Bottom DC USA
As a Lead High Performance Computing (HPC) Engineer, you will be responsible for designing, implementing, and maintaining high-performance computing systems to meet the computational needs of the RTS. In collaboration with other high-performance computing (HPC) engineers, this senior position is accountable for the operations of multiple HPC systems and contributes to the strategic planning for next-generation services aligned with High Performance Computing services. As part of an advanced team of engineers, this role works closely with the GW research community to define and deliver HPC, related advanced compute and storage infrastructure to support the rapidly evolving research needs. The lead HPC Engineer also engages directly on research projects to understand and consult on the best options available as well as serving as the highest tier of support escalation for operational issues. This position develops and conducts advanced training and mentors other engineers on the team to enhance the interdisciplinary capabilities of the research technology support organization.
US Quantum Science Lead - Riverlane, Boston MA USA
We have a unique opportunity for a proven quantum error correction expert to lead Riverlane’s growing quantum science team in Boston, US. Currently a team of 4 scientists working on cutting-edge quantum error correction research (in collaboration with our UK-based teams), our US quantum science team also works alongside our newly established hardware engineering team in Boston. With plans to further expand our Boston team, this is a fantastic opportunity to grow and strengthen our high-performing US team.
Flow Facility Manager - University of Edinburgh, Edinburgh UK
The SBS flow cytometry facility is a core facility with expertise in research and teaching and a large, diverse user base across biology. We have both conventional and spectral flow cytometers (Fortessa, Aria, Sony ID7000 analysers; an Aria and a recently purchased BD Discover S8 spectral & imaging sorter). We are looking to recruit a facility manager to lead the flow facility: driving best quality experimental design and data acquisition; supporting the cytometers, the users and the wider school community; and contributing to plans for long-term growth and sustainability.
Director of Cloud, HPC, and Web Services (Hybrid Eligible) - University of Pennsylvania, Philadelphia PA USA
The Director of Cloud, HPC, and Web Services at Penn Arts and Sciences has a strong technical background and demonstrated experience leading IT teams in managing, innovating, and implementing technical services and solutions, preferably in a higher education environment. This role provides strategic direction and functional leadership for three critical infrastructure services for Penn’s School of Arts and Sciences: cloud Linux services (primarily AWS) supporting research, teaching, and administrative applications; on-premises Linux high performance computing (HPC) supporting cutting-edge research; and cloud-based Web Services supporting over 100 School websites. The position is hybrid eligible, with 2-3 days per week required on campus or in our data center.
Research Facility Manager - McMaster Institute for Music and the Mind - McMaster University, Hamilton ON CA
The Facility Manager will manage the day-to-day operations of the LIVELab Core Research Facility to ensure that the research, artistic performance, and outreach excellence mandates of the facility can be realized. This work will involve communicating and coordinating with the technical staff, researchers, students, and volunteers, as well as liaising with performing artists, arts organizations, the public, and donors. The Facility Manager will also be responsible for event organization and project management, as well as billing, financial transactions, financial reports, and assisting with financial planning. The Facility Manager will report to the Director, McMaster Institute for Music and the Mind (MIMM); dotted line reporting to Associate Dean, Research & External Relations.
Software Development Manager - Compilation - Xanadu, Toronto ON CA
Xanadu is looking for an experienced Software Development Manager to lead our Quantum Compilation team. The team is developing JIT and AOT hybrid compilation pipelines for PennyLane, an open-source library for quantum machine learning, quantum computing, and quantum chemistry. Although quantum software development experience is not required for the role, a strong compilation background is strongly preferred, as is experience with the MLIR and LLVM core libraries.
Principal Research Data Steward - University College London, London UK
The UCL Centre for Advanced Research Computing (ARC) is UCL’s new institute for infrastructure and innovation in digital research - the supercomputers, datasets, software and people that make computational science and digital scholarship possible. University College London (UCL) is seeking an experienced research data governance manager to lead the development and provision of services to support researchers working with sensitive and confidential information. The postholder will be based in the Centre for Advanced Research Computing (ARC), and work with researchers and data spanning all academic disciplines.
Operations Manager, Canadian Astronomy Data Centre - National Research Council, Victoria or Penticton BC CA
CADC is responsible for distributing to science users the observational data from Canada’s suite of ground and space-based observatories, providing advanced processing of the raw observations from those observatories and preserving those observations for use by future generations of astronomers. In 2024 the Canadian astronomy community joined the SKA Telescope project, which will become the world's largest astronomical radio telescope. The SKA Regional Centre Network (SRCNet) is building out a cloud computing infrastructure that will interact with over a dozen cloud sites around the world. The CADC is leading Canada’s contribution to SRCNet, which will be developed as part of CANFAR (Canadian Advanced Network For Astronomy Research). As the manager of the CADC Operations Team, you will be responsible for managing the team and ensuring the successful operation of CADC and CANFAR services. You will implement efficient team management, clear workflow prioritization, and adherence to operational standards. Your responsibilities include deploying and monitoring the web service layer, managing staffing processes, and supporting staff development.
Executive Director of the EPFL Center for Imaging - EPFL, Lausanne CH
The EPFL Center for Imaging serves as a hub for state-of-the-art technologies in imaging science across research domains. The only one of its kind in Switzerland, the center pools the expertise of over fifty laboratories distributed across the five EPFL schools. Its mission is to facilitate cross-disciplinary research in the field and anchor EPFL’s position as a world-leading institution in imaging science. We are looking for an expert in imaging science with management experience in academia to serve as the Executive Director of the EPFL Center for Imaging (https://imaging.epfl.ch/). In this role, you will oversee the Center’s operations, guide its strategic development, and expand its impact on the national and international stage. Reporting to the Academic Director and the Steering Committee, you will be responsible for consolidating the Center’s position as a leader in imaging science, launching new initiatives, building industrial collaborations, and expanding institutional partnerships in imaging.
Senior Product Manager - AI for Science - Microsoft, Redmond WA USA
As a Senior Product Manager – AI for Science, you will play an integral role in defining and executing our vision for AI-driven science platform development. In this role, you will work with cross-functional teams of product managers, engineers, designers, scientists, and researchers to build new product experiences, shape underlying AI capabilities that support these experiences, and foster a culture of innovation and high-craft product making.
Technical Project Manager (Functional Genetics) - Liverpool School of Tropical Medicine, Liverpool UK
Liverpool School of Tropical Medicine (LSTM) is seeking a Technical Project Manager (Functional Genetics) to support and lead groundbreaking research in mosquito genetics and vector control strategies. Working closely with the Head of Department for Vector Biology, Professor Tony Nolan, the successful candidate will play a key role in the operational and scientific management of the research group, ensuring the smooth running of projects and supporting the group’s strategic goals.
Quantum Research Manager - IBM, Hursley UK
IBM Quantum is an industry-first initiative to build universal quantum computers for business, engineering and science. This effort includes advancing the entire quantum computing technology stack and exploring applications to make quantum computing broadly usable and accessible. We seek a Senior Research Scientist with an outstanding track record in quantum computing to join IBM Research UK and to manage and develop a growing team of researchers. Experience with applications to quantum simulation and/or quantum chemistry is desirable.
Director of Bioinformatics, Germline - Fulgent, El Monte CA USA
Founded in 2011, Fulgent has evolved into a premier, full-service genomic testing company built around a foundational technology platform.
The Director of Bioinformatics leads a team of experts in bioinformatics and biostatistics in planning, designing, and implementing bioinformatics infrastructure to address clinical diagnostics production, R&D, and client needs. The Director formulates strategic plans for bioinformatics solutions, working closely with other team leads and executive leadership. The Director serves as the expert and champion of bioinformatics, strongly emphasizing innovation and development of best-in-class bioinformatics solutions.
Director, Bioinformatics - Sanofi, Cambridge MA USA
The Computational Biology Cluster is part of the Precision Medicine & Computational Biology (PMCB) global research function at Sanofi. We are looking for a leader in Computational Biology with deep expertise in building bioinformatics solutions, software, data and analytics workflows. The post holder will lead the new Bioinformatics Data, Software and Pipeline Engineering team in the Computational Biology cluster and help to index and integrate new biomedical insights from biomedical big data.
High Performance Computing Group Leader, Laboratory for Laser Energetics - University of Rochester, Rochester NY USA
Are you ready to lead the future of high-performance computing? The University of Rochester’s Laboratory for Laser Energetics (LLE) is seeking a visionary HPC Group Leader to drive innovation and operational excellence in advanced computing environments. Join our Theory Division’s leadership team and help shape groundbreaking projects at one of the world’s premier research facilities for laser physics and computational science. Provide leadership and strategic direction for LLE’s HPC Group. Manage state-of-the-art systems like Conesus (powered by 4th Gen Intel Xeon Platinum) and Deluge, supporting groundbreaking simulations and research. Oversee R&D initiatives in HPC system software, data analytics, and storage solutions. Build and maintain strong cross-organizational relationships, ensuring alignment with institutional goals. Foster a culture of safety, innovation, and collaboration.
Staff Data Scientist, Data Science Institute - University of Chicago, Chicago IL USA
The Data Science Institute (DSI) at the University of Chicago was established to support the development of emerging efforts in data science across the University. Working in collaboration with UChicago Data Science Institute faculty and staff and The Schmidt Family Foundation 11th Hour Project (“11th Hour”), the Data Scientist will increase the data capacity of 11th Hour program staff and grantees. This role will consult on 11th Hour grantee data collection and modeling, and will pursue the long-range goals of advancing scientific discovery, developing open source resources, providing innovative technical consulting, developing and delivering workshops and training, and other activities designed to advance the research and practice of data-driven and data-intensive discovery across 11th Hour grantee areas. The Data Scientist will work with nonprofit and social impact partners to design and execute data science projects for social good across multiple verticals including climate, energy, food, agriculture, and human rights.
Machine Learning Manager - MILA, Montreal QC CA
We are seeking an experienced and dynamic Manager to lead a team of applied researchers within Mila’s Applied Machine Learning Research Team. In this pivotal role, you will oversee the planning, execution, and delivery of cutting-edge research projects, ensuring alignment with Mila’s strategic goals. You will be responsible for managing and mentoring diverse teams of researchers, fostering a collaborative and innovative environment, and facilitating effective communication between researchers and external partners. Your leadership will be crucial in driving the application of machine learning solutions to real-world challenges, advancing both scientific knowledge and industry practices.
Clinical Research Core Manager - University of British Columbia, Vancouver BC CA
The Clinical Research Core Manager oversees the complex research, administrative, and operational aspects of large-scale, collaborative clinical research projects for the Division of Respiratory Medicine at the Centre for Heart Lung Innovation (HLI), St. Paul’s Hospital, UBC. This role involves managing a team of project managers and study coordinators, ensuring the successful execution of study recruitment, data collection, and data management. The individual will be accountable for the execution of study protocols within the allocated budget, personnel resources and timelines. The individual will provide guidance to colleagues to ensure the successful achievement of study goals and milestones and maintain a high level of current knowledge in security, privacy and ethical arenas. Additionally, the Clinical Research Core Manager is responsible for the oversight of core finances, budgets, and the management of study personnel appointments.
Senior Director, Analyst - High Performance Computing, AI Supercomputing, Quantum Computing - Gartner, Remote USA or CA
As an analyst, you will bring Gartner's research, best practices, and tools to life across a broad range of world class clients. Your focus will be on both the pragmatic research that assists a client in solving today’s problems, and provocative research that challenges a client’s assumptions on how the future will play out. Key areas of coverage will be on emerging technologies that will allow Infrastructure & Operations organizations to innovate using new technologies and lay the foundation for continued success and growth. Knowledge (and intellectual curiosity) in areas such as emerging AI technologies, quantum technologies (computing, communications & sensing), HPC (supercomputing) and smart robotics is important. You will apply your broad knowledge of technologies, vendors, processes, and best practices to provide actionable insights to our clients.
High Performance Computing Cluster Manager, Atmospheric Sciences - Hampton University, Hampton VA USA
The Hampton University High Performance Computing (HPC) Clusters Manager handles the development of HPC clusters, cloud implementation, data storage and transfer services for fundamental, sensitive, and secure computing systems. The Manager regularly engages with Academic units, and research leadership on institutional-wide strategy and investments, and builds and maintains relationships with the faculty and the various stakeholders. The HPC Cluster Manager must give great attention to detail and accuracy, be able to prioritize a diverse set of tasks, have excellent oral and written communication skills, and conduct him/herself in a professional manner.
Senior Data Lead, UK Human Functional Genomics Initiative - University of Exeter, Exeter UK
We are seeking a Senior Data Lead for the UK Human Functional Genomics Initiative. The successful applicant will work with the Director of the UK Human Functional Genomics Initiative to develop and lead the UK Functional Genomics Initiative (FGx) Data Coordination Centre (DCC). The Senior Data Lead will be responsible for providing high level oversight, leadership and management of the staff and activity in the DCC. Through close collaboration with the Director, the FGx Executive Board and external partners, they will oversee a team tasked with building and testing bespoke applications for data management and analysis, and develop and maintain robust data management processes for the FGx initiative, maximising the integrity, quality and completeness of the data collected. The post-holder will also have a key leadership and strategic role, as part of the FGx Executive Group, providing senior business and data management expertise and input to drive development of innovative and reliable information systems solutions to underpin delivery of the FGx initiative portfolio and its overall business operation. The Senior Data Lead will oversee the management of data from a wide portfolio of research projects and may be required to develop expertise in new areas.
Principal Software Scientist - Diamond Light Source, Harwell UK
Software helps to enable the world-leading science performed at Diamond. Our software systems facilitate operations at all levels, including the low-level control of synchrotron and beamline hardware, the planning, execution and monitoring of experiments, data archiving and retrieval, data processing and visualisation, the application for beamtime, and capture of remote experiment plans and samples. Our software engineers work alongside our scientists to develop innovative and robust solutions to keep Diamond at the forefront of scientific research.
Research Computing Manager, Department of Orthopaedics - University of Oxford, Oxford UK
Data science is an essential part of the world-leading scientific research taking place at the Kennedy Institute of Rheumatology (KIR). Our researchers are using high performance computing (HPC) to analyse genomic, genetic and imaging data, to perform mathematical modelling as well as for translational informatics and digital pathology. In this role, you will join the KIR’s data science team to take responsibility for enabling scientists to use the HPC solutions that they need to conduct their research. This will involve working closely with students, post-doctoral researchers and group leaders to assess needs, provide support and training and to lead deployment and optimisation of HPC solutions, software and bioinformatics pipelines.
Director of HPC Services, EPCC - University of Edinburgh, Edinburgh UK
EPCC is a critical enabler of High-Performance Computing (HPC) and Artificial Intelligence (AI) and Data Science Services provision in the UK. We manage and support the delivery of the UK’s national supercomputing services that contribute to the UK being a world leader in science and innovation. The Director of HPC Services provides high level and strategic leadership as well as hands-on management, working with stakeholders to grow the international reputation and standing of EPCC and The University of Edinburgh. The ideal candidate will have the credibility to work with a broad range of external stakeholders and the experience to deliver exceptional HPC services at one of the world’s leading supercomputing centres.
Executive Director, Research Computing & Data Engagement - Clemson University, Clemson SC USA
Clemson University is seeking an Executive Director, Research Computing and Data Engagement. The Executive Director will lead the humanware support for advanced computing and data science services provided by Clemson Computing and Information Technology. Reporting to the Associate Vice President for Research Computing & Data (RCD), the Executive Director actively engages in outreach across Clemson to identify potential new users of advanced computing resources (including on premises HPC, OSG, ACCESS, NRP and cloud services). The Executive Director for Engagement works closely with the RCD Executive Director for Infrastructure, who is responsible for the design and operation of Clemson’s 3 petaflop Palmetto2 high performance computing cluster, the 5 PB Indigo high performance storage, and other supporting technologies.