I’ve been catching up on some reader feedback now that I’m back in the swing of newsletter things. Thank you all so much for your emails! I’ve learned a lot from our exchanges, and often shifted my position on things (or at the very least corrected how I talk about those things).
I especially appreciate pushback. Some areas where I’ve really gotten pushback in the past couple of weeks are what I’ve written about utilities vs professional service firms (#127), and strategy and strategic planning (#125, #130). Discussion about those topics has helped me realize that I’ve never really been explicit about the audience I’m prioritizing for this newsletter.
In the past I’ve worked with VPRs and those that report to them, people putting together strategic plans for national communities, and those reporting to CIOs. People in those roles are doing important work, and deserve more support than they get!
But the most urgent need in our community, and the priority for me in this newsletter, isn’t supporting those who already regularly meet with CIOs or VPRs. It’s helping the first-level managers of individual small teams, and those who aspire to have more say in the leadership of those groups. Managers and leads and aspirants who were never given any training, are still given little to no direction or support, and yet find themselves accountable for a key research support function. People trained in academia's years-long timescales, who are now having to figure out on their own how to run a team that has people depending on them weekly for firm deliverables. Leads who are figuring out how to do research project management when the research problem is still fuzzy. People managers still new to hiring, managing, and figuring out how to structure and strategize for a service organization. People now responsible for research software development groups, or data science service teams, or computing ops teams, or informatics-heavy core facilities, or even compute-and-data heavy spinouts from academic research groups.
There are thousands and thousands of people in your position. You are the absolute backbone of computational and data-enabled research support. The rubber hits the road, research projects succeed or fail, at the level of your teams and organizations. And I am routinely frustrated — outraged wouldn’t be too strong a word either — by how little support and training and resources you get.
Everyone is more than welcome to read this newsletter, of course, and give me feedback and pushback! There is lots of overlap with other communities. Some people leading small teams at new centres are the ones reporting to VPRs and CIOs. The people management topics are extremely widely applicable; we’re all just people, after all. I hope many folks from many communities read and benefit from the newsletter, even if not everything here will be relevant to their immediate needs.
I especially hope that more people in director and higher level roles, even VPRs and CIOs, read the newsletter too, if only to better appreciate the needs and challenges of the line managers and tech leads in our community.
But I can only write for one audience at a time, and this newsletter is for you.
Not forever, though. If I do my job well, you’ll outgrow this newsletter after a while. If you unsubscribe because you’ve become more confident in your skills and ability and knowledge, and don’t need this long weekly newsletter cluttering up your mailbox any more, I will count that as an enormous success for the newsletter and for the community. My hope is that then, as you become more senior, you’ll give the incoming managers and leads more direction and support than you received. Maybe you’ll even recommend this newsletter to them. And perhaps you and I could work together in different ways.
To speed that day, let’s find new ways of working together and sharing knowledge within our professional community. I’d like to start building more peer-to-peer knowledge sharing amongst the newsletter readership as it grows. We’re not quite at a critical mass here yet, but we’re getting there. We have the (so far pretty quiet) #research-computing-and-data channel at the invaluable Rands Leadership slack, but maybe we can foster other ways of sharing each other’s stories and knowledge, too. What would work best for you? I could do short interviews with other readers, and report on them here; we could send in stories; we could build another (or build on another) online community. Let me know what you think would work best (and what wouldn’t work at all)! Just hit reply, or email me at email@example.com.
And now, on to the roundup!
Routinely giving direct feedback is, I think, one of the hardest skills for many managers coming from academia and tech to master. But it’s absolutely vital. Probably everyone reading this wishes that the people they report to and are accountable to gave them more feedback. Like you, your team members and peers deserve to know what the expectations for their work are, and when they are exceeding or failing to meet those expectations. If you won’t tell them these things, if you won’t share that information, how can they possibly learn it? What will they learn instead?
Majors emphasizes the need for clarity of feedback, for giving it for the right reasons, for giving it frequently (“don’t wait for a ‘wow’ moment”), and for keeping it mostly positive.
Stainer provides a meta-model for feedback - the Centre for Creative Leadership’s SBI model that Google uses (and that Majors recommends), Manager-Tools Feedback Model, and Lara Hogan’s feedback equation all follow this basic structure pretty closely. Note that this model can and should be used to give positive reinforcing feedback, and to give it significantly more often than you give negative corrective feedback:
Then he suggests a habit of seeking opportunities to give more feedback (again, mostly positive).
Reducing Friction - C J Silverio
Silverio gives us a great article on a key role for technical leadership of a team and an organization - reducing the number of things slowing the team down unnecessarily, rather than trying to speed them up somehow.
The theme is reducing the friction in the system. Having enough process that people know how to do things, and having friction where needed (cars can’t drive without friction between the wheels and the road!) but not unnecessary aerodynamic drag coming from either overly heavyweight processes or inadequate tooling and support.
The article correctly points out that everyone agrees these are bad things, but that friction builds up over time, to the point that people might not even really see it. “That’s just the way things work here”. Continually reducing friction requires eternal vigilance.
A nice post by the Stay SaaSy team on what becoming a more senior leader entails:
You need to build a machine that repeatably produces the outputs that you owe the [organization], rather than focusing on producing the outputs themselves through personal heroics or force of will.
The post also mentions accepting a larger scope of responsibility, winning with people you don't necessarily like, and that you need to constantly search out feedback.
Related to the increase in scope, Chris Williams, who has a long career in tech leadership, posted a short and terrific description of what hearing “be more strategic” means as feedback. It’s a tiktok video — yes, your faithful Gen-X correspondent is linking to tiktok videos now, and no, don’t worry, there’s no Research Computing Teams tiktok series coming up. He has a lovely diagram of the implied increase in scope of thinking along both time and organization dimensions, which I’m stealing from shamelessly to include here.
The solutions to all our problems may be buried in PDFs that nobody reads - Christopher Ingraham, Washington Post
This is an old (2014!) article that came up on twitter and other places recently, and has an evergreen message for those of us in research and service organizations.
The key statistic that Ingraham zooms in on comes from a World Bank report on the impact of their reports. Nearly a third of over 1,500 PDF reports that were on their website for at least two years had never been downloaded, not even once.
The suggestion in the article is that the issue was the format (PDF vs web page), and doubtless that makes some difference, but there’s a bigger issue here than technology choices.
Our work does not speak for itself. Reports and successful case studies and our centre’s web resources are not moral agents that take independent action. They are not capable of communicating themselves to those on campus or in our communities who might benefit from them or learn from them.
Only people can communicate with other people. That means assembling successful case studies (say) of your centre or product or services on your web page is a crucially important first step, but it isn’t enough. Making those results known requires constantly communicating them, again and again, to your community. It’s labour intensive, and tedious, and there’s no alternative. Otherwise, the pointer to a team who could be the solution to our researchers’ problems might sit on a web page that they never read.
One of the challenges and benefits of moving so much training to a virtual format has been having to think much more carefully about how we structure the many different kinds of communications that go on in training. Q&A, support, peer-to-peer communication during group work, as well as the actual teaching part.
I’ve put off discussing this paper since the end of July because I haven’t been able to summarize it - it covers the range of communications and activity types, and how to structure them to improve learning and building a community within the context of the course. So I won’t. If you’re hosting long virtual trainings, or even want to think about how you’re going to structure long-form in person training in the future, to support both learning and DEI, this article is very thorough and thought-generating while being a quick read.
Correlates of Programmer Efficacy and Their Link to Experience: A Combined EEG and Eye-Tracking Study - Peitek et al, ESEC/FSE 2022
When we teach researchers or developers to code, we do not put nearly enough emphasis on how to read code.
We found that programmers with high efficacy read source code [in a] more targeted [way] and with lower cognitive load. Commonly used experience levels do not predict programmer efficacy well, but self-estimation and indicators of learning eagerness are fairly accurate.
In a study of a small number of people who underwent very close scrutiny (eye-tracking, electroencephalography) while working on 32 Java snippets, developers who did particularly well were consistently better at zooming in on the right part of the source code while reading it, and read the code more easily.
Other metrics that seemed to be relevant were how much time the developers spent reviewing others’ code, writing tests, and mentoring and learning. Years of programming (professional or otherwise) had very little correlation with effectiveness (I cannot emphasize this enough - get rid of your “5 years of python experience” job requirements).
Self-reported comparison with others also correlated well with performance, but I’d suggest caution here. The participants were overwhelmingly male (31/37) and we men are disproportionately likely to confidently compare ourselves favourably to others.
I also wouldn’t suggest using these results for hiring decisions - it’s one study. But it does help emphasize that if we want to help our developers grow professionally and become more effective, we might prioritize some activities that are well-motivated from many other sources as well: reviewing others’ code, writing tests, and mentoring.
Maybe I’m the last person to know about this, but the (unofficial) R Installation Manager, rig, looks amazing for juggling multiple R installations, with even a nice menu bar for Mac OS X.
The Jupyter+git problem is now solved - Jeremy Howard, fast.ai
One of my biggest complaints with Jupyter has long been that it is a terrific exploratory code environment, but had no real offramps for getting code under version control and unit tests, or building good documents, or interacting with real IDEs. In those important respects, RStudio has always really shone.
I figured those were unavoidable limitations of Jupyter and were unfixable. VSCode’s notebooks, giving some of the best of IDEs and notebooks, seemed cute but struck me as just papering over the problem. Now nbdev is starting to make me consider I might have been wrong.
Howard’s article talks about the git merge driver and Jupyter save hooks that come with nbdev2, greatly improving handling of version control with Jupyter whether there’s conflicts or not.
Why are bioinformatics workflows different? - Benjamin Siranosian
Bioinformatics and other scientific data analysis pipelines increasingly rely on workflow managers like cromwell, nextflow, snakemake, toil, and others. These tools are described in the same way and use much of the same terminology as various data engineering workflow tools (think airflow, dagster, prefect, argo…) and people from one community can be confused about the difference between the two.
This is a handy article to have in your back pocket if a bioinformatician/data engineer wonders why the workflow orchestration tools used by data engineers/bioinformaticians are so weird and ill-suited to what they need. Siranosian concisely describes the different problem regimes the two different kinds of tools are solving and why the needs are different.
What you should and (probably) shouldn’t try from SRE - Steve Smith and Ali Asad Lotia, Equal Experts
This is an older article that caught my eye - a very pragmatic approach to adopting some of the practices from Site Reliability Engineering that offer a more principled approach to providing good customer experience for services and systems, without taking on the things that aren’t feasible unless you’re a huge organization.
Their capsule summary for the two recommendations for teams of our kinds of sizes:
Is anyone using firecracker yet? I’d love to hear about it if you are.
Firecracker is a virtual machine manager which can spin up very lightweight VMs in not much longer than it takes to spin up a docker container. This allows for very strongly isolated execution of code that doesn't have to be particularly trusted.
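To give a flavour of how lightweight the setup is: each Firecracker microVM is described by a small JSON configuration naming a kernel, a root filesystem, and the machine resources. The paths and sizes below are placeholders, not from the article, but the shape follows Firecracker’s documented config format:

```json
{
  "boot-source": {
    "kernel_image_path": "vmlinux.bin",
    "boot_args": "console=ttyS0 reboot=k panic=1"
  },
  "drives": [
    {
      "drive_id": "rootfs",
      "path_on_host": "rootfs.ext4",
      "is_root_device": true,
      "is_read_only": false
    }
  ],
  "machine-config": {
    "vcpu_count": 1,
    "mem_size_mib": 128
  }
}
```

A single vCPU and 128 MiB of memory is plenty for running one untrusted snippet, which is part of why these VMs can be spun up and torn down so cheaply.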
Stanislas writes of their student project experience writing a code benchmarking service. This is very much “run untrusted code as a service”, and the untrusted code runs in firecracker VMs.
This is a fun use case, and other applications within research computing and data teams (CI/CD; coding playgrounds for public teaching) come quickly to mind.
Terrific news, everyone - there’s a Unicode-to-EBCDIC encoding, which means that we know how to type poop emojis on punched cards.
XScreensaver was released 30 years ago.
A cute postgres-in-the-browser playground for learning and testing your psql skills without setting up (and messing up) a postgres server.
Performing efficient anti-joins.
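In case anti-joins are new to you: an anti-join returns the rows of one table that have no match in another. As a minimal sketch (the table and column names here are made up for illustration), here it is with `NOT EXISTS` using Python’s built-in sqlite3, though the same SQL works in postgres:

```python
import sqlite3

# Hypothetical example: find researchers who have no compute allocation.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE researchers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE allocations (researcher_id INTEGER, hours INTEGER);
    INSERT INTO researchers VALUES (1, 'ana'), (2, 'ben'), (3, 'chen');
    INSERT INTO allocations VALUES (1, 100), (3, 250);
""")

# Anti-join: rows in researchers with no matching row in allocations.
rows = con.execute("""
    SELECT r.name
    FROM researchers r
    WHERE NOT EXISTS (
        SELECT 1 FROM allocations a WHERE a.researcher_id = r.id
    )
""").fetchall()
print(rows)  # [('ben',)]
```

The equivalent `LEFT JOIN ... WHERE a.researcher_id IS NULL` formulation gives the same result; the linked article is about which forms the query planner handles efficiently.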
And that’s it for another week. Let me know what you thought, or if you have anything you’d like to share about the newsletter or management. Just email me or reply to this newsletter if you get it in your inbox.
Have a great weekend, and good luck in the coming week with your research computing team,
Research computing - the intertwined streams of software development, systems, data management and analysis - is much more than technology. It’s teams, it’s communities, it’s product management - it’s people. It’s also one of the most important ways we can be supporting science, scholarship, and R&D today.
So research computing teams are too important to research to be managed poorly. But no one teaches us how to be effective managers and leaders in academia. We have an advantage, though - working in research collaborations has taught us the advanced management skills, just not the basics.
This newsletter focusses on providing new and experienced research computing and data managers the tools they need to be good managers without the stress, and to help their teams achieve great results and grow their careers.
This week’s new-listing highlights are below; the full listing of 214 jobs is, as ever, available on the job board.
Data Architect - University of Sydney, Sydney AU
The Data Architect acts as the subject matter expert to lead initiatives that support delivery of data to empower university staff to enable impactful teaching and research, drive sustainable performance, and deliver outstanding student experiences. This is a key role in developing strong stakeholder relationships to understand opportunities and challenges and support the University in implementing a Data Strategy that will enable data to be used to suit a broad range of purposes.
Manager, Statistics/Data Science - Pfizer, Groton CT or Lake Forest IL USA
You will design, plan and execute statistical components of plans for development and manufacturing projects. This will assist in establishing the conditions essential for determining safety, efficacy, and marketability of pharmaceutical products.
Software Development Manager, Verily Health Platforms - Verily, Kitchener/Waterloo ON CA
As a development manager, you will lead a team of software engineers in delivering one of the most critical components of Verily’s strategy, the creation of a Health Platform focused on improving patient outcomes and lowering costs.
Operations Manager, Research and High-Performance Computing Support - McMaster University, Hamilton ON CA
We are seeking candidates for the Operations Manager position within the Research and High-Performance Computing Support (RHPCS) unit. As part of the Office of the Vice-President, Research, RHPCS supports the high-performance and advanced research computing needs of McMaster's diverse research community. Reporting to the Operations Director and supported by other members of the Operations team, the Operations Manager provides day to day operational leadership of the activities within RHPCS. The Operations Manager plays an integral role within RHPCS, with responsibilities that include budget development, the management of human and physical resources, client-relationships, and financial and administrative matters.
Manager, Applied Science, AWS Braket - Amazon, Seattle WA USA
Amazon Braket is looking for Applied Scientists in quantum computing to join an exceptional team of researchers and engineers. Quantum computing is rapidly emerging from the realms of science-fiction, and our customers can the see the potential it has to address their challenges. One of our missions at AWS is to give customers access to the most innovative technology available and help them continuously reinvent their business. Quantum computing is a technology that holds promise to be transformational in many industries, and with Amazon Braket we are adding quantum computing resources to the toolkits of every researcher and developer.
Software Engineering Manager - The Kirby Institute - University of New South Wales, Sydney AU
The Kirby Institute is a world-leading health research institute at UNSW Sydney. We work to eliminate infectious diseases, globally. As the Software Engineering Manager, you will lead a team of software engineers to work on the EPIWATCH system, an open-source epidemic intelligence (OSINT) observatory, using machine learning and natural language processing.
Manager, Informatics Operations and Project Management - Vaccine R&D - Pfizer, Pearl River NY USA
The individual will partner and act as the “glue” between Senior management, Informatics Support team, Information System Architects, Software Engineers, Application Developers, Data Analysts, Business Analysts, Statisticians, and various laboratory stakeholder groups to ensure information systems are developed, maintained and available for customer use. The Informatics Project Lead will ensure efforts around project scope, project timelines, business analysis, software testing/validation (SDLC tasks), change control, customer communication, and project management are delivered in a timely fashion. Coordination and collaboration are at the center of the role, and the individual will have to juggle priorities of emergencies, “nice-to-have” requests, and broader needs across many stakeholders, partners, and technical analysts.
Engineering Manager, Scientific Applications - Benchling, Remote USA
The Scientific Modalities team builds tools and platform that help scientists do their research - like our molecular biology suite, which scientists use to analyze and work with DNA, RNA and proteins. Some of the tools help scientists better manage their data and collaborate easily replacing their spreadsheets, pen and paper workflows, while others are tools that are innovatively built to enable new forms of research in cloud by integrating with the rich Benchling platform. We're looking for an experienced manager to support the engineers on one of the Scientific Modalities teams. We're particularly excited to hear from managers with a track record of obsessing over customer needs and product quality as they build their roadmaps and guide their team's execution.
Senior Technical Project Manager (Remote) - Benchsci, Remote CA or US or UK
We are currently seeking a Project Manager to join our growing Platform Delivery Team! As part of the job, you will drive the end-to-end delivery of OKRs and projects. Reporting directly to the Manager, Platform Delivery, you will work closely with a number of stakeholders across the organization to make sure projects are successfully delivered to our customers. You’ll have multiple projects on the go at any given time, so you should be able to easily handle competing priorities. This role will support our core infrastructure team so you should have a DevOps engineering background or experience supporting core infrastructure teams within a product environment.
Group Leader - Software Services Development - Oak Ridge National Laboratory, Oak Ridge TN USA
In this role, you will lead and advocate for a diverse and talented group of software engineers, guide their technical work, and expand their ranks with new hires when needed. You will develop a vision and implement a cohesive group strategy to ensure the success of new and ongoing technical projects to position the group and the organization for future success. The Software Services Development (SSD) group writes and maintains large web applications and web APIs used by both staff and end-users of the National Center for Computational Sciences (NCCS) computational ecosystem.
Team Lead - Research Cloud and Compute, Defence Science & Technology Group - AU Department of Defence, Adelaide or Melbourne or Canberra AU
You will manage a team of cloud engineers and work to provide secure cloud and compute systems, solutions, systems support and related technology to meet researcher needs. Work closely with the Discipline Leads in the Computational & Data Intensive Sciences directorate to ensure that there is strategic service alignment to ensure that the cloud and compute service offerings meet DST Group’s strategic initiatives. You will be a highly experienced leader able to develop, deliver and sustain a complex world-class portfolio of secure compute and cloud infrastructure systems.
HPC Instruction Program Manager - Purdue University, West Lafayette IN USA
As the HPC Instruction Specialist, you will serve as the course coordinator for training and workshops delivered to faculty and graduate research assistants in HPC, Data Management, and computational programming fundamentals. You will be responsible for working closely with Faculty and Computational Scientists to design educational materials and curricula that introduce, reinforce, and explain HPC, Data Management, and computational programming fundamentals. You will research current trends and in collaboration with Faculty and Computational Scientists, incorporate findings into the development of curriculum goals and pedagogical methods. In this position you will work with domain specific departments to develop for credit HPC courses and plans of study. Consult with faculty and staff to ensure the accessibility of course materials for students with disabilities. You will assess and evaluate course effectiveness, measure student learning outcomes, and make recommendations for continuous improvement. Some workshops will be provided for a national audience via XSEDE.
Machine Learning Project Manager - Industry - Alberta Machine Intelligence Institute, Edmonton AB CA
The Project Manager will utilize their exceptional business and project management skills to build great relationships with Amii’s clients. The Project Manager will also contribute to program and product development and lead on delivery of those programs and products with clients. In this role, the Project Manager will bridge the gap between business and technical expertise while focusing on planning and executing client engagements. The Project Manager is responsible for clear communication with the team and stakeholders on the progress and performance of projects to ensure timely and successful product delivery. Capitalizing on the knowledge and expertise of cross-functional teams at Amii, the Project Manager will empower clients with the knowledge and skills to advance their business in the spectrum of AI adoption.
Manager, Research Platforms - University of British Columbia, Vancouver BC CA
The Manager, Research Platforms collaborates as part of the ARC leadership team to develop overall strategy with a focus on supporting advanced research computing projects, data management needs of the researchers, and digital research infrastructure (DRI) consultations system-wide. DRI refers to the physical infrastructure (such as network and advanced research computing), data management (such as advanced research software, research data web portals, and research data digital platforms), and people needed to support computationally- and data-intensive research.