Research Computing Teams - Routine Feedback and Link Roundup, 19 Mar 2021
Hi, everyone:
This week I want to talk a little bit about giving routine feedback to team members; or you can skip to the roundup.
I mentioned last time that I find thinking about team members’ performance in terms of expectations clarifying. That’s more obvious when talking about longer-term goal setting or performance reviews - here are our expectations for the next quarter/year, and then those expectations were met, or not - but I think it’s especially useful when thinking about more immediate feedback on individual tasks.
Let’s first talk about team members meeting or exceeding your expectations when performing a task. As a rule we don’t spend enough time thinking about and handling this case. Meeting your expectations is something most of your team members will do most of the time. And yet it still absolutely merits being routinely called out; it is important for you to routinely give positive feedback for meeting or exceeding expectations.
One objection sometimes heard to the above: “Isn’t meeting expectations just the job? The default? Why take the time to call that out?” Heavens; can you imagine if for the next month everyone around you - peers, team members, your manager - met all your expectations on everything? Or exceeded them? What would that feel like? If that month would be significantly better than how most of your months go, isn’t it worth trying to nudge things towards that goal?
A second objection sometimes heard: “But they obviously know what the expectation was; they met it. Confirming that is a waste of everyone’s time.” Oh my, no. Both parts of that can be incorrect.
The first sentence isn’t necessarily true; they don’t necessarily know which expectations they met, or which ones were particularly important here. They did something, and presumably they feel like they did a good job, but they don’t necessarily know why you think it’s a good job - what was particularly important to you about how they performed the work. Consider a small task like a routine code review: it could have been the review’s thoroughness, timeliness, consideration for the level of the reviewee, a combination of the three, or something else that was important to you. Or how about something more multilayered like a presentation - what was it about the presentation that met an important expectation: the brevity, the clarity, making a convincing case?
The second sentence definitely isn’t correct. As we saw in an HBR article in #64, there’s essentially no realistic frequency of telling people things like “good job” that is too much. It’s easy for you and it makes a real difference to the other person.
Whether they know they met an expectation or not, if we want them to continue working in such a way to meet that expectation in the future, it’s on us to communicate that. And we need to communicate the expectation with enough specificity that they know what they did and have a sense of how to do it again.
Because that’s the important thing about communicating expectations - it’s about pointing coworkers in the right direction for the future.
The considerations are exactly the same for times that someone’s performance on a task doesn’t meet your expectations. Your team member, or peer - or even, honestly, your boss - generally wants to meet your expectations. (How much they want to meet your expectations will of course vary depending on their relationship with you.) They don’t necessarily know if they met every expectation on a task. They know it took them two weeks to do the code review, but they don’t necessarily know that two weeks was too long in this case. They know that their presentation had twenty slides on background and four slides on proposed next steps, but they don’t necessarily know that this was the wrong balance for this talk, or why. You have context they might not have. Sometimes you’ve explicitly communicated the expectations they didn’t meet ahead of time, but it’s common that you haven’t. Either way, it’s worth communicating about.
The goal in the case of not meeting expectations is exactly the same when communicating those expectations. It’s about pointing them in the right direction for the future. And it’s your duty to do that pointing, that nudging. Otherwise, you are choosing to let them continue to fail to meet that expectation.
How would you like it if your boss let you fail to meet one of their expectations again and again but simply withheld that from you because they couldn’t be bothered having the conversation? If you wouldn’t like that very much - and most of us wouldn’t - then what’s the justification for treating your team member that way?
“Oh, it’s just a small thing” - that makes it worse, not better. Talk with them! The biggest part of the job of leading a team is keeping everyone pointed in the same direction, and it is much easier to give people small nudges about small things early than to wait until something big happens.
Note that everything we’ve talked about so far is just as relevant for peers as it is for people who report to you one way or another. Teams are groups of people who are accountable to each other. Mutual accountability is what distinguishes a team from a bunch of officemates. It is perfectly fair to have expectations of your peers, and for them to have expectations of you; that’s how teams work. What changes a bit with relative role is how you request expectations be met in the future.
There are a few widely used models for giving feedback. What’s important is that they are:
- Short and easily constructed enough that you can give feedback frequently,
- Focussed on observable behaviour (not inferred internal state like “attitude”, which is an interpretation of what the behaviour “means”)
- Focussed on changing or reinforcing that behaviour in the future.
The simplest model is the Situation-Behaviour-Impact (SBI) model, popularized by Google in their training materials. That might look like:
“Last sprint, when you agreed to review Sunita’s pull request by Wednesday, the review wasn’t finished until the following Tuesday; that blocked her progress for half the sprint”. Or
“When you presented the proposed plan at the project kickoff meeting, the material you presented had a really good balance of just enough relevant context and case for the plan overview. That helped ensure the discussion afterwards was well informed and not sidetracked into irrelevant details.”
The Manager-Tools model is a bit more direct and simple; they’ve tested it a few different ways in a large number of organizations, and apparently this is the way that’s worked the best (which I don’t have any trouble believing). In this model they get rid of the Situation part entirely. It shouldn’t be necessary to remind people of the situation - if it’s that far in the past, the manager has waited too long to give the feedback - and it is necessarily focussed on the past instead of the future. Instead, they bookend the behaviour and impact with two questions. The first is something along the lines of “May I give you some feedback?”; then, assuming the answer is yes, the behaviour and impact follow, and it ends with “Could you do that differently next time?” or “Thanks!” (and, implicitly, “Could you keep doing that, please?”)
So that would look like:
“May I give you some feedback? [If yes:] When you don’t follow through on code reviews by the time you said you would, other developers have their plans for the rest of the sprint disrupted and we lose time. Could you do that differently next time?”
or
“May I give you some feedback? [If yes:] When your presentations have the right background material without anything extraneous, and a good overview of next steps, that makes the whole meeting and following discussion more productive. Thanks!”
On the other hand, Raw Signal Group’s training on feedback is quite explicit that it’s about expectations; they encourage managers to begin with “My expectation is”, which has the advantage of owning the expectation explicitly.
One other difference between the models is that in the Manager-Tools model, there’s no follow-up discussion. That’s the more likely outcome when giving frequent, small feedback; the reasons and context are fairly clear. On the other hand, with SBI or the Raw Signal Group model, discussion afterwards is more common.
Personally I’m skeptical that either “most of the time” or “hardly ever” are the right frequencies to be having conversations after giving feedback. If it’s “most of the time” then I’d worry that it means that feedback is usually big or surprising, which suggests feedback’s not being given often enough; if “hardly ever” I’d worry that the manager isn’t having their expectations recalibrated often enough.
That’s more than enough on giving routine feedback for one newsletter; next issue I’ll recap and extend this discussion a bit, and start talking about goal setting.
Managing Teams
Five Questions that Will Help You Strengthen as a Decision-Maker - Art Petty
Tweet: “Every time I make myself write out…” - Cindy Alvarez
As managers, making decisions is a pretty big part of the job - priorities for the team, criteria for the next new hire, etc. It pays to spend a little time structuring our thoughts around decisions, whether they are decisions we’re making completely by ourselves, or decisions the team is making together.
Petty has five questions he suggests we use to guide ourselves or to guide discussions when a group is informing a decision:
- What problem are we trying to solve?
- Are we solving the right problem?
- What do we need to know?
- What are the assumptions we’re making about our preferred solution?
- If our preferred solution isn’t an option, what will we do?
These are good questions - they also prompted me to dig up a reference I filed away a while ago and traced back to Alvarez’s linked tweet above, which takes a different tack but might be a little crisper, depending on the context:
We are doing .....
Because we see the problem of .....
We know it’s a problem because .....
If we don’t fix it, we’ll see .....
We’ll know we’ve fixed it when we get .....
Virtual “Storming”: How to Work through Tensions with New Teams - Nobl Academy
A lot of heist movies have really clear depictions of Tuckman’s four stages of group development. Forming is the initial “the team is brought together” sequence, where a group of individuals comes together for a common (nefarious) purpose. After the initial honeymoon phase, when the hard work begins, comes the Storming phase - the individually brilliant but mismatched group initially has conflicts as they try to figure out how to work together. In the Norming phase, default solutions and workarounds to those conflicts start to emerge as “the way we get things done in this team”. And then there’s the Performing phase, where those norms and the group’s collection of individuals’ skills really start to shine, and they start to deliver in a serious way on some high-performance larceny.
The key here is that the team has to go through the storming phase for the norms established in the next phase to be effective. And storming is harder to do constructively in distributed teams.
This blog post walks through some approaches:
- Asynchronous documents about how people work best
- Facilitating debates about how to proceed when conflict does arise
- Don’t necessarily feel the need to be an impartial referee - propose other solutions and participate in the discussions, creating “an environment where healthy dissent leads to progress”
- Disagree and commit
- Be self-aware
- Hold a retro - not on the decisions made (you don’t want to be constantly relitigating things) but on the process by which the team made them
Handling the Emotional Weight of 1:1s - Lara Hogan
There’s been a lot going on this year, for us and our team members. For some, this final stretch - when the end is still distant, but in sight - may be the hardest.
Our team members can tell us a lot in their one-on-ones, either directly or indirectly. It isn’t always easy for us, especially if we ourselves are running on fumes. Our team members’ relationship with us isn’t symmetric; they can share their feelings with us in a way that’s often not appropriate for us to share with them, and they have one manager to share with while we have multiple team members to hear from.
We need to have empathy for our team members without it completely exhausting us. So we need mechanisms to establish some boundaries around that empathy.
Hogan has the following suggestions:
- Compartmentalize; focus on the team member during the one-on-one, but put those thoughts away (or on the back burner) when you need to focus on other team members or other work
- Ensure there are some gaps between one-on-ones to give you time to gather yourself and your focus again
- Don’t put too many of those meetings into one day for you
- Find your own support outside the team - peer managers, or others.
Product Management and Working with Research Communities
Batch-processing groups of reading materials (articles or book chapters) - Raul Pacheco-Vega
A few strategies to “stay on top of the literature” (more like, “catching up with the literature”) - Raul Pacheco-Vega
In research computing we often need to get up to speed in a new area relatively quickly to support a project, and then stay somewhat current in the area for the duration of the project. Honestly, that’s my favourite part of working in research computing, but it’s still a lot of work.
More than that, for people new to reading the ancient and obscure genre of writing called “the journal paper”, it’s just mystifying. A trainee or staff member who tries reading a paper in a field new to them as they would read a blog post or book is going to find it a pretty demoralizing process.
Pacheco-Vega’s first article walks through a live-tweeted example of his approach - getting a key, initial, small (5 papers) group of papers, and creating his note-taking structure. He uses a spreadsheet, with columns for:
- Citation
- Main idea
- Additional notes - specific other or supporting ideas, and his commentary on them
- Relevant cross-references
- Specific quotes, and their pages
Then he reads the papers, highlighting the main idea and its consequences in one colour, and supporting ideas of decreasing importance in other colours. He keeps the paper with his notes and highlights, and updates the spreadsheet. As he reads the papers, he may find other papers, which he then triages - by looking at the abstract, introduction, and conclusion (I’d add, the figures) - and if one seems relevant it goes into the spreadsheet for processing.
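If you keep notes in plain files rather than a spreadsheet application, the same structure translates directly. Here’s a minimal Python sketch (mine, not Pacheco-Vega’s tooling; the column and file names are just illustrative) that appends one paper’s notes to a CSV with those columns:

```python
# A minimal sketch (not Pacheco-Vega's own tooling) of keeping his
# note-taking structure in a CSV file; names here are illustrative.
import csv
import os

COLUMNS = ["citation", "main_idea", "additional_notes",
           "cross_references", "quotes_and_pages"]

def add_note(path, **fields):
    """Append one paper's notes, writing a header row if the file is new."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        if new_file:
            writer.writeheader()
        writer.writerow(fields)  # any missing columns are left blank

add_note("literature_notes.csv",
         citation="Author (2021), Journal of Examples 12:345",
         main_idea="One-sentence statement of the paper's main claim",
         additional_notes="Supporting ideas, plus my commentary on them",
         cross_references="Other papers in the sheet this builds on",
         quotes_and_pages='"A specific quote worth keeping" (p. 12)')
```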
In the second article he describes his strategy of catching up with the literature - keeping track of potentially interesting papers, ruthlessly triaging, and either reading one a day or ‘batch processing’ as above.
I’m a huge fan of reading several related papers at once - it helps me see connections between them much more easily.
Good Slides Reduce Complexity - Tom Critchlow
Critchlow’s post is about slides for presentations. Those of us who came up in research are used to presentations whose primary purpose is informative. We were teaching material as a TA in class, or we were telling other people about our research. There may have been a bit of convincing people in those talks - convincing students that the material matters, or convincing skeptical researchers that your approach is solid - but even there, the convincing is done by providing additional information: motivation for the relevance of the material, or background information that supports your analysis approach.
As you advance your career in research computing, presentation purposes change. You are no longer merely informing - your manager, or your research stakeholders, could just read an email if you wanted to inform them of something. Instead, you are assembling materials to propose and recommend a course of action. At the end of the presentation, you want the assembled group to make a decision - ideally, but not necessarily, your recommended decision.
Critchlow’s post has some important points on organizing such a presentation:
- You’re going to get through less material than you think - make everything count
- Have an executive summary, with slide titles that tell a story
- Use figures to explain, not illustrate
- Reduce complexity
(PS - if you are giving such a presentation for a decision you care about, the group presentation should not be the first time the attendees are seeing this material. Manager Tools has a great podcast episode on prewiring, and we talk about it in the “change management of stopping doing things” writeup in #58.)
Which color scale to use when visualizing data - Lisa Charlotte, Datawrapper
Manim - Animation engine for explanatory math videos - manim project
These two posts focus on a quite different form of communication - visual communication of data or math concepts.
Charlotte’s post is a deep discussion of plotting data using colour. It is a comprehensive four-part overview of the kinds of colour scale to use, and why, when plotting data. The first part, linked above, gives an overview you’ve likely seen before - categorical scales, sequential vs diverging continuous scales, and highlighting or de-emphasizing data with colour. The following three parts go into much greater detail, with terrific illustrations of the concepts.
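To make the sequential-vs-diverging distinction concrete, here’s a minimal matplotlib sketch (mine, not from the post): a sequential scale for magnitudes, and a diverging scale for data with a meaningful midpoint, with the limits set symmetrically so the neutral colour lands on zero:

```python
# A minimal sketch contrasting sequential and diverging colour scales.
import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(20, 20)  # synthetic anomalies centred on zero

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Sequential scale: appropriate for magnitudes (all values >= 0)
im1 = ax1.imshow(np.abs(data), cmap="viridis")
ax1.set_title("Sequential: magnitudes")
fig.colorbar(im1, ax=ax1)

# Diverging scale: appropriate when deviations above and below a midpoint
# both matter; symmetric vmin/vmax put the neutral colour exactly at zero
im2 = ax2.imshow(data, cmap="RdBu_r", vmin=-3, vmax=3)
ax2.set_title("Diverging: anomalies around zero")
fig.colorbar(im2, ax=ax2)

plt.tight_layout()
plt.show()
```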
manim is a community fork and extension of the mathematical animation package developed for the well-known 3Blue1Brown channel of mathematical videos.
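A scene in manim is just a Python class. This minimal sketch follows the community edition’s introductory examples - note that API names (like `Create`) have shifted between versions, so treat the details as illustrative:

```python
# A minimal manim (community edition) scene, following its introductory
# examples; rendered with something like `manim -pql scene.py SquareToCircle`.
from manim import Scene, Square, Circle, Create, Transform

class SquareToCircle(Scene):
    def construct(self):
        square = Square()
        circle = Circle()
        self.play(Create(square))             # animate drawing the square
        self.play(Transform(square, circle))  # then morph it into a circle
        self.wait()
```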
Research Software Development
The SPACE of Developer Productivity - Nicole Forsgren, Margaret-Anne Storey, Chandra Maddila, Thomas Zimmermann, Brian Houck, and Jenna Butler, ACM Queue
We’ve covered several times the challenges of measuring developer productivity, particularly individual developer productivity. Forsgren et al walk us through recent literature on the subject, disabusing us of some common myths and encouraging us to instead, as managers of developers, keep an eye on the SPACE dimensions of how well our team is doing:
- Satisfaction and well-being - employee satisfaction, developers having the tools they need, avoiding burnout
- Performance - outcomes like quality and impact
- Activity - the things done; design and coding, CI/CD, operational work like incidents
- Communication and Collaboration - how discoverable is knowledge, how quickly is work integrated, what is onboarding time like
- Efficiency and flow - The number of handoffs in a process, perceived ability to stay in flow and complete work, wait times, interruptions
The article includes a table laying out what concrete metrics for each of those dimensions might look like.
Research Software Engineering with Python - Damien Irving, Kate Hertweck, Luke Johnston, Joel Ostblom, Charlotte Wickham, and Greg Wilson
A couple years in the making (with a few other books in the series on the way), this ebook is online and soon to be published. It’s aimed at (for instance) researchers or junior staff who have successfully solved their problem with some programming in (say) Python, but want to go to the next step - putting it in version control with Git, automating some things with Make and the shell, adding some configuration and error handling, creating a package, and getting external collaborators.
The material would be useful standalone for trainees or junior staff (or maybe just staff new to Python), as well as for use in courses.
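As a flavour of the kind of incremental step the book walks through, here’s a minimal sketch (mine, not the book’s) of one of them - moving hard-coded values into a configuration file, with some basic error handling:

```python
# A minimal sketch (not the book's code) of one step it covers: moving
# hard-coded values into an INI-style config file, with error handling.
import configparser
import sys

def load_config(path):
    """Read analysis settings from an INI-style configuration file."""
    config = configparser.ConfigParser()
    if not config.read(path):
        sys.exit(f"Error: could not read config file {path!r}")
    try:
        return {
            "input_dir": config["paths"]["input_dir"],
            "threshold": config.getfloat("analysis", "threshold"),
        }
    except (KeyError, ValueError, configparser.Error) as err:
        sys.exit(f"Error: bad configuration in {path!r}: {err}")

if __name__ == "__main__":
    settings = load_config("analysis.cfg")
    print(settings)
```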
Resurrecting Fortran - Ondřej Čertík
This could have gone in Product Management/Engaging with Research Communities just as well as in software development. Because while I think Čertík’s title overstates things a bit - Fortran has been chugging along quite happily, thankyouverymuch - “Revitalizing Fortran” would be entirely fair. And it was all enabled by doing the hard but relatively straightforward work of online community building.
Fortran’s only real community for decades has been the Fortran Standards Committee, which does great work but, like all ANSI or ISO groups, is highly (if unintentionally) opaque and inaccessible. Čertík describes the community-building work of joining the committee, engaging other champions, starting a website and a semi-official incubator, and what’s emerged.
And what has emerged has been pretty remarkable in such a short time. A nascent Fortran standard library! A package manager!
Calls for Papers
15th IEEE International Conference on Networking, Architecture, and Storage (NAS 2021) - 24-26 Oct, Riverside CA USA, Papers due 9 May
Topics of this meeting which overlap with newsletter readership include:
- Accelerator-based architectures
- Big Data infrastructure and services
- Data-center scale architectures
- GPU architecture and programming
- HW/SW co-design and tradeoffs
- Non-volatile memory technologies
- Parallel and multi-core architectures
- Parallel I/O
- Software defined networking
- SSD architecture and applications
Random
It turns out that font standards are so complex that someone has implemented a game in a TrueType font.
A package for plots implemented entirely in CSS that works by styling tables.
City of London Police warn against illegal, dangerous website sci-hub.se. So, you know, don’t download any illegal, dangerous, or stolen papers from sci-hub.se or else London’s cybercrime unit will be very disappointed that you downloaded those papers - which could be dangerous or illegal! - from sci-hub.se.
A cheat sheet in numerical analysis for software developers.
How to read ARM64 assembly language.
Sandia’s open-access quantum computer is now in GA. Not sure where things are going with quantum computing, but a lot of organizations are willing to spend nontrivial amounts of money to start playing with it.
A very detailed walkthrough of how C++ resolves a function call.
A menagerie of difficult people to deal with on software projects. Tag yourself, I’m the Professor/Peacemaker/Optimist.
A look at the different mechanisms for data (change) durability, and from that a proposal of four laws of durability.
That’s it…
And that’s it for another week. Let me know what you thought, or if you have anything you’d like to share about the newsletter or management. Just email me or reply to this newsletter if you get it in your inbox.
Have a great weekend, and good luck in the coming week with your research computing team,
Jonathan
Jobs Leading Research Computing Teams
This week’s highlights below; full listing available as always on the job board.
Senior Manager, Data Science - GitHub, Remote, US or CA
Core Science: Members work on strategic priorities that leverage data to improve overarching product performance, guide experimentation for the company and help proactively level up the discipline of Data Science at GitHub via their broad purview.
Your job will be to lead Core Science within Data Science at GitHub. The Data Science team is highly distributed and the right candidate will thrive in an environment of asynchronous communication. We expect you to have excellent written communication skills and be able to create working relationships with coworkers in locations around the globe. In this role you will also lead your team, providing guidance, support, and advice.
Data Science Manager - Robert Half (Recruiter), Remote, Canada
Robert Half Technology is looking for a Data Scientist Manager to join our client’s team! Our client is a fast-growing SaaS company specialized in social media with extremely talented team members. If you’re someone with a strong technical background and the ability to apply data science to business problems, this role might be for you! This is a remote contract role to start, with strong potential to be hired on permanently. Depending on your availability, the workload will be 20-40 hours/week. You will be managing a team of two junior data scientists, providing guidance and consultation in business intelligence, data visualization, strategy, and generally supporting day-to-day operations. The responsibilities will be 60% business, 40% technical.
Senior Manager, Advanced Computing Projects - University of Minnesota, Minneapolis MN USA
This position will be responsible for the daily operations of critical research computing systems serving multiple departments throughout the University of Minnesota system. It ensures the operational efficiency of Research Computing, by leading efforts on major hardware acquisitions, developing and reviewing procedural documentation, and working with the Associate Director for Advanced Systems Operations to develop a sustainable budget. The candidate is expected to devise original solutions to complex research computing challenges involving policy and compliance requirements. They are responsible for critical analysis of security standards, and operational conformity with applicable regulatory requirements. As an integral member of the research team who is instrumental to the success of the project, they may consequently appear as co-author on peer-reviewed papers. The position operates independently with guidance and oversight from the Associate Director for Advanced Systems Operations.
Senior Software Engineer - Science and Technology Facilities Council, Didcot UK
You will join our team developing and delivering data management solutions for the scientific facilities that we support: primarily the Diamond Light Source, ISIS Neutron and Muon Source, and Central Laser Facility, all based at the Rutherford Appleton Laboratory in Oxfordshire. You will work alongside service managers and DevOps engineers to maintain, improve and further develop the data management software solutions, engaging with scientists and engineers within the scientific facilities to ensure that the services provided meet their needs and continue to evolve to meet their future needs.
Group Leader in Scientific Computing - Science and Technology Facilities Council, Didcot UK
This post involves leading and managing a team of people who develop and support the critical software that enable scientists to undertake data analysis remotely. The group was formed two years ago and in this time the service has matured and is now used in production in STFC’s ISIS neutron and muon facility. We now need a leader to help us provide a similar service to other departments and facilities. You will be leading on the strategy, oversight and governance of the Group, as well as working with the team to set the priorities. You will have experience of leading teams, communicating with customers and experience of the software development lifecycle and managing production systems.
Project Manager - Scientific Software Development - Atomic Weapons Establishment, Reading UK
This role will include: Leading the development of the UK National Data Centre for the Comprehensive Nuclear-Test-Ban Treaty. Taking the lead on the assessment of CTBT International Data Centre products, through processing, evaluating and providing feedback on those products, and the capabilities of the CTBT International Monitoring System. Developing and enhancing the accessibility of a large archive of seismo-acoustic waveform data
Project Manager, Prairie Water - University of Saskatchewan, Saskatoon SK CA
The project, soon to enter its fourth year, is working to develop 1) Prairie-specific small watershed-scale models that predict runoff, groundwater recharge and wetland state and function across the region under changing climate and land-use; 2) new assessment of groundwater resources and their sustainable management; 3) a multi-stakeholder process for how to mobilize science with communities and governance; and 4) a set of decision-support tools to help users understand the short and long-term impacts of water management decisions, and the drivers of these decisions, across the diversity of Prairie industrial, agricultural and community sectors.
This position provides scientific and logistic support to the Prairie Water team. This will involve liaising with a growing group of partners to the project, including federal and provincial government agencies, NGOs, First Nations, and community partners. Duties will include providing technical and other support to the Principal Investigators, Drs. Christopher Spence and Colin Whitfield, and to the wider team.
Research and Development Manager - Sensors & Software, Mississauga ON CA
The R&D Manager will estimate R&D efforts, plan implementations, have ownership of project and department budgets and triage product issues. They will need to be dynamic, collaborative, and curious as well as an impeccable communicator as they will be expected to collaborate with both the internal team and 3rd party development resources located in other regions of the world. Candidates must have exceptional technical and operational leadership skills, strong business and financial acumen and commercial awareness, proficient in managing change and ensuring continuous improvement, as well as sufficient technical ability to challenge and lead highly skilled engineers. They will have a keen interest in new technologies and understanding of complex engineering principles and an ability to lead and inspire diverse multidisciplinary teams at all levels of the organization.
Manager, Medical Information - Servier, Laval QC CA
Reporting directly to the Medical and Scientific Director, the Manager, Medical Information (MI) ensures the strategic management of the role of MI as part of Servier Laboratories’ pharmaceutical responsibility. In this capacity, the incumbent ensures the dissemination of efficient and high quality MI for Servier Canada’s portfolio of products. The scope of responsibility is local. He/she manages the proper dissemination of responses to unsolicited MI requests for Servier’s medicinal products and works closely with the Medical Affairs, Pharmacovigilance, Legal and Regulatory Affairs sectors to build and maintain the local MI infrastructure. He/she participates in the management of the MI team, oversees the work of the Coordinator and Specialist and acts as ambassador for the organization’s values and culture.
Chief Scientist - Autodesk, Toronto ON CA
Your responsibility will be to ensure we are effectively exploring the right science while also maximally exploiting overlaps in our various research tracks. We maintain a wide range of collaborations with academic and institutional partners that you would also manage. Today this consists of engaging with industry and academic thought-leaders as well as leveraging our Advisory Boards. You will also be instrumental in discovering and advocating for new research areas that we are not currently considering.
Project Manager, Data & Implementation Science - University Health Network, Toronto ON CA
The Data and Implementation Science Team supports impactful delivery of solutions benefiting the health system; discovers, validates, designs and delivers opportunities of change, and brings contextual awareness through data to ensure successful implementations, value and outcomes for UHN’s patients. An exciting opportunity is available to showcase your talents as a Project Manager within the Modern Endpoint Experience Program, where you will work on a variety of Digital Operations, Patient Experience and Virtual Care projects. These may include, and are not limited to: patient and clinician endpoint hardware projects, virtual care projects, operational support planning and other endpoint experience initiatives.
Faculty Computing Manager - King’s College London, London UK
The ideal candidate will have a broad background of technical experience covering infrastructure and end user technologies, in particular Linux and open-source solutions. The Faculty is growing and seeks someone with a strategic vision that allows technical innovation and scalable operations. They should be an excellent communicator able to effectively work with technically savvy academics and inexperienced users alike.