Our AMA (Ask Managers Anything) question last week was:
What tools do you use to keep yourself organized? As a research software developer I had two or three tickets on a JIRA board to keep track of; now, as a software development manager, I have what feels like dozens of things to be on top of.
We got a number of answers:
OmniFocus 3.9.1 on Mac, OF 2 on iPhone and iPad (their calendar event and todo display in OF3 for iOS/iPadOS is too broken to contemplate using); Full Focus Planner (paper) from Michael Hyatt. Every time I try to use anything for project/task management that doesn’t include context views (a la GTD), it fails and I end up working on a single project for a long time rather than all the tasks that require a given tool or location.
I’m glad to hear you’re starting to play with Trello for organization, it’s helped me a lot to be able to put tasks in various buckets and move them as needed. I like to have “this quarter”, “this month”, “this week” and “today” columns, and that helps me keep an eye on medium term goals along with day-to-day work.
I started using a bullet journal and I love having everything in one place - just have to keep your indexes updated!
I also got some thoughtful pushback on my take on Manager READMEs, suggesting that my main reason for not liking them was the wrong one, and I think I’m convinced:
I agree that they’re controversial, but I disagree that we’re paid to adapt to the team members’ communication preferences. […] We’re paid, as managers, to deliver results and to retain (and grow) competent staff. If a manager’s preference is for reading (JFK) rather than listening (LBJ) or talking/writing (Churchill), then having a bunch of direct reports talking at her/him without any write-up is ineffective. If the manager is a listener, a stack of reports, no matter how brief, will be ignored.
My problem with Manager READMEs is that they serve as unchanging documentation of how a new staff member will understand to communicate: “But the README when I was hired in 2014 said…” The README cannot address every situation that a manager and staff member will experience, and so is incomplete while looking like LAW to the staff member. This is why I don’t have one. I will tell all my staff that I prefer reading to listening, with an opportunity for follow-up discussion, that I will not email them outside of work hours and expect responses before their next shift, and so on. By its written nature, a README constricts, rather than explains, how the manager-report relationship forms.
I like this approach: the reason to write up these ideas is for your own clarification and to share with team members verbally, not as a standing document.
We haven’t gotten many new questions submitted to the AMA, so we’re down to our last two. The current top question is:
Are you doing anything to maintain team morale during this challenging time? It's definitely hard on some of my directs.
I don’t have any good answers to this one. I’ve been trying, but being extra diligent about celebrating wins, thanking people, being flexible with people as they go through problems, and listening particularly closely during one-on-ones is all I’ve got, and I’m not sure it’s enough. Our team and others have tried general team-togetherness things - virtual get-togethers - and they weren’t wildly successful.
What have you been doing to maintain team members’ morale? Please send in your responses by email (just reply), and contribute new questions for the community at http://ama.researchcomputingteams.org !
And now on to the link roundup.
Why Minimize Management Decision Time - Johanna Rothman
I’ve mentioned before that my wife, trained as an emergency-room nurse, and I, trained as an academic, have very different default approaches to decision making under uncertainty. I fall more under the “I’ll just do a quick literature review and read these two books first” school. She has what Google calls “a bias towards action”. Both are perfectly good approaches in the right context.
Unfortunately, as a manager, my default stretching out of decision making - and savouring every minute of it - isn’t great for the team, for a number of reasons. Many of us brought up on the research side of research computing likely suffer from the same problem. As Rothman’s article points out, it has real costs for the team:
The last is subtle but real - one can get too wrapped up in the analysis and lose sight of the immediate problem at hand.
This article discusses the “why”s of minimizing decision making time. As a “how”, I’m trying to give myself strict time limits for decisions, with shorter time limits for decisions that can easily be undone, and longer limits for those that can’t.
Most of us are going to be working from home for a good long time. I have a meeting scheduled this week where one of the topics of discussion is whether we’re ever going back to the office, and whether we should just commit to fully remote work and allow the hiring of people from as far away as institutional HR will allow (we found out last week that does not extend out-of-country). And if we’re going to be doing this for a while, ergonomics matters. Andrew Helwer walks through his new setup, including where he was frugal and what he sprang for.
How to Stop Remote Work from Stealing Your Life - Karin Hurt and David Dye, Let’s Grow Leaders
Besides thinking about ergonomics for the long haul, we also need to think about our routines so that, as managers, we don’t let remote work overtake us completely. The article talks about the power of rituals, which many of us have read about already, in setting boundaries between work and home. The other two points - forcing ourselves out of crisis mode and embracing experiments - are really good ones.
Many of us have been in crisis mode one way or another for a while, and it’s easy to let that become the norm. We can’t let that happen, for ourselves or our team members. The article provides a couple of ways to shake that off.
Embracing experiments, which we often do at work but maybe less often in our own lives, is also a great idea. Try lots of things, one at a time, that seem like they might improve work-life balance and boundary-setting; routinely evaluate them, and drop any that aren’t working after some decent amount of time.
This is an article about reviewing your own time and accomplishments, rather than your team members’. There are concrete steps for doing a GTD-style weekly review, where you look back on the week to assess how well you met your goals, and for reviews at higher and higher levels: monthly, quarterly, and annual.
This is the blog of a product, and of course they point out how useful their product would be for such an activity, but it’s still a useful article. I do something like the weekly review, mainly for sharing updates at my boss’s weekly staff meeting, but I must admit I don’t really track it against weekly goals, and I don’t do the higher-level monthly, quarterly, or annual reviews in any systematic way. Does anyone else? Has it been helpful?
This isn’t about research computing, but I think it has some lessons for those of us working with research communities, and especially those of us running computing platforms for researchers.
Yegge, who’s best known for a different rant about Google and platforms, talks about Google’s aggressive policy of deprecating old versions of services. This makes life much easier for the Google Cloud platform team - it lets them keep their environment clean, supporting only the most recent versions of things - but for those using the platform, it means constantly doing work just to keep things working.
But you see the difference here. Backwards compatibility comes with a steep cost, and [one platform] has chosen to bear the burden of that cost, whereas Google insists that [the user] bear that burden.
And I think this is worth thinking about when we’re supporting researchers. We as research computing staff are much better placed to bear computational and technical burden, but I think too often there’s a temptation to keep our stuff nice and tidy and simple by outsourcing the burden of keeping things working to researchers, postdocs, and trainees.
But the thing is, every single [user] has choices. And if you make them rewrite their code enough times, some of those other choices are going to start looking mighty appealing.
PACER – upscaling Australian researchers in the new era of supercomputing - Aditi Subramanya, Pawsey Supercomputing Centre
Pawsey Announces PACER Program to Prepare Australian Researchers for the New Era of Supercomputing - HPCWire
I missed this when it first came out in July. In preparation for a major upgrade of their computing facilities, Pawsey in Perth, Australia is setting up a user-engagement program of a type I’ve seen a few times before (most closely with SOSCIP here in Ontario), where the program funds PhD studentships and postdoc positions embedded within the research group but also reporting to the Centre. There are pros and cons to these sorts of approaches: they build capacity by focussing on developing people who have both disciplinary domain knowledge and computational skills, but they’re a bit “meh” on researcher engagement, basically outsourcing that engagement to these specific people.
What other types of approaches have you seen for this sort of interdisciplinary skills development and engagement? Have you seen anything that works worse, or better?
Vale is an open-source linter for text (maybe only English) that works on Mac, Linux, or Windows and allows you to specify house styles for text. If you have a large library of documentation for your community and you want to ensure it shares a common style for readability, this could be really helpful.
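To give a feel for it, a house-style setup is just a small config file plus a directory of rule files; below is a minimal sketch of a config (the styles directory and the HouseStyle style name are illustrative choices, not defaults):

```ini
# .vale.ini - minimal configuration sketch
StylesPath = styles          ; directory containing your style rule files
MinAlertLevel = suggestion   ; surface suggestions as well as warnings and errors

[*.md]
BasedOnStyles = Vale, HouseStyle   ; built-in checks plus your own house style
```

Running vale over your documentation directory then lints every matching file against those rules.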
On the documentation side, the Divio documentation system is a nice set of pages laying out a systematic way to think about various kinds of documentation - tutorials, explanation, reference, and how-to guides - as documents meeting different needs along two different axes: useful when studying vs when working, and providing practical vs theoretical knowledge. For our systems, our software, our curated data sets - really, all the outputs of research computing - we need all of these kinds of documentation, and it’s not enough to provide one kind and not the others.
It looks like a lot of pages but it’s a relatively short read, and if you feel like you need a more systematic approach to some of your product’s documentation, I think it’s worthwhile.
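The two axes are easiest to see laid out as a grid, with each of the four kinds of documentation serving one quadrant:

```
                   useful when studying    useful when working
practical steps    tutorials               how-to guides
theoretical        explanation             reference
```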
Backstabber’s Knife Collection: A Review of Open Source Software Supply Chain Attacks - Marc Ohm, Henrik Plate, Arnold Sykosch, and Michael Meier
Having libraries of contributed packages to build on greatly increases productivity, but inevitably there are actors who take advantage of such open development. Drawing on npm, PyPI, Maven Central, and RubyGems, this paper presents a dataset of 174 malicious packages used in real-world attacks on open source software, along with some simple analysis of how they were used. Tools are getting better at spotting security issues in dependencies, and reporting has gotten much better, but for security-sensitive applications it’s still something one has to watch out for.
Why write ADRs [Architecture Decision Records] - Eli Perkins, GitHub blog
We’ve written before on the importance of recording the whys of architecture decisions. Even the best self-documenting code or infrastructure can only describe how it works, which is different from why it was implemented this way rather than another. Without that context, it’s very difficult to know, when something changes, whether the architecture should be reconsidered. Perkins does a good job, in a short article, of describing three good classes of reasons to write them.
This is the computing equivalent of a standard project management decision log, which is used for the same reason in more general project management contexts.
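If you haven’t seen one, an ADR is typically just a short dated file per decision, kept alongside the code; the fields below are a common convention (the specific decision shown is a made-up example, not from the article):

```markdown
# ADR 7: Use object storage for raw instrument data

Date: 2020-08-28
Status: Accepted

## Context
The constraints and forces in play - scale, cost, team skills,
deadlines - when the question came up.

## Decision
What we decided to do, in full sentences, with the why.

## Consequences
What gets easier and what gets harder as a result, including
the downsides we knowingly accepted.
```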
There’s a pendulum swinging back and forth in almost all things in tech and other rapidly evolving fields, representing real tensions whose strengths and constraints shift over time. I enjoyed this short article on building a recommendation engine within Postgres using Python and stored functions. After years of databases often being used as little more than fast CSV files by application developers (because development tools were improving so quickly), I suspect the computational and data-movement advantages of pushing more computation back into the database will spawn more tutorials like this.
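As a toy illustration of the kind of computation being pushed back into the database, here’s the core of an item-item recommender - cosine similarity between items’ rating vectors - in plain Python; in the article’s setup this logic would live inside a stored function (something like CREATE FUNCTION ... LANGUAGE plpython3u) fed from a ratings table, but the function and field names here are mine, not the article’s:

```python
from collections import defaultdict
from math import sqrt

def item_similarities(ratings):
    """Cosine similarity between items, from (user, item, rating) rows."""
    # Build each item's vector of ratings, keyed by user
    vecs = defaultdict(dict)
    for user, item, rating in ratings:
        vecs[item][user] = rating

    sims = {}
    items = sorted(vecs)
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            # Dot product over the users both items share
            dot = sum(r * vecs[b][u] for u, r in vecs[a].items() if u in vecs[b])
            norm_a = sqrt(sum(r * r for r in vecs[a].values()))
            norm_b = sqrt(sum(r * r for r in vecs[b].values()))
            if dot:
                sims[(a, b)] = dot / (norm_a * norm_b)
    return sims
```

Recommending is then just a lookup: for a user’s top-rated items, fetch their nearest neighbours. The point of doing this in a stored function is that the ratings never have to leave the database.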
NSF Computational and Data-Enabled Science and Engineering (CDS&E) - Proposals due 1 Sept - 15 Oct depending on the Directorate being applied to
For those of you at institutions in the US, this annual call is for integrated, multidisciplinary projects doing new science that requires new computing methods.
Better Scientific Software 2021 BSSw Fellowship Program - Due 30 Sept
If you’re at a US institution eligible to receive funds from the DOE, and you have ideas about improving the quality of scientific software, you or someone in your team can apply to this program for $25,000 to share your idea, methods, or tooling with the world.
Series of Online Research Software Events (SORSE) - Kickoff 2 Sept 2020 9am BST
SORSE is an ongoing series of events covering, very broadly, the world of research software development. They already have 16 events lined up (but not yet scheduled), covering software demos, discussions, talks, and workshops, including some guy giving a brief talk on being a research software manager. There’s also an ongoing call for participation with deadlines the end of every month (next deadline, 8pm BST Aug 31).
In an analogy that will probably make users of both languages furious, I’ve always thought of APL as Perl for matrices. J is an ASCII-only descendant of APL, and this article is a nice crash course in J, using one of Kenneth Iverson’s talks on APL as a kicking-off point.
AWS, continuing to go after HPC workloads, now has much cheaper HDD Storage for Amazon FSx for Lustre File Systems - it uses SSDs as cache but HDDs for storage. This gives an 80% reduction in cost albeit at a range of reductions in throughput. For many workloads, that’s a no-brainer - one way or another.
A “lockbox” (encrypted store) for ssh keys
Want to test to make sure that your postgres-backed service is robust? Noisia will serve up punishing workloads against the database.
And that’s it for another week. Let me know what you thought, or if you have anything you’d like to share about the newsletter or management. Just email me or reply to this newsletter if you get it in your inbox.
Have a great weekend, and good luck in the coming week with your research computing team,
Highlights below; full listing available on the job board.
Senior Research Software Engineer - Cambridge University, Cambridge UK
The successful candidate will have an MSc or PhD degree in Computer Engineering, Computer Science or significant relevant experience. Experience writing and maintaining high-performance application code, with significant experience of the key languages commonly used in scientific computing such as C, C++ (preferred), Fortran or Python. Experience with the frameworks used to exploit large, modern parallel computers such as MPI, OpenMP, CUDA, OpenACC or PGAS is highly desirable. Experience or knowledge in the areas of machine learning and data science would also be a plus. A key aspect of this role is your enthusiasm to develop new skills, hone your existing skills and be comfortable mentoring and supervising more junior colleagues.
Project Manager, Inclusive Data - Sightsavers, Haywards Heath UK
We are looking for an individual with the following knowledge and experience: Educated to degree or Masters level (or equivalent experience) in relevant subject such as international development, public health or policy; Extensive experience in project coordination or management within the international development sector with in-country experience being an asset. Strong understanding of the international development/not-for-profit sector. Working knowledge of social inclusion and/or disability essential. Demonstrable experience in data management and data analysis.
Chief Operating Officer - Computing and Computational Sciences - Oak Ridge National Laboratory, Oak Ridge TN US
Bachelor’s degree in a scientific or engineering discipline with at least 12 years of experience working in R&D missions and operations. 5 or more years of experience in effectively managing operational and programmatic teams to support cutting-edge R&D. Experience at a major experimental research facility or an equivalent type of facility is required. Experience successfully working across scientific computing community agencies and programs.
Research Computing Applications and Support Team Leader - University College London, London UK
The Research Computing Applications and Support team leader manages a team of Research Computing Analysts who look after the researchers’ support needs and the application stack deployed on these resources, as well as providing training and face-to-face consultancy for these services. The Research Computing Applications and Support Team Leader should have experience designing, deploying and supporting HPC services in a research-focused environment, with a solid understanding of the technologies (for example Linux, interconnects, schedulers, parallel file systems and compiler stacks) and challenges (for example communicating with researchers with a wide variety of levels of technical knowledge, and funding challenges/deadlines) involved.
Technical Program Manager, Brain Research - Google, Toronto ON CA or Montreal QC CA
Preferred qualifications: Master’s, PhD degree, or equivalent experience in Engineering, Computer Science, or other technical related field. Ability to speak and write in French fluently and idiomatically. Ability to exercise technical judgment in solving software engineering challenges. Exceptional verbal and written communication skills with the ability to interact with technical and non-technical global, cross-functional groups.
Senior Data Scientist/Manager - Cerbri AI, Toronto ON CA or Washington DC USA or Austin TX USA
The ideal candidate is adept at leveraging large data sets to find patterns and using modelling techniques to test the effectiveness of different actions. S/he must have strong experience using various data mining/data analysis methods, using a variety of data tools, building and implementing models, using/creating algorithms, creating/running simulations, and testing their real-time implications. S/he must be comfortable working with a wide range of stakeholders and functional teams, trading off design to help others.