How many surveys have you skipped over or deleted in emails or web pages in the last month?
Whenever I suggest we talk to people more often, the question of surveys always comes up.
Surveys are a limited but useful tool for collecting data from our researcher clients, but they work best in very specific use cases. They should definitely be one tool in our toolbox, but they get overused, used lazily, and used for purposes where other methods would be a better fit.
As in so many areas of our work, when we’re considering gathering data from our client community, the key question to ask ourselves is: “what problem am I trying to solve?”
The trap of surveys is that they’re very little work for us. After all, it feels like a nice clean distillation of all this complex human interaction stuff into something quantitative, which we know how to deal with. We end up with a few numbers that we can put in a slide and then move on, feeling like we’ve accomplished something by outsourcing the work to our respondents.
There are absolutely particular use cases where surveys are good and helpful. But it all comes down to: what are we trying to accomplish?
Surveys work well when:

- We have a large audience we want to hear from;
- We expect a high response rate;
- The questions are simple and unambiguous; and
- We only need a point-in-time snapshot.
In general we do not have large audiences, and we definitely don’t expect high response rates. Many of the things we need to know are fairly complex and subtle, and benefit from a lot of back-and-forth discussion. And generally the things that matter most are ongoing efforts, rather than needing point-in-time snapshots.
A lot of surveys go out because they’ve always gone out. The annual report has always had a little bar chart with satisfaction numbers, say, and it would be weird to stop having it. Besides, we want to see if the number has gone up or down!
But in our contexts, we are rarely if ever going to get enough data to clearly see trends - noise and response bias dominate. Some teams ask a lot of questions and hope they’ll end up with a nice dashboard to “show what’s going on” - for most of us, that just isn’t going to work.
Besides, what do you think when some other group shows survey responses, demonstrating that users of their services who are engaged enough to respond to surveys are happy? If a picture of a plane with bullet holes in it doesn’t come to mind, you’re a more charitable person than I am.
Many of these “because we’ve always done it” surveys can be replaced much more effectively by some pull quotes from the user (and non-user) interviews described last issue (#158). Which would be more compelling to you - a table of survey results, or a few well-chosen pull quotes:
This year we had great success with [Initiative X]:
- “I’ve already accomplished three projects this year with the new X service; it’s changed what kinds of work our lab can do” - Mechanical Engineering Faculty Member
- “My trainees are extremely enthusiastic about X; one has been bringing it up as a selling point when we’re interviewing graduate students” - Physics Faculty Member
On the other hand, we continue to have trouble meeting researcher expectations with our Y services:
- “While other efforts are good, we’ve found we just can’t rely on Y and are looking for other options” - Chemistry Faculty Member
and that will be a priority in the coming year.
One of those two approaches gives much higher confidence that the team presenting them knows and cares how their research clients are experiencing their services.
One area where surveys can be very useful for us is where we’d expect response rates to be high because the recipients have participated in something that they cared about.
One common example is training courses, or other special events. People expect the “how did we do” sort of survey, and because it just happened, they were there, and they likely have opinions, they’re likely to fill it out. Here, though, the use case is funny - you have the people right there. Are you sure a survey (rather than just talking to people) is the right way to go? Maybe it is - there are advantages (anonymity, less time investment), but disadvantages (inability to ask follow-up questions), too.
For training in particular, immediate reaction to the training is just one rung on the Kirkpatrick model of evaluation. We probably also want to know how much learning occurred (e.g. with pre- and post-tests), and follow up months later to find out whether people are applying the training and what the results have been. A survey can be a great way to begin that follow-up.
Another use case is follow-up after a project. We helped a research group get some code written/do an analysis/run some big computation. A post-project satisfaction survey can make sense! On the other hand, having a retrospective with them will provide much richer information. The survey approach can be a great way to augment retrospectives if it’s sent to someone else (say, to the PI, when we worked most closely with the trainees and so included them in the retrospective), or if it’s done in a way that gives you an advantage like anonymity (sending surveys out for all engagements in a given quarter, say).
If we are going to use surveys to collect data on situations like this, where people are very engaged, or in some other context, the next question is - to what end?
Surveys are ways of measuring something, and as managers or leads we make measurements to inform decisions about what action to take. So the key question is: what actions will you take based on the survey results?
Any question on a survey where we don’t know ahead of time what we’re going to do based on the answer is a waste of our and the respondents’ time. And every extra question drops response rates! So it’s best to be very clear up front about what we want to accomplish once the answers are in.
Sometimes the thing we might want to use answers for is advocacy. “Look, 90% of our respondents say they could do more and better science if we were better funded!” That’s a good result, but is a survey the best way to make this point compellingly? Wouldn’t some very specific pull quotes from some very influential researchers be more effective? Maybe a combination of the two is even stronger; great! We just want to make sure we begin with the end in mind.
Other times, the survey questions can ask something very specific we can actually do something about. Did people like the new venue, or prefer the old one? Are people happy with the current prioritization of work between two competing services, or should we prioritize one over the other? Repeatedly asking for opinions on things we can’t or won’t change is a great way to have people stop responding over time.
One of my favourite survey styles is just a quick check-in, whose main purpose is to identify people to follow up with, because things are going either particularly well (and you want to ask for testimonials or help with advocacy) or particularly poorly (and you want to talk to them before it becomes a big deal).
This is a simple two-question survey:

1. On a scale of 0-10, how likely would you be to recommend our services to a colleague?
2. (Optional) If you’d be willing to talk about your answer, leave your name and contact information.
Then we follow up with (ideally) everyone who gave us contact information and who responded 9-10 or (say) 6 or lower, plus a random sample of people in between. This is a great way of using survey information to target our user and non-user interviews from last issue.
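The triage described above is simple enough to sketch in a few lines of code. This is a minimal illustration, not a real tool - the field names ("score", "contact") and the cutoffs are assumptions for the example:

```python
import random

def triage(responses, middle_sample_size=5, detractor_cutoff=6, seed=None):
    """Bucket check-in survey responses for follow-up.

    responses: list of dicts with (hypothetical) keys "score" (0-10)
    and "contact" (contact info string, or None if not provided).
    Only respondents who left contact info can be followed up with.
    """
    reachable = [r for r in responses if r["contact"]]
    # 9-10: going well - ask for testimonials / advocacy help.
    promoters = [r for r in reachable if r["score"] >= 9]
    # At or below the cutoff: talk to them before it becomes a big deal.
    detractors = [r for r in reachable if r["score"] <= detractor_cutoff]
    # Random sample of the in-between responses.
    middle = [r for r in reachable if detractor_cutoff < r["score"] < 9]
    rng = random.Random(seed)
    sample = rng.sample(middle, min(middle_sample_size, len(middle)))
    return promoters, detractors, sample
```

Even something this crude turns a pile of survey responses into a concrete call list for the interviews described last issue.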
You can break the single NPS-style question into several areas of service provision if you want to follow up on more specific areas (this is useful as your team and researcher base start expanding). Some possible examples:
When discussing with a new researcher, would you say that we:
Or some similar list. As you can see from the (maybe exaggerated) case above, it’s important not to index too heavily on the super-technical stuff, and to include people-interaction topics.
Many, many social science grad students take courses that include material on good survey design. Tap into that expertise! They know what they’re doing. They’ll likely know about other alternatives, such as watching people as they work, or group discussions. And right away, they’ll ask the key question - what is the research question you’re looking to answer?
Surveys are just one tool in the toolbox for understanding what our researcher clients think or need. When we’re thinking about what information we need, certainly consider surveys as one possible instrument for collecting that information - but seriously consider others, too. The alternatives are more time consuming for us, but because of that they demonstrate much more concern about people’s needs, and give much richer information. They’re also something that can be delegated to team members who want to take on wider responsibilities and start engaging with clients and stakeholders.
Am I too negative about surveys here? Is there some other use case you’ve been happy using them for? Just reply or email me at firstname.lastname@example.org.
And now, on to the roundup!
6 Steps to Make Your Strategic Plan Really Strategic - Graham Kenny
If there’s one thing I’ve radically changed my opinion about over the past five or ten years, it’s the value of strategic planning for our teams.
Don’t get me wrong, I still think having a strategy is very important. And I do believe (and have seen!) that a clear strategic plan can be transformative, even in our context.
But more often, we aren’t trained in how to do these plans well, don’t have the case studies to learn from about how they can work, and often don’t have the air cover from higher up in our organizations to do them in any meaningful way. But they’re still demanded of us. In those cases, the “strategic plans” end up being basically useless compliance exercises: you submit a document because you have to submit a document, the boss or stakeholder receives the document because they have to receive one, and then no one ever thinks about it again.
Again, I think that’s a shame, but that’s the situation most teams are in. If you find yourself in that situation, don’t worry! Go through the compliance exercise, putting in the minimal plausible effort that will make those who want the document from you happy enough.
Then actually start doing some strategy, in a focussed, incremental, and iterative way that will actually work.
The approach that Kenny describes here, mostly with professional services examples, is consistent with the samizdat approach I see working relatively well:
In short: execution, then strategy.
In last week’s Manager, Ph.D. there were articles on:
Also over at Manager, Ph.D. I give the first piece of advice I always give about project management tools - don’t worry about project management tools. Get good at the work and the process, first; then choose a tool.
Lack of sustainability plans for preprint services risks their potential to improve science - Naomi Penfold
Funding woes force 500 Women Scientists to scale back operations - Catherine Offord, Science
Our community can’t have nice things, because we collectively won’t pay for them.
We want open access journals, but don’t want to pay author charges. We want good software, and funding for software maintenance, but won’t use any software or online service that isn’t free. We want robust and flexible and cutting edge computer hardware - as long as it’s the lowest bid.
Stuff costs money. If we won’t spend money, we get less and worse stuff. It’s all well and good to say that someone else should pay for it, but there is no one else. It’s not like grant committees and scientific advisory boards and tenure panels are staffed by dentists or construction workers. The community is us.
Anyway, preprint servers are having trouble staying up and running without being bought by companies like Elsevier, and the organization that wrote the workbook on inclusive scientific meetings shared in #155 has had to lay off staff. Maybe someone else will fix it.
The lone developer problem - Evan Hahn
Hahn points out one very real problem in some research software:
I’ve observed that code written by a single developer is usually hard for others to work with. This code must’ve made sense to the author, who I think is very smart, but it doesn’t make any sense to me!
Hahn suggests that one reason is that the lone developer hasn’t had to explain the code to coworkers or collaborators. Under those conditions, it is pretty easy for the code base to evolve to become very idiosyncratic (and poorly documented). I’m not sure how best we can address this issue, other than continuing to build communities where those writing research software can bounce ideas off of each other and get helpful and constructive feedback in a way that builds, rather than erodes, confidence and capability.
So I really enjoy the Annual Reviews series of journals - good review articles are fantastic tools to help get up to speed in a new area, and these journals have consistently high quality reviews. The latest Annual Review of Statistics and Its Application is out. If you’re comfortable describing things in terms of statistical models, there are several reviews that may be of timely interest:
Move past incident response to reliability - Will Larson
When I started in this business, most systems teams’ approach to incident response was pretty haphazard (or, erm, “best effort”). That was a luxury afforded to our teams at the time because we had a very small number of pretty friendly and technically savvy users whose work could continue despite a few outages here and there - outages which were normally pretty short because our systems were smaller and less complicated.
None of that’s the case now. Our much larger and more complex systems are now core pieces of research support infrastructure for wildly diverse teams, many of whom can’t proceed without us. I’m very pleased to see more and more teams stepping up to have real, mature incident response expectations and playbooks. That doesn’t necessarily mean 24/7 on call - our teams and context are different - but it does mean those teams aren’t just playing it by ear every time they have an outage.
Larson reminds us that just responding professionally to service incidents is only half the job. The other half is learning from those incidents to have more reliable systems.
Larson suggests three steps:
The answer is extending your incident response program to also include incident analysis. Get started with three steps: (a) Continue responding to incidents like you were before, including mitigating incidents’ impact as they occur. (b) Record metadata about incidents in a centralized store (this can be a queryable wiki, a spreadsheet, or something more sophisticated), with a focus on incident impact and contributing causes. (c) Introduce a new kind of incident review meeting that, instead of reviewing individual incidents, focuses on reviewing batches of related incidents, where batches share contributing causes, such as “all incidents caused when a new host becomes the primary Redis node.” This meeting should propose remediations that would prevent the entire category of incidents from reoccurring. In the previous example, that might be standardizing on a Redis client that recovers gracefully when a new Redis primary is selected.
The common underlying problems will be different for us, but the idea is the same - once we have good incident responses, including writing up (and disseminating!) good incident reports, start mining all that fantastic data and making changes that reduce or eliminate entire classes of failures.
Once done, we can use our scientific training and come up with experiments and test them to make sure they work, a la Slack’s “disasterpiece theatre” (#5).
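Larson’s batching step - reviewing groups of incidents that share a contributing cause, ranked by total impact - can start as simply as a few lines run over a spreadsheet export. A sketch, where the field names ("id", "cause", "impact_minutes") are assumptions for the example rather than anything Larson prescribes:

```python
from collections import defaultdict

def batch_by_cause(incidents):
    """Group incident records by contributing cause, so review
    meetings can look at whole categories rather than one-off events.

    incidents: iterable of dicts with (hypothetical) keys
    "id", "cause", and "impact_minutes".
    Returns a list of (cause, [incidents]) pairs, worst-impact first.
    """
    batches = defaultdict(list)
    for inc in incidents:
        batches[inc["cause"]].append(inc)
    # Rank causes by total impact so remediation effort goes where
    # it eliminates the most downtime.
    return sorted(
        batches.items(),
        key=lambda kv: sum(i["impact_minutes"] for i in kv[1]),
        reverse=True,
    )
```

The point isn’t the code - it’s that once incident metadata is recorded consistently, even a queryable wiki or spreadsheet is enough to surface the categories of failure worth designing away.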
The SSD Edition: 2022 Drive Stats Review - Andy Klein, Backblaze
Backblaze: SSDs fail slightly less than HDDs - Chris Mellor, Blocks and Files
Backblaze continues to generously provide the very valuable community service of publishing their disk drive reliability results; Klein’s article gives the full results.
This year they have enough data to start drawing some convincing conclusions about SSD reliability, and Mellor’s article highlights maybe the most surprising result:
Cloud storage provider Backblaze has found its SSD annual failure rate was 0.98 percent compared to 1.64 percent for disk drives, a difference of 0.66 percentage points.
I have to admit, I would have expected the difference to be larger. It’s something of a tribute to hard disk manufacturers that they can make platters of metal - spinning at top linear speeds of tens of meters per second, with read/write heads hovering above them at distances with micro-to-nanometer tolerances - that have reliability numbers similar to solid state devices. It reflects what a mature technology hard drives are.
And maybe it’s because SSDs are still so comparatively new that Klein and team see such inconsistency in SSD SMART data:
Here at Backblaze, we’ve been wrestling with SSD SMART stats for several months now, and one thing we have found is there is not much consistency on the attributes, or even the naming, SSD manufacturers use to record their various SMART data. For example, terms like wear leveling, endurance, lifetime used, life used, LBAs written, LBAs read, and so on are used inconsistently between manufacturers, often using different SMART attributes, and sometimes they are not recorded at all.
This week I learned about Princeton’s JobStats package, which lets users look up the CPU/GPU/memory/scheduled runtime utilization of their Slurm jobs, and gives recommendations.
Julia Evans writes about her first experiences using nix - standing up an old version of Hugo for her blog, including creating a package. I’m somewhat reassured to see how easily you can use nix piecemeal like this without completely going all-in on it.
Inside NCSA’s Nightingale Cluster, Designed for Sensitive Data - Oliver Peckham, HPC Wire
NCSA’s Nightingale system has been online for two years now, and Peckham speaks with the team about the work it supports.
These kinds of sensitive data clusters are going to proliferate in the coming years, and as the standards we’re held to for sensitive data continue to rise, they’re going to end up looking noticeably different from traditional HPC systems. HPC systems are optimized for performance partly by optimizing away strong inter-tenant security (e.g. encrypted internal traffic). I’m curious to see how things evolve.
Lots of computerized machines built in the 80s or 90s - some airplanes, computerized looms and sewing machines, and even the Chuck E. Cheese animatronics - require 3.5” floppies for updates or routine maintenance, and with new floppies now essentially nonexistent, these machines are in danger of becoming obsolete e-trash even though they otherwise work perfectly.
I continue to be impressed by the growing set of code-analysis tools available - topiary is a library for writing code formatters, letting you focus on the ASTs for your language and rules you want without writing parsers, lexers, etc.
Learn and play with Trusted Platform Modules for e.g. confidential computing in your browser - tpm-js.
Cross-platform flying toasters - After Dark screensavers in pure css.
Finally, computers that you can just saute and eat if they’re misbehaving - Inside the lab that’s growing mushroom computers.
A Visual Studio plugin for debugging CMake scripts; it’s obviously completely normal for a build system to require an IDE and debugger to figure out what the heck is going on.
By all accounts, Google Groups continues to deteriorate. Does anyone have any favourite mailing list hosts for communities? I’ve heard good things about groups.io, which seems great but is a bit more full-featured than I’m normally looking for.
A battery-free Game Boy powered by solar or the user’s own button-mashing.
A GitHub CLI-based script to merge dependabot updates.
Running (the smallest) LLaMA model on a laptop.
Cryptography is hard and even schemes approved by experts can have vulnerabilities; don’t roll your own.
And that’s it for another week. Let me know what you thought, or if you have anything you’d like to share about the newsletter or management. Just email me or reply to this newsletter if you get it in your inbox.
Have a great weekend, and good luck in the coming week with your research computing team,
Research computing - the intertwined streams of software development, systems, data management and analysis - is much more than technology. It’s teams, it’s communities, it’s product management - it’s people. It’s also one of the most important ways we can be supporting science, scholarship, and R&D today.
So research computing teams are too important to research to be managed poorly. But no one teaches us how to be effective managers and leaders in academia. We have an advantage, though - working in research collaborations has taught us the advanced management skills, but not the basics.
This newsletter focusses on providing new and experienced research computing and data managers the tools they need to be good managers without the stress, and to help their teams achieve great results and grow their careers.
This week’s new-listing highlights are below; the full listing of 181 jobs is, as ever, available on the job board.
Senior Research Software Engineer (Operations Research and Financial Engineering) - Princeton University, Princeton NJ USA
In this position, you will be an integral member of dynamic research teams in the department of Operations Research and Financial Engineering (ORFE) supporting faculty-directed research focused on the development of accelerated numerical computations and open-source software packages for cutting-edge computational research. You will make fundamental software contributions to multiple faculty-directed research projects in areas like real-time embedded optimization, energy grid reliability quantification, optimization problems in the social sciences, machine learning for banking applications, and large-scale tensor computations. As a Senior RSE, you will also mentor and provide technical leadership to the Research Software Engineering team, as well as teach advanced computational techniques to raise the computational capability of the team. You’ll also have the opportunity to co-author scholarly publications.
Lead Computational Biology Specialist - Northwestern University, Evanston IL USA
As a Lead Computational Biology Specialist, you will lead and run the services and initiatives to support genomics and -omics research by partnering with teammates, Northwestern researchers, stakeholders, and peers in other institutions. You will onboard and support researchers to run genomics pipelines on the Northwestern Genomics Compute Cluster and assist in managing their data. You will regularly communicate with stakeholders and collaborate with them in expanding services and resources for genomics researchers. As an integral member of Research Computing Services, you will help faculty and students build computing- and data-related skills by leading workshops and through consultation.
Manager, Information System Security, Citizen Lab - University of Toronto, Toronto ON CA
The Citizen Lab is an interdisciplinary laboratory based at the Munk School of Global Affairs & Public Policy, University of Toronto, focusing on research, development, and high-level strategic policy and legal engagement at the intersection of information and communication technologies, human rights, and global security. The Manager, Information Systems Security assumes responsibility for the strategic and tactical planning and provision of systems security, confidentiality, privacy and risk management in the areas of systems administration, server and service design, implementation, operation and support. The Manager, Information Systems Security is responsible for developing, updating, implementing, promoting and training the community on the Citizen Lab’s Information Security Program. The Manager, Information Systems Security is instrumental in ensuring reliable and robust access controls, service availability, and activity / incident reporting. The Manager, Information Systems Security applies known security standards as well as establishes new security standards and best practices related to the use and operation of information technology solutions, systems, servers, network services solutions and proposes strategies by which those standards and best practices are implemented, tested and confirmed on a regular basis.
Data Architect and Analytics Manager - Western University, London ON CA
Working under the leadership of the Chief Data Officer, the Data Architect and Analytics Manager will provide technical leadership and oversight of Western’s business intelligence and analytics team and will leverage collective expertise and knowledge to manage large, enterprise scale data projects. The role will be accountable for overseeing the team to meet its goals in support of providing strategic recommendations to maximize the value of information assets via their creation, access, and use, to address the University’s strategic needs. The Manager will participate in the development and implementation of strategic plans and policies to ensure successful alignment and progress and will manage resources, lead and direct the work of others, and ensure appropriate controls are in place to manage risks. The role will coach and train staff and provide comprehensive consultation on a variety of processes to ensure the University community is served effectively.
Bioinformatics Manager, Molecular Virology and Microbiology - Baylor College of Medicine, Houston TX USA
The KLemon Lab at Baylor College of Medicine is hiring a Bioinformatics Manager. Our long-term goal is to identify bacterial strains and compounds that will be leads for new approaches to prevent and treat infections due to nasal bacterial pathobionts and respiratory viruses. To achieve this, we are elucidating the molecular mechanisms of interspecies interactions in the human nasal microbiome. The bioinformatic manager will spearhead our computational research on microbe-host interactions in human nasal microbiota. A complete application includes a CV, examples of your work on GitHub, the names and email addresses for three references and a cover letter.
Program Manager 4, HPC Division - Los Alamos National Laboratory, Los Alamos NM USA
The High-Performance Computing (HPC) Division welcomes qualified applicants to apply for a Facilities, Operations, and User Support (FOUS) Program Manager position. HPC Division provides a variety of advanced high performance computing systems and services to scientists and engineers at the Laboratory through multiple programs, the largest of which is the ASC Program. Within ASC, FOUS provides for HPC facilities, networking and storage infrastructure, large-scale HPC systems support, 24x7 HPC site operations, and user support functions. The successful candidate will serve as the Program Manager 4 for the Facilities, Operations, and User Support (FOUS) Program. This position will report to the High Performance Computing Division Leader. However, the position will support the ASC Program and FOUS Program Director. The Program Manager interacts with all the major organizations involved in large-scale facilities and utilities. Further, you will coordinate with DOE/NNSA project management and facilities-related organizations as well.
Lead High Performance Computing Architect - Mount Sinai, New York NY USA
The Lead HPC Architect, High Performance Computational and Data Ecosystem, is responsible for architecting, designing, and leading the technical operations for Scientific Computing’s computational and data science ecosystem. This ecosystem includes high-performance computing (HPC) systems, clinical research databases, and a software development infrastructure for local and national projects. To meet Sinai’s scientific and clinical goals, the Lead brings a strategic, tactical and customer-focused vision to evolve Sinai’s computational and data-rich environment to be continually more resilient, scalable and productive for basic and translational biomedical research. The development and execution of the vision includes a deep technical understanding of the best practices for computational, data and software development systems along with a strong focus on customer service for researchers. The Lead is an expert troubleshooter and productive team member. The incumbent is a productive partner for researchers and technologists throughout the organization and beyond. This position reports to the Director for Computational & Data Ecosystem in Scientific Computing.
Privacy Manager, Research and Innovation - Hamilton Health Sciences, Hamilton ON CA
Reporting to the Director of Legal and Chief Privacy Officer, the Privacy Manager, Research and Innovation has oversight of the development and implementation of the privacy research and innovation component of the overall privacy program and strategic plan at Hamilton Health Sciences (HHS). You will: assist with the development and maintenance of data privacy and security compliance programs and policies; collaborate with the Research Administration team, Population Health Research Institute (PHRI), and the Health Information Technology Services Team on compliance and risk management matters; develop and deliver online and in-person privacy training regarding the use of data for research and other purposes; and in cooperation with the Privacy Manager, Clinical Operations, support the execution of day-to-day clinical privacy activities, including: privacy assessments, compliance reviews, agreement reviews, and promoting practices and standards across the Hospital as required.
Head of Translational Research Bioinformatics - Roche, Various, USA or CA
We are seeking a talented and highly motivated Sub-Chapter Lead for the Translational Research Bioinformatics team at the Director/Senior Director level. The team helps lead translational research efforts across the breadth of Roche Diagnostics portfolio including clinical chemistry, immunohistochemistry, tissue pathology, and molecular diagnostics. This position requires a broad experience in novel biomarker discovery for one or more clinical indications including cancers, cardiovascular and neurodegenerative diseases, and infectious diseases. The candidate should have extensive experience leading cutting-edge research projects and establishing research collaborations with academic medical centers and industry research consortiums. The candidate will work closely with other Bioinformatics teams, assay research and early development teams, and external collaborators to develop and maintain a highly effective research portfolio and pipeline of diagnostic biomarkers used to support assay development.
Director Research Support, Office of Advanced Research Computing - Rutgers University, New Brunswick NJ USA
Responsible for providing leadership and vision for solutions in support of Rutgers University research community across multiple research disciplines. Reporting to the Associate Vice President for Advanced Research Computing, the Director serves as a leader, actively engaging in outreach at all Rutgers campuses (New Brunswick, Newark, Camden, and RBHS) to identify unmet demands for advanced computing resources, make initial assessments and provide potential solutions for them (including HPC, HTC, NRP, ACCESS, and public and private cloud services). The Director of Research Support leads a team of highly skilled domain scientists, all of whom work closely with faculty, research associates, students, and staff on understanding their research, and allowing for the customization of training and support solutions.
Data Science Initiative Managing Director - University of Minnesota, Minneapolis MN USA
The UMN Data Science Initiative (DSI) is an exciting new system-wide initiative administratively housed in the Office of the Vice President for Research. The DSI is aimed at the communication, coordination, integration, and amplification of existing data science activities. This includes both near-term coordination of existing efforts and far-term collaboration to initiate new research and education topics. The Managing Director will provide strategic, operational, and administrative leadership to the DSI reporting to the Director of Research Computing (RC) and working under the guidance of the DSI Executive Leadership Team. This position will be responsible for ensuring that the DSI is running efficiently and executing on its strategy; the position is accountable for management of the DSI’s operations, funding portfolio, research studies, and grants; staff administration and management; and ensuring that strategic goals are defined, planned, measured and executed upon.
Sr. HPC/Cloud Workflow Engineer - Irish Centre for High-End Computing (ICHEC), Galway IE
The successful candidate will work as part of the Performance Engineering Activity; lead the development and deployment of HPC and AI workflows and CI/CD pipelines on multi-node clusters and/or cloud-based systems; work closely with other ICHEC projects, particularly National Services, the EuroCC Academic Flagship & SME Accelerator Programmes, Climate Informatics, Earth Observation, Health Informatics, Biodiversity, and Quantum Computing; collaborate with national and international research/industry partners who are HPC technology developers and end-users for collaborative R&D; disseminate the outcomes of the projects through reports, publications, press releases, and presentations at events; and lead the preparation and delivery of relevant training courses to academic, industry, and public sector audiences.
Head of Digitization (Librarian) - University of Texas at Austin, Austin TX USA
The University of Texas Libraries seeks a collaborative, detail-oriented, and highly motivated individual to support UT Libraries in its mission to evolve and sustain digital collections for teaching, learning, and research. The Head of Digitization leads digitization planning and operations as an integral part of the Preservation and Digital Stewardship unit. The role supervises the team responsible for digitizing collection materials in a variety of formats, using a range of capture technologies and techniques. The Head of Digitization will collaborate with colleagues across the University of Texas Libraries to identify, plan, and implement projects and initiatives requiring digitization of UT Libraries collection materials for purposes of access and long-term preservation.
Project lead, discipline-specific data guidelines - TU Delft, Delft NL
At TU Delft, we provide extensive support for researchers on research data / software management at both the central and faculty level. We are now exploring ways to improve the discipline-specific support and guidance we offer. To do that, we are searching for a motivated, innovative, and driven new colleague to lead a project on creating discipline-specific research data / software management guidance, under the umbrella of the TU Delft Open Science Program. As project lead, you will connect data / software management experts and researchers to steer the co-creation and integration of best practices for data / software management in specific research workflows and fields.
Senior Assistant Director of Strategic Initiatives - National Center for Supercomputing Applications, Urbana IL USA
NCSA is seeking a highly motivated leader to serve as the Senior Assistant Director of Strategic Initiatives. The Senior Assistant Director of Strategic Initiatives drives strategic planning, budgeting, implementation, operation, and/or administration of NCSA and grant-funded projects and/or programs, and makes critical decisions in creating, improving, and maintaining those projects and/or programs in support of NCSA and Illinois’s strategic vision, or in support of the necessary outcomes of the sponsored projects and their funding agencies.
Assistant Director for Engagement - National Center for Supercomputing Applications, Urbana IL USA
NCSA is seeking a highly motivated leader to serve as the Assistant Director for Engagement for NCSA’s Engagement Directorate. The Engagement Directorate works with industry partners and houses various research and educational groups. The Assistant Director for Engagement will oversee and manage projects and programs, develop and implement strategic goals, and engage in outreach.
Senior Engineer, HPC Infrastructure - GSK, Collegeville PA USA or Poznan PL or Stevenage UK
The Physical Infrastructure team is accountable for the on-premises half of our hybrid cloud infrastructure strategy. They maintain and operate our current high-performance computing environments (multiple clusters consisting of 1,000 nodes, 40,000+ CPU cores, 600+ GPUs, and 40+ PB of storage that support 1,500 R&D users), while working to transition toward a “private cloud”-style edge computing framework. The role designs, builds, and operates tools, services, and workflows that deliver high value by solving key business problems; contributes to the maintenance and operation of our extensive on-prem infrastructure that powers computation across the R&D organization; and works to transition from classical HPC ways of working to a modern “private cloud” approach that focuses on Infrastructure-as-Code, DevOps-driven workflows, and containerization, putting site reliability engineering and automation at the core of every running Onyx service.
Team Lead, Data Engineering - Swinburne University of Technology, Melbourne AU
The Team Leader of Data Engineering will excel at establishing strong partnerships with key stakeholders and colleagues to develop and implement data provisioning capabilities, drawing on multiple source systems combined with the analysis and definition of technical requirements, producing operational reports, and implementing innovative solutions. This position reports to the Director, Enterprise Architecture and Information (EAI) in the Information Technology department and leads a team of Data Engineers responsible for the provision, maintenance, improvement, cleaning, and manipulation of data in the university’s operational and analytics data platforms, including the Swinburne Information Hub.
Manager, Student Analytics and Modelling - University of Western Australia, Crawley AU
Reporting to the Director, Load Planning and Management, you will take a lead role in the development and implementation of statistical models and forecasting techniques to support decision-making for student recruitment, retention, and success, with an institution-wide remit.
Head of Data Science & Advanced Analytics - Janus Henderson, New York NY USA
Build and lead a team of data scientists in the delivery of analytics and machine learning, analyzing drivers to understand the relative impact of opportunity, behaviors, and effectiveness in order to drive JHI’s distribution strategy and streamline client reporting. Find opportunities to build efficiencies of scale, reduce redundancies and the risk associated with manual error, and deliver creative solutions to the business. Monitor market-, role-, and location-level performance to diagnose opportunities that inform practice management activities and improve our sales practices. Deliver simple, consistent, and streamlined dashboards for leadership to identify key gaps in performance and mentor for stronger business outcomes.
Bioinformatics Manager I - Frederick National Laboratory, Frederick MD USA
Lead the rare disease informatics research project for ABCS. Provide scientific and technical leadership for bioinformatics and data science aspects of the project. Manage and supervise analysts supporting the project. Design and architect solutions for complex data integration aspects of rare disease research. Write, revise, and submit scientific manuscripts on significant research findings. Interface with stakeholders, government sponsors, and company management, and provide status reports, presentations, and application demonstrations.
Assistant Center Director, Research Informatics - Moffitt Cancer Center, Tampa FL USA
Moffitt Cancer Center, a National Cancer Institute-designated Comprehensive Cancer Center, is seeking an inaugural Assistant Center Director of Research Informatics to provide informatics leadership for all research activities at the Cancer Center. This new leader will serve as a bridge between the Moffitt Research Institute (MRI) and the Center of Digital Health, aligning institutional data platforms and strategies with the data-related research Shared Resources, overseeing research-related aspects of data sharing partnerships with external organizations, and providing informatics support for cross-departmental research initiatives.
Manager, Data, Analytics, Reporting & Evaluation - University of British Columbia, Vancouver BC CA
The Manager, Data, Analytics, Reporting & Evaluation (DARE) provides campus-wide leadership in the development, planning, implementation, and evaluation of significant and strategically important institutional initiatives. This role will provide strategic support to the Equity & Inclusion Office with primary responsibility for the development and analysis of demographic and experiential data and its use to inform, track progress, and articulate outcomes of strategic plans as relates to promoting equity, diversity, inclusion (EDI) and anti-racism at UBC. The Manager also provides consultative support to senior leadership, AVP, Equity & Inclusion, Deans, Associate Deans, and Directors across campus in the development, implementation, and evaluation of faculty/unit strategic plans.
Scientific Software Developer/ Bioinformatician Team Lead - Rangram (Recruiter), Mississauga ON CA
As part of the project, a pipeline has been developed that helps find evidence for causality of genes. The portal democratizes access to these results and eases decision-making for biologists throughout the company. This has been a successful effort: multiple interesting findings have been reported in the few months since it went live. This is an exciting opportunity to conduct clinical-translational bioinformatics research.
Senior Scientific Software Engineer - Life Sciences - SandboxAQ, Remote USA or CA or EU
Our Simulation and Optimization (S&O) team is looking for a highly talented and experienced Scientific Software Developer [Life Sciences] to join our Drug Discovery team. We are looking for candidates with deep technical expertise in conceiving, designing, and building complex software solutions to scientific problems, specifically in the area of computational chemistry, and delivering these solutions to internal and external customers and users. We are looking for an all-round talent, familiar with the full stack of scientific software development, from inception all the way to dealing with customer and user requests, with a track record of scientific accomplishments and the interest and ability to solve highly challenging and complex problems at the intersection of computational chemistry, numerical mathematics, computational science, high performance computing (HPC), molecular physics, and quantum computing.