I want to write a little bit more in the coming year about strategy for leaders of research computing and data teams: setting priorities; deciding what and what not to do; and deciding what success looks like. It’s an important topic - and yet as with so many things in our line of work, no one teaches us how to do it. It’s also much more ambiguous in our discipline than it might have been as a researcher, where you can keep score pretty simply with papers and grants.
Those of us who have gone through University strategic planning processes come out even more confused, because these “Strategic Plan” documents aren’t a great introduction to strategy at the best of times, and they’re particularly odd ducks in the context of a University.
There was a great article by Alex Usher and team at Higher Education Strategy Associates last week. I want to use it to set the stage for what a strategy is, what a good overall framework does for teams or orgs, why Strategic Plans for universities are so constrained, and what that means for us. Usher has written quite a bit about strategic plans in the past, some of it quite cutting, and all of it insightful - I recommend the articles, and the blog in general, if this topic interests you.
(This is a long one, and won’t be of interest to all readers - feel free to skip to the roundup.)
A strategy, ultimately, is a means to solving a problem. You have a problem: say how to reach sustainability, or how to hire and keep good team members, or how to position your team against a number of other local and remote teams. A strategy is a choice, a decision, about how you will solve that problem, given the existing situation and the tools at your disposal. Remember the story of how Pauli dismissed a paper as “not even wrong”? A good strategy, like a good theory, must be capable of being wrong. If it couldn’t be wrong, it isn’t a strategy. If someone else couldn’t defensibly make a different choice, it isn’t a strategy. If a choice doesn’t address a well-defined problem, it’s not a strategy. Aspirations to “excellence” or “high community engagement” and the like aren’t strategies. (The canonical book to cite here is Good Strategy, Bad Strategy. I’ll warn you though, especially in academia, getting into this is like learning to recognize bad kerning. The badly done stuff is everywhere, and seeing it constantly gets really annoying).
Strategic Plans are a poorly named genre of documents. (Naming things is hard). It isn’t a plan - it doesn’t (and shouldn’t) say “we’ll do X, then Y, then Z”. It also isn’t a strategy, though it should implicitly be the documented outcome of deciding on one or more strategies.
Let’s say you woke up one morning and found yourself in charge of a research organization that faced new opportunities and challenges. A good strategic plan is something you could use to guide you away from some otherwise plausible choices and towards others. It’s a document which gives a community clarity about how big picture decisions will be guided - about the principles around which those choices will be assessed. It gives stakeholders clarity about what the organization is for, what it will be doing even when challenges arise, and whether it can help them meet their goals.
Any document providing such clarity to leadership and stakeholders about what the organization will do when faced with opportunities and challenges is a perfectly fine Strategic Plan. Sometimes smaller versions of the document, for smaller organizations, are called Strategic Frameworks. That’s a better name, because such a document is a framework - or at least guardrails - for making strategic decisions (for the leadership, but also for stakeholders).
Plan or Framework, there’s no fixed structure for such documents. The most common approach is to have a section describing what the organization is for (with enough clarity that it’s easy to understand what it’s not for), the broad classes of activities it will undertake (often with some prioritization), and how it will measure whether it is accomplishing its purpose. But the essence of the document is the shared understanding of what the organization is and how it will be guided, not the section headings. For a modest-sized team, a well-written Strategic Framework type document could be a page or less in length, and would likely be called something less pretentious, but it would serve as one just fine.
Bad documents, on the other hand, set out lofty goals, too big to be achievable and too fuzzily defined for anyone to know for sure if they had been achieved anyway. They list grab-bags of individually worthy but collectively incoherent activities. The measures for success are just counts of things they do and not measures of what they’re trying to achieve by doing those things. They leave stakeholders uncertain as to what’s in and out of scope if new opportunities arise, or what the true priorities are if activities have to be scaled back. They leave leadership with no concrete guideposts to inform challenging decisions, and stakeholders no hint as to how those decisions will be made.
The problem with Strategic Plans/Frameworks, and why I think there are so many crummy ones, is that the good ones are unremarkable and boring. They are the documentation of clear decisions made about what the organization is for and how it will measure its success. Read after the fact, you’d be forgiven for mostly forgetting having ever laid eyes on it. “Ok, the Institute for Theoretical Applied Translational Studies is for X, Y, and Z, and for the next couple of years it’ll be doing this, that, and those, and there was some list of KPIs. I’d work there, or work with them, if I wanted to do such-and-such. If I needed so-and-so, I’d have to look elsewhere. Yada yada. I don’t see what the big deal is“.
But, that’s huge. Getting an organization, a team, and stakeholders aligned to the point that they can clearly lay out what the organization is for and how it will measure whether it’s doing a good job is an absolutely foundational success.
Our job, as leaders, is to reduce uncertainty (#34). We are life-sized Maxwell’s Daemons, manually reducing entropy within and at the boundaries of our organization, so our teams can help our stakeholders in the way only they can. In research, and in computing, the range of possible things we could do is almost infinite. Discussing the purpose of an organization or team with stakeholders, building enough consensus to agree on a relatively quotidian document, and then having that document as an artifact to base decisions around for the next few years, is a real accomplishment.
And it’s doable! Like most of management, the successful outcome is boring and mostly invisible, but it’s doable. Most well-run medium-to-large nonprofits, and many large-ish multi-institute research centers, have exactly such boring, clear, and useful documents. Getting to the point of having that document is almost never boring, mind you. The discussions can be quite… vigorous. But it is absolutely doable. (Disclaimer: while I have succeeded in doing this a couple of times in the past, I have also notably and pretty comprehensively failed to do it once for a large national organization, too.)
University strategic plans aren’t the worst to be found out there, but they aren’t the best, either. One reason we find them awkward is that they aren’t written for us. They’re written for donors and (for public institutions) governments, for reasons the HESA article discusses, so they make for dry if not cringe-worthy reading for internal technical folks. Another reason is that universities are very constrained as to the directions they can set. Universities can’t directly steer teaching or research activities, because of academic freedom. So these are hard documents to write for universities.
But! The exercise leading up to a strategic document is a way to get the whole University on the same page as to where things are now, and to choose two or three priority areas to push forward in the next years. This is no small thing.
Those new areas are where the University leadership, including your VPRs and CIOs and Deans, have committed to try to make advances, based on input from the community. (If it’s not clear to you what those new priorities are, you can read the current strat plan along with the previous couple to watch the evolution). Leadership will be pushing very hard on those priorities, because it takes constant effort over years to make any change in as large and fractious an organization as a university. Nudging things in new directions takes long term suasion and building of necessary supporting infrastructure.
And that’s where we come in - we are an essential part of that infrastructure. We’ve discussed already that research computing and data can help lead research by making existing things easier, or making new things possible [#104]. What we haven’t discussed, and the HESA article does, is that our teams and other kinds of centres are amongst the few levers that institutional leaders have within Universities. Building new capabilities - the HESA article talks about it in terms of faculty hires, but I’d add creating centres of all types, including our own - creates new possibilities, new paths that people wander down:
In fact, one of the very few ways in which institutions or faculties can in fact regain actorship over things like teaching and research is in the act of hiring. […] However, in the act of hiring, it is possible to shape entire institutional futures (since hires can stay at an institution for 30-40 years) by choosing to build strength in clusters of related academic topics (eg. Water, China studies, Poverty), either within or preferably across disciplines. Given this, it is a bit remarkable how little time is spent in universities thinking specifically about hiring as a strategic activity. It’s not as though we don’t know that it works: pretty much all of Caltech’s strength as an institution, for instance, can be traced to a very calculated set of about a dozen or so strategic hires between 1910 and 1930. We just don’t talk about it or engage in it very much or – and this is the key part – linking it to strategic goals very often, either at the institutional or (maybe more appropriately) the faculty level.
Especially when our teams are part of a pillar which will help the institution go where it needs to go, our teams’ work can make a push into a new research area more or less successful.
Leadership too often doesn’t have the bandwidth to be checking in with us and nudging us towards institutional priorities, especially since they typically don’t know what’s technically possible on the ground. (I’ve talked before about why the relation between us and academic leadership is like that between a nonprofit leader and their board - this is part of the reason why). But they do have priorities, and they are able and enthusiastic to talk about them at length given the opportunity. Teams that demonstrate themselves strategically important are resourced strategically. Teams that help leadership grow capabilities important to the institution, in those priority areas, are more likely to get opportunities and resources when they arise.
Our research computing and data teams have a real role in helping our institutions grow to meet new challenges. Developing an effective professional relationship with leadership is hard in our institutions, because they’re quite flat and leadership has a lot of balls in the air. But that’s what it takes to make sure the work we and our teams are doing has as much impact as possible.
Send me an email if you have questions about university or research group strategic plans or their processes, or any particularly good (or terrible!) stories to share. Also let me know if there are topics you want me to cover in coming months on issues of strategy for research computing and data groups. Just hit reply or email me at email@example.com.
Sorry, that was long - but it’s an important primer on strategic planning documents for people who have been discombobulated by what goes on in universities. We’ll talk more (and at less length) later - for now, the roundup! It’s a short one this week - there’s always a bit of a lull in relevant articles towards the start of summer.
What To Do When Your Feedback Doesn’t Land - Lara Hogan
Feedback often doesn’t require much of a conversation. Done well, feedback is lots of tiny little nudges (mostly positive!). If any one of those nudges doesn’t really register with the team member, or is misunderstood, well, it’s not a big deal.
Sometimes though it does take some conversation. Maybe you just want to find out what happened in a situation so you can make some adjustments elsewhere. Sometimes it’s because the feedback is more serious, or even just more complex. Hogan’s Feedback Equation is good for those; it ends with an open-ended question. (She has a bunch of good open-ended questions here).
Sometimes when these conversations happen and it’s more serious or more complex, the feedback doesn’t seem to land, or register, or… something. That might be because the team member doesn’t understand, or is getting defensive. If the conversation is awkward, it’s pretty natural for the manager to just want to wrap things up and get out of there, but we know that’s only putting off the problem.
Hogan has five steps for dealing with these situations:
The HESA article above highlights the importance of hiring, a topic that comes up repeatedly here.
Our teams have some of the same problems with hiring technical staff as everyone else does, but a lot of the articles I see from tech aren’t super relevant - they assume a lot of money, or that you’re constantly hiring multiple people at a time. Our teams are cash constrained, and hiring is only occasional. That makes hiring the right people even more important, and it means we have to take advantage of other mechanisms (like working with co-op students or other interns) to keep our hiring processes sharp.
Chin’s article talks about cash-strapped hiring with a few case studies. (We’ll probably see more articles about cash-strapped hiring in the coming year, as the market hits a tech correction and startups that were used to being able to raise money left, right, and center suddenly have to tighten their belts). The principles Chin sees in common across successful attempts are:
(The points about actively recruiting candidates that are likely to match the criteria are very relevant, too. Always be on the lookout for people in the community who might be good team members if the timing worked out.)
Point two, successful onboarding, is vital. I’m increasingly certain that hiring and onboarding are the same thing. In fact, starting the hiring process from picturing the end of the onboarding process - say four months in, the person is now a successful part of the team doing the job - is the way to go. Then back out the onboarding process. Then start putting together a list of hiring criteria for the job ad.
And for hiring criteria - I can’t stress enough the importance of point 1, tuning the hiring criteria to the actual work and the team. I have seen too many research computing and data teams hire against a laundry list of specific technical knowledge that might well be handy (but can easily be learned) while not actually evaluating how well the candidate would likely actually do the real, day-to-day job.
Au talks about something sort of skimmed over in Chin’s article - the interview process: actually evaluating the candidate against the hiring criteria.
Getting the hiring bars and rubrics agreed to across everyone who’s going to interview is a fantastic opportunity to make sure the whole team is on board with what the new candidate’s job will actually be, and to help them develop their interviewing skills.
Also, we’re in science - calibrate your devices! Test the questions you’re going to ask on people who you are confident would be good team members. If they stumble to get the question answered adequately in the time allotted, then adjust accordingly.
WorkflowHub - EOSC-Life and ELIXIR
It’s fascinating to watch the unit of scientific software grow from the individual code to the pipeline. It’s become necessary not just because data analysis pipelines have become a bigger part of research computing and data; even simulation science workflows are growing more complex.
Europe’s WorkflowHub is a home for workflows (in CWL, Nextflow, Snakemake, and other formats). It pointed out a tool I hadn’t heard of before, WfCommons, for analyzing or simulating the run of complex workflows, good for testing workflow runners and systems. Interesting stuff!
Rands Leadership Slack engineering-effectiveness: A curated summary of shared tips & resources - Curated by Jasmine Robinson
I’ve mentioned the Rands Leadership Slack here a few times, where we’re slowly growing a research-computing-and-data channel. That channel doesn’t have enough of a critical mass yet to keep conversations going naturally, but luckily there’s so much else going on in the slack that there’s always something to read or a conversation to be a part of.
One new Rands’er has gone through the past five years of conversation on the Engineering Effectiveness channel and distilled those discussions, advice, and suggested resources into best practices, tips, resources, FAQs, and more: it’s well worth a look if you’re wondering how to make technical teams more productive and effective.
(The Rands slack operates under the Chatham House Rule, so distilling discussion from there is explicitly allowed as long as it’s not attributed).
It continues to impress me how a new younger community is rebooting the Fortran ecosystem - here’s an example of dusting off quadpack from netlib, refactoring it in modern Fortran, getting it up on GitHub with GitHub Actions for CI/CD “to restart Quadpack development where it left off 40 years ago”, and putting it into package managers.
Getting the code into modern Fortran isn’t some stylistic or aesthetic thing, either - intelligent refactoring support and other tooling in IDEs can’t make head or tail of F77 constructs. How is an automated tool supposed to know what to do around a computed goto? By moving the code into F2008 or even F90, it becomes much easier and faster to improve further.
I’m not sure if this effort comes too late for Fortran or not, but certainly the demand for number crunching on large rectangular multi-dimensional arrays has never been larger…
Revisiting data query speed with DuckDB - Jacob Matsen, DataDuel
The influx of new database types is a huge boon to data analysis/data science workflows. If, as is often the case in our line of work, the workflow is more about analytics than it is row-by-row data mutations, columnar or OLAP databases are the way to go. Matsen here is very impressed by the 80x(!) speedup he got using DuckDB (an embedded columnar database) on a CSV file off of disk vs. running the same query in Postgres with the data in a table. Any of a number of other columnar databases would have done as well (although apparently not SQL Server Columnar Tables), especially if the data were converted to something more machine friendly like Parquet files.
AMD Technology Roadmap from AMD Financial Analyst Day 2022 - Cliff Robinson, Serve The Home
The Increasingly Graphic Nature of Intel Datacenter Compute - Timothy Prickett Morgan, The Next Platform
Both Intel and AMD announced what’s coming next on their technical roadmaps over the past couple of weeks. I’m not going to comment on any specifics, because we’ve decided this isn’t a speeds and feeds newsletter and because I have a conflict. But there are two things we can safely take away:
Running cost-effective GROMACS simulations using Amazon EC2 Spot Instances with AWS ParallelCluster - Santhan Pamulapati & Sean Smith, AWS HPC Blog
A big issue for using cloud for typical HPC workloads is relative cost. The more we learn to take advantage of much cheaper (often by 90%) preemptible spot instances for workflows that can make effective use of them, the cheaper those workflows become. What’s more, those same approaches can also help us make better use of on-premises clusters, where preemption is possible but not widely used.
Here the authors demonstrate ParallelCluster + Slurm restarting preempted GROMACS jobs with no additional work required. For the (not-uncommon) case of running large numbers of relatively few-particle jobs, checkpointing frequently is pretty lightweight on a high-performance filesystem, so the additional cost for this is pretty modest.
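The moving parts are pleasantly few. As a sketch (hypothetical job and file names; the key pieces are Slurm’s requeue-on-preemption behaviour and GROMACS’s own checkpoint-restart support):

```shell
#!/bin/bash
# Sketch of a preemption-tolerant Slurm batch script. If the (spot or
# preemptible) node disappears, the scheduler puts the job back in the
# queue, and GROMACS resumes from its last checkpoint on restart.
#SBATCH --job-name=gmx-spot
#SBATCH --requeue              # re-queue the job on node failure/preemption
#SBATCH --open-mode=append     # keep appending to the same log across restarts

# -cpi continues from the named checkpoint file if it exists (a fresh
# run just starts from scratch); -cpt sets the checkpoint interval in minutes.
gmx mdrun -deffnm run -cpi run.cpt -cpt 5
```

The same script works unchanged on an on-premises cluster with preemptible partitions, which is part of why these approaches pay off beyond the cloud.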
Strong Showing for First experimental RISC-V Supercomputer - Nicole Hemsoth, The Next Platform
Here Hemsoth writes about the ISC student cluster competition team @NotOnlyFlops out of the BSC, and their success competing with a cluster built out of the same SiFive motherboards we talked about in #122. Putting something like this together under the time constraints of a competition, where you don’t necessarily know what you’ll be asked to run on it, is pretty impressive! These systems (and more importantly the software for them) are much more mature than I would have thought a couple of months ago.
A bash parser for command-line arguments - bashjazz.
I’ve wanted something like this for years - why don’t we have it in our IDEs? A prototype text editor which allows drawing and rendering of line diagrams in SVG along with text. It’s 2022, why do we still have to use ASCII art to draw our in-code diagrams?
Speaking of editors - Atom is being EOLed.
Web apps are cool, I guess, and so are CLI tools, but what about tools that run over DNS?
As time goes on, the implementations of “Wordle in X” stretch further and further into the past - Wordle in Pascal for Multics (an OS from the late 60s).
Web browsers are extremely complicated pieces of software - here’s a book walking through building a simple one in 1,000 lines of Python.
For those working with trainees new to git and GitHub, GitHub is putting together a series of interactive self-paced tutorials for beginners - GitHub Skills.
Deep Learning is taking over a lot of things, but not in-terminal games of incomplete information - symbolic methods are still comfortably ahead in the 2021 NetHack challenge.
An old school BASIC interpreter + DOS environment, reimagined as web app - EndBASIC.
The Mamba (faster drop-in replacement for conda) project is pretty mature now - here’s a quick overview.
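“Drop-in” really does mean drop-in here - the commands mirror conda’s, just with much faster dependency solving (environment and package names below are only examples):

```shell
# mamba accepts the same subcommands and flags as conda:
mamba create -n analysis python=3.10 numpy pandas
mamba install -n analysis -c conda-forge snakemake
mamba env list
```

For teams maintaining large environments, the faster solver is the main draw; existing conda environments and channels work unchanged.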
And that’s it for another week. Let me know what you thought, or if you have anything you’d like to share about the newsletter or management. Just email me or reply to this newsletter if you get it in your inbox.
Have a great weekend, and good luck in the coming week with your research computing team,
Research computing - the intertwined streams of software development, systems, data management and analysis - is much more than technology. It’s teams, it’s communities, it’s product management - it’s people. It’s also one of the most important ways we can be supporting science, scholarship, and R&D today.
So research computing teams are too important to research to be managed poorly. But no one teaches us how to be effective managers and leaders in academia. We have an advantage, though - working in research collaborations has taught us the advanced management skills, but not the basics.
This newsletter focusses on providing new and experienced research computing and data managers the tools they need to be good managers without the stress, and to help their teams achieve great results and grow their careers.
This week’s new-listing highlights are below; the full listing of 142 jobs is, as ever, available on the job board.
Senior Manager, R&D Information Systems - Gilead, Dublin IE
This ETL Architect role will be a valued member of the Development Systems Team in Dublin. This role will be responsible for designing and maintaining system integrations within our AWS environment as well as supporting Gilead’s data governance initiatives.
Research Systems Manager - Australian National University, Canberra AU
The Research Systems Manager will report to the Associate Director, Research Analytics and Systems. The successful candidate will effectively lead the team to support all enterprise systems managed by the Division, working with external vendors to ensure they are fit for purpose and effectively maintained. They will lead the development and implementation of projects to improve the University’s research management processes and application support services; as well as support continuous process improvement. They will also work closely with ITS to ensure research University systems align with the University roadmaps, maintain architectural integrity and ensure the implementation of sustainable solutions.
Team Leader, IMT Scientific Computing Services Technical Solutions - CSIRO, Various AU
The Scientific Computing Services (SCS) group in CSIRO Information Management & Technology (IMT) is seeking an experienced leader to lead the Technical Solutions team. The Technical Solutions team is responsible for providing support for a range of managed platforms for scientific research, such as CSIRO’s high-performance computing (HPC) facility, an internal research cloud and data storage systems. The team engages CSIRO researchers and delivers solutions to research projects based on project requirements. In addition, the team is also responsible for the management and deployment of a scientific software portfolio comprising both commercial and open-source applications, as well as the online delivery of user documentation. The Technical Solutions Team Leader is a senior leadership position. The successful candidate will work with and lead the Technical Solutions team in ensuring high quality of service delivery. Moreover, the successful candidate will work closely with peers within the Scientific Computing Services group as well as the Scientific Computing Platforms group. The Technical Solutions Team Leader is also expected to build strong networked relationships with the research business units to ensure that IMT and Scientific Computing services are aligned with the core science and functional business of CSIRO.
DevOps Manager and Enterprise Architect - University of Alabama, Tuscaloosa AL USA
Can you lead the DevOps team creating the infrastructure that will make performing experiments with the National Water Model a breeze? The Alabama Water Institute at the University of Alabama seeks applications for an Enterprise Architect/DevOps Manager to support a hydrologic model software development project. AWI is creating a replicable, hybrid HPC-cloud water science environment to support a collaborative experiment to analyze the National Water Model and improve US capacity for critical flood and drought prediction.
Senior Manager/Associate Director, Research Computing - Rubius Therapeutics, Cambridge MA USA
This newly-created role will include broad responsibility for providing IT support to the Research, Translational Medicine, and related organizations, including:
- Platform Owner for the Benchling Electronic Lab Notebook system
- IT Business Partner to the Research and Translational Medicine teams
- Data Governance facilitator
Project Manager Advanced Research Computing - Hosted Records Inc, Washington DC USA
The Advanced Research Computing (ARC) section provides computing services to the research divisions of the Federal Reserve Board. ARC seeks an Intermediate Level Project Manager with exceptional acumen and experience in project management, along with excellent communication, organization, and technical writing skills, to manage several projects and participate in the development of ARC’s project management office.
High Performance Computing Manager - Tier4 Group (Recruiter), Seattle WA or Remote west-coast USA
Our client is a genomic health IT company that keeps pace with the constant advancements made in genomics and connects that research to patient DNA to help diagnose and treat patients with rare genetic diseases and cancer. They prefer someone that is local to Seattle but will consider remote candidates on the West Coast.
Data Science/Machine Learning Manager - Soma Direct, Kitchener ON CA
SomaDetect is a high-growth technology startup in the dairy industry that is looking to expand its development team. We are looking for a talented and innovation-driven technical data science manager who is passionate about leading a team of world class scientists to solve difficult customer problems in the dairy industry using the latest scientific advances. You will be directly responsible for leading a team of scientists through the ideation, design, prototyping, development, and launch of innovative scientific solutions. You will closely partner with project managers, sensor development team, cloud team, UI designers, engineers etc. to pioneer state-of-the-art solutions to extremely challenging problems in machine learning and computer vision.
Network and Security Infrastructure Manager - British Geological Survey, Keyworth or Nottingham or Edinburgh UK
The British Geological Survey (BGS) is an applied geoscience research centre that is part of UK Research and Innovation (UKRI) and is affiliated to the Natural Environment Research Council (NERC). It is a world leading geological survey whose core mission is to inform government of science related to the subsurface and to undertake applied research to solve earth and environmental issues, both in the UK and globally.
Director, Data Analytics and Strategic Insights - Australian Catholic University, North Sydney AU
Reporting to the Chief Operating Officer, you will be a trusted advisor who will create an environment that is innovative, flexible and agile and rich in vision in order to effectively lead the Office of Data Analytics and Strategic Insights. A key aspect of this role will be to make best use of data to provide support, analysis and insight to the University’s senior leadership and executive committees.
Senior Data Manager, Research Data Science - CSL, Melbourne AU
CSL is a leading global biotechnology company with a dynamic portfolio of life-saving medicines. In this newly created position, you will provide hands-on data management support across the Research project portfolio to ensure advanced data science applications across CSL R&D with a focus on translational and clinical development areas. You will build Research project data governance, architecture, quality controls and processes for study data (human and non-human) and big data (genomic, proteomic, imaging) management, including data collection, validation, curation, and ingestion.
Academic Deputy Director (Research Computing) Sydney Informatics Hub - University of Sydney, Sydney AU
The successful candidate will hold PhD qualifications and will have a recognised international academic and professional standing within an informatics-related field. You will have extensive experience in people leadership and management within the higher education/knowledge worker sector, experience in working with industry, and a track record in managing financial sustainability, infrastructure and the operational needs and practices of an academic unit. Your ability to make strategic decisions, together with a strong business acumen, will support your endeavours in academic leadership, key stakeholder management and industry collaborations.
Senior Staff High Performance Computing Engineer - Guardant Health, Redwood City CA USA
Guardant Health is a leading precision oncology company focused on helping conquer cancer globally through use of its proprietary blood tests, vast data sets and advanced analytics. Guardant’s HPC team builds and operates the computational technology backbone of the company. This includes scalable data storage that holds PBs of genomics data, high performance compute clusters running a custom bioinformatics pipeline in production and R&D environments, and the software infrastructure that hosts an ecosystem of services for internal data processing and external data integration. To facilitate Guardant Health’s fast growth in the next few years, the HPC team is looking for a strong technical engineer who can help maintain and help grow the HPC infrastructure during its aggressive expansion, while working with corporate IT, SQA and DevOps/SRE teams.
Technical Program Manager - Quantum Computing, Amazon Braket - AWS, Seattle WA USA
We are looking for a Technical Program Manager to help grow one of AWS’s newest and most forward-looking services. You will drive programs and solutions that help customers experiment with quantum technologies, develop new skills, discover potential applications, and plan for the future. You must enjoy working on complex and ambiguous problems, enjoy working with engineers and scientists, and thrive in a fast-paced and uncertain market. You are familiar with software development best practices, building and managing scalable systems, and delivering a user experience that customers will love.
MATLAB Statistics and Curve Fitting Product Manager - Mathworks, Natick MA USA
You will draw on your product management expertise, application knowledge and collaboration skills to expand the use of MATLAB statistical analysis products in industry and academia. You will collaborate with product development teams and stakeholders on product direction, planning, launch, and sales enablement. The successful candidate will have strong leadership and collaboration skills with experience working with cross-functional teams, and global sales and marketing teams.
Manager, Machine Learning - Paravision, Remote US or CA
Paravision is a Silicon Valley-based start-up that builds AI and computer vision solutions with an ethical and customer-centric approach. Our best-of-breed technology is trusted globally by industry-leaders in access control, travel, government programs, and identity verification. Remote-first, we are growing rapidly across North America and looking for a highly motivated Manager, Machine Learning to lead our high-performing team of machine learning engineers. This role will report to our Director of Machine Learning, and drive the design and creation of scalable machine learning applications that enable various use cases across our customer base. This is a full-time exempt position that can be based remotely from anywhere in the U.S. or Canada.
Platform Product Manager - Penguin Solutions, Fremont CA USA
Penguin Computing, a subsidiary of SMART Global Holdings (SGH), specializes in innovative Linux infrastructure, including Open Compute Project (OCP) and EIA-based high-performance computing (HPC) on-premise and in the cloud, AI, software-defined storage (SDS), and networking technologies, coupled with professional and managed services including sys-admin-as-a-service, storage-as-a-service, and hosting, as well as highly rated customer support. The Platform Product Manager will contribute to the High Performance Server Hardware product roadmap planning and product management from concept through product launch and the entire product life cycle. The position is in the team responsible for managing the current and future generation server hardware platforms of HPC, Cloud, and AI solutions.
Senior Manager, Data Science and Advanced Analytics - Alexion (AstraZeneca Rare Disease), Boston MA USA
Senior Manager, Data Science and Advanced Analytics is an incredible opportunity within the Commercial Strategic Data Management team to shape the commercial organization by crafting, developing, and fielding data science solutions that drive impact for patients. This role will drive complex analytics to answer key business questions, generate disease/treatment insight, and support HCP lead generation. This role will work closely with Marketing, US Commercial leadership, Sales Field Team, and Field Operations, and will help further build the data science capability supporting Commercial. There will also be an opportunity to help shape the data and analytics strategy supporting lead generation and other key business priorities.
Director, Computational Science - Sanofi, Toronto ON CA or Cambridge MA USA or Barcelona ES
You are a dynamic data scientist interested in leading a group of ML and software engineers to develop and deploy advanced ML solutions that will change the way biomedical research is performed. We are looking for individuals ready to challenge the status quo to ensure development and impact of Sanofi’s AI solutions for developing the drugs and improving the clinical trials that will benefit the patients of tomorrow. You have a keen eye for improvement opportunities and a demonstrated ability to deliver AI/ML solutions while working across different technologies and in a cross-functional environment.
Engineering Manager, Machine Learning (Remote) - BenchSci, Toronto ON or Remote CA
BenchSci’s vision is to bring novel medicine to patients 50% faster by 2025. We’re achieving it by empowering scientists with the world’s most advanced biomedical artificial intelligence. We are looking for an Engineering Manager to join our growing Machine Learning team. You will lead a team that works on challenging problems to have an impact on the 41,000+ scientists across the world who rely on BenchSci for their research. As an Engineering Manager, you will manage planning and prioritization of upcoming work, coordinate multiple engineering teams, and coach team members. You will work closely with tech leads and scientists to evaluate priorities, spot potential problems before they occur, and support the team’s technical roadmap with planned engineering investments.
Manager, Data, Insights & Advanced Analytics - Trillium Health Partners, Toronto ON CA
The Data, Insights & Advanced Analytics Program strives to shape a healthier tomorrow through innovative uses of data and analytics focusing on population health, artificial intelligence, cloud computing, governance, privacy and security, and business development, with the mandate to build our state-of-the-art population health platform to empower our researchers and partners. Reporting to the Director, Innovation & Partnerships, the Manager of Data, Insights & Advanced Analytics will lead the Data, Insights & Advanced Analytics team visioning, designing, building and implementing the data platform enabling and enriching research and business analytics at Trillium Health Partners.
R&D Manager / Tech Lead - Synopsys, Aachen or Berlin or Stuttgart DE or Livingston UK
Are you passionate about high performance simulation solutions? Do you want to create software tools that enable our automotive and telecommunications customers to develop embedded software for their next generation of self-driving cars, mobile devices or virtual reality applications? Do you care deeply about robust, high quality and flexible software architectures? As a tech lead you will be responsible for defining and driving the development of our testing tool and simulation technology in a highly competitive market. You will work in a team of strong professionals and help define the next generation of Virtual Prototypes.
Manager, Research Infrastructure and Development - Canadian Research Knowledge Network, Ottawa ON CA
The Manager is the lead product owner of the Canadiana platform which includes the Canadiana Trustworthy Digital Repository (TDR) and application (access) platform. The Canadiana platform is home to over 60 million pages of digitized documentary heritage and is a critical part of providing access to and preservation of Canadian cultural heritage material. The Manager leads a team of full-stack software developers and system administrators in building tools and features that enhance the platform for users and CRKN members, and oversees an innovative and agile technical development roadmap that is forward-looking, transparent, and prioritized by members and user needs.