A couple of reader replies from the last issue - here’s long-time reader Scott Delinger, CEO of Canadian regional academic HPC consortium Prairies DRI (digital research infrastructure), responding to an article about remote work:
Our situation in Canadian ARC [Advanced Research Computing] is slightly unusual, in that we’ve ALWAYS had remote work, even when staff were sitting in the same room prior to COVID. So the main difference is the effort required around group social aspects, not work tasks.
This is exactly right - Canadian ARC is an extreme example, but in our line of work, distributed collaborations are pretty common. That experience gives us a huge advantage when it comes to remote work more generally; we can make use of that by applying what we’ve learned there (around collaborating in a more document-based way, asynchronous discussions, etc) more consistently in other contexts. Some things are still hard remotely - social cohesion, mentoring juniors - but we can build on our existing experience. A lot of other communities didn’t benefit from having had that starting point.
In addition, in response to what I wrote last issue (#155) about strategic plans, another long-time reader, Adam DeConinck, writes in to say:
Regarding strategy docs, one of my favorite ways to approach this is from Will Larson’s StaffEng site: https://staffeng.com/guides/engineering-strategy. This effectively takes the approach that strategy should be inferred, bottom-up, from the actual design work that your team is doing. Rather than designed “top down” as a new set of ideas.
I’m a huge fan of this technique! Larson’s advice is basically to take the last five design documents (or project intake decisions or services delivered or…) and synthesize into a strategy, essentially inferring what the actual implied strategy currently is. The vision then extrapolates what continuously applying that strategy would lead to.
I have seen far too many teams write aspirational strategic plans, or other kinds of strategy, that were disconnected from what is actually happening on the ground. No matter how compelling the future vision is, facts on the ground will always win out over a piece of paper. Any real change has to come from knowing both where we currently are and where we want to go, and having a clear plan for how to move from one state to the other.
It’s vital to understand where we’re starting from before setting out on a new strategy. If you’re unsure of what the current strategy is - maybe because change is needed, or maybe because things seem to be working well and you want to codify current practice so that things stay on track - this approach of taking what is actually being done, and unearthing the implicit strategy that seems to be playing out, is an incredibly useful step. Warning: the strategy you uncover may be very different from what you think it is.
Finally, I’ll introduce an experimental sibling newsletter, Manager, Ph.D.
As you know, people with our backgrounds have certain strengths and certain gaps when it comes to managing a team. It makes sense to support our peer group with these more general management challenges so we can have as much impact as possible!
And that includes groups well outside of the RCT wheelhouse. So my intention for now is that material not specifically about our RCD teams or other expert teams within academic research will appear there more often than here. I’ve “launched” MPHD with a backlog of just the general management topics from previous RCT issues; that means that a few issues are missing entirely, which is why the issue numbers are different.
I’ll keep posting general “person from the world of research managing teams in an organization” material preferentially in MPHD.
Material that will show up here in RCT more consistently will cover topics like:
I’ve heard from several people that they didn’t mind, or even preferred, having everything in one place. But I need to try something different for my own sake - I was trying to keep the management material as general as possible to help as many people as possible, while also keeping the RCD focus. I was driving myself bonkers trying to do both at once in one email.
So we’ll keep things increasingly separate for the next couple of months and reassess.
I can’t reassess, of course, without your feedback! So as always, please let me know what you think. And not just what you think of how the split of topics is going. For instance, for those who do decide to keep an eye on MPHD, let me know the pros and cons of using Substack, which is where I’ve got that stashed for now.
And with all that, on to the roundup!
Research Computing Teams Interviews: UCL ARC - James Hetherington, Jonathan Cooper, Donna Swann, and Chris Langridge
When I spoke with Ian Cosden (#154) about the RSE program at Princeton in general, one of the topics we covered was the career ladder he’s developed for RSEs in Princeton, the preconditions that had to be in place before then, what he had to do to make it happen, and what’s now possible.
In this interview, James Hetherington, Jonathan Cooper, Donna Swann, and Chris Langridge from UCL ARC talk specifically about their work with career ladders, and about the work they’ve put in over the past several years to improve hiring, retention, professional advancement, and job role clarity on their team. The ARC team, with its larger remit, had to consider careers for software, systems, data steward, data scientist, and research management staff in parallel.
Here too, many preconditions had to be in place first, and the work had to be done methodically. UCL ARC took small steps which at each point solved some problem for them, and laid the foundation for the next:
While there are institutional differences, it’s the similarities that are more striking. Career ladders for these new roles can’t be started right away, are a lot of work, and the levelling requirements will depend a lot on local needs and context. But once done, they’re a powerful instrument for change not only for the working conditions in the team that originates them, but across the institution.
Structural Problems Don’t Yield to Local Solutions - Jonathan Dursi, Manager, Ph.D.
This is the sort of article which I think will appear more in Manager, Ph.D. than here - general “person from the research world managing a team in an organization” advice.
Anyway, I focus on key tools we need, and on what we can change. That means focusing more on our teams and our relationships with peers. Opening up those lines of communication to productive conversations is always the first step. But some problems are bigger than what can be fixed that way. One-on-one-ing harder won’t change them.
You can tell there’s a bigger issue when:
The key things to remember when you’ve identified such a bigger issue are:
Doing More With Less (Webinar) - Claire Lew
With things changing dramatically in tech, we’re going to see a lot more articles from tech management on ruthless prioritization - which suits me just fine.
Our teams of experts are pretty much always small, and often chronically under-resourced compared to what we’d like to accomplish. And yet a lot of the managers I speak with still need help with not trying to do everything.
I’ll talk about this more next week, but if we’re going to have the maximum impact with finite resources, we need to be almost monomaniacally focussed on doing the work that makes the biggest difference, and getting really good at it. Doing a little bit of this and a little bit of that won’t cut it. It fails to maximize impact in two different ways - it doesn’t focus the effort where it can do the most good, and it means our team can’t improve and grow as quickly as it could if we were focussed on one area.
Lew’s talk on this - not just focussing but on scaling that focus - is great. About half the talk is about dealing with the possible morale fallout from a sudden big change. That’s not something that’s super relevant to our context. But the rest is dead on:
Writing an engineering strategy - Will Larson
Larson, whose advice above was so useful for understanding where strategy is now, talks about creating strategy to address new challenges. He’s influenced strongly by Rumelt’s Good Strategy/Bad Strategy book, which has made several appearances here, and writes his strategies in three sections:
There’s lots of good material in here, and if you’re interested it’s a good read. I want to emphasize one point for our context.
A disproportionate amount of what’s written out there on strategy is for corporate leaders, who can take their companies in entirely new directions if they like. That’s not an option available to us - and indeed not to most managers and leaders. We’re part of larger institutions, and need to work together. That’s true in the private sector, too. Here Larson emphasizes that the engineering strategy has to support the business strategy. A brilliant engineering strategy which doesn’t help the business isn’t, in fact, a brilliant engineering strategy.
It’s the same for us. Our teams’ strategies have to support the institution’s strategies. They’re working at cross-purposes otherwise. At best there’s wasted effort.
Institutions differ, but none of us have VPRs whose strategy is “support all researchers the same”, or CIOs whose strategy is “a bit of this and that - whatever people ask for, really”. At any given moment, those two organizations are marshalling efforts to help the institution grow and thrive in ways that meet the possibilities and the challenges the organization sees.
The more we can align our team’s strategies with those of the organization we’re part of, the more likely we are to be successful, and the more impact our successes will have. We’ll come back to this more next week.
Asking Your Project Team to KISS (Keep/Improve/Start/Stop) - Mark Warner
As you know, just as one-on-ones are key for managing or leading individuals, retrospectives are absolutely key to managing and leading effective teams (#137). Here Warner talks about retrospectives particularly in project work. Constructing lessons-learned documents at project closeouts is part of this, especially if there are other teams that could benefit from what your team has learned, but Warner points out this should be a continuous, ongoing process.
Warner uses the K/I/S/S formulation for brainstorming these sessions, but there are many such frameworks for getting input, and it can be beneficial to cycle between them to keep the meetings fresh (#61).
Warner emphasizes the importance of documenting what comes up and presenting results, especially on changes made in response to people’s suggestions. This helps make team members feel heard and will encourage more contributions, and so faster improvement in the team’s work.
Warner also lists two other caveats:
Building a Great Relationship With Your Boss - Paulo André
We’re often experts, leading a team of experts, managed by someone who isn’t an expert in our area. The good news is that we’re often given a lot of autonomy to run our team’s work as we see fit. But it also means that we can become quite detached from our bosses’ goals and needs, and by extension those of the organization.
It is really important, if we’re going to get the support we need from the larger organization and in particular our boss, to line up our team’s work, at least in part, to support our boss’s goals. That means, amongst other things, knowing what they are!
André gives some advice about developing a great relationship with your boss, even if right now you don’t talk to them much. He gives four pretty good questions that we really should know the answers to (and the answers will change over time):
How to write a great extended leave document - Ben Balter
In #132 we talked about the usefulness of vacation as a way of practicing delegation if you’re not doing it already. Here Balter shares his template “going on leave” document, a short document that you can keep maintained and then share if you’ll be going away -
What else would you keep on this document? Are there other things that could be usefully documented here? Let me know - just hit reply or email me at firstname.lastname@example.org; I think an RCT template for this could be really useful.
How software engineering behavioral interviews are evaluated at Meta (from an ex-Meta manager) - Lior Neu-ner
Behavioural interviews are underrated in our line of work, where we focus on expertise. But behavioural interview questions are great ways to have people demonstrate how they used that expertise in real situations.
The key for these being useful, however, is to (a) have good and relevant questions that would give some signal as to how candidates might succeed or struggle in the actual job, and (b) to have a good rubric ahead of time, with agreement about what would and wouldn’t be a good answer.
Neu-ner describes how at Meta they ask behavioural questions to assess motivation, ability to be proactive, ability to take ownership in an ambiguous situation, perseverance, conflict resolution, empathy, growth, and communication. And crucially they have expectations for junior, senior, and staff-level answers to these questions.
Not all of this will apply directly to our teams, but I like the breadth of questions here and levelling. One thing I’d add is that you’ll get better answers if you let people know why you’re asking and what you’re looking for - e.g. not just “Tell me about a time when you wanted to change something that was outside of your regular scope of work,” but “Our work here means that people get pulled into a wide variety of work, and so it’s important that our team members are comfortable tackling new challenges. So, tell me about a time when you wanted to change something that was outside of your regular scope of work.”
I’ll also add that it can be very useful to share the questions ahead of time - you’ll get better and more relevant answers. The real value in the question is not the immediate answer, but in the followup questions and back-and-forth as you dig into how they did that thing, and why they decided to do that over something else. If people know the questions, you can get straight into the meat of the interview - those followups - faster.
Ah, this is interesting - a JupyterLab desktop application, with its own bundled JupyterLab for running locally, which can also open sessions to remote JupyterLab servers. Amongst other things this means easier start-up for new users, and (I think?) you can set all your UI settings the way you like them in the desktop app once and for all and have the remote sessions honour them…
Data Migration Tips - Josh Tolley
Migrating data sets that people rely on is, not to put too fine a point on it, kind of scary. A lot can go wrong.
Tolley gives his hard-won advice:
A reader chimes in with three other points for working with stakeholders:
Recruiting developers into Site Reliability Engineering (SRE) - Ash Patel, SREpath
One of the roles I consistently see our teams struggle to hire for is “DevOps” type jobs, where the individual has to have a foot in both software development and operations.
We don’t typically need Site Reliability Engineers per se, a role that’s more about keeping systems operating at high levels of reliability. But SREs have a similar mix of capabilities, combining development and infrastructure operations, and I think we can learn useful tips from Patel’s article on recruiting developers into these cross-cutting jobs.
In Patel’s estimation, it’s easier to (successfully) pull developers into these roles than sysadmins - that’s been my experience, as well. And besides the increased job prospects (literally everyone is trying to hire for these kinds of roles), they are actually kind of fun - there’s more ambiguity, more complexity, in these jobs where the entire system, from infrastructure to software to external network connections, is in scope.
Patel encourages developers who might be interested to:
On our side, Patel counsels us to:
This could be great for training or maybe even getting ideas for interviewing - Sad Servers is “Like LeetCode for Linux”, a set of 18 Linux sysadmin problems where a server is spun up for you and you have to figure out the problem.
AGBT2023, one of the year’s big genome sequencing conferences, just ended. It’s a little early to find retrospectives, but a lot of the commentary (like this Twitter thread) is pointing out that there are now a few plausible candidates for technologies that will sequence a human genome for $200 or $100 in sequencing costs for consumables. That number had been hovering around $1000 for a long time.
So for those of us with large genomics users, there’s a decent chance that in the coming few years, some of them will be collecting 5-10x as much data for some of their projects.
“We find that open source [C] code containing swearwords exhibit significantly better code quality than those not containing swearwords under several statistical tests.”
A nice 80-page introduction to deep learning models for people with a computational physics background.
A walkthrough of a 1950s analogue computer with 2,781 parts for determining the airspeed and altitude of fighter planes - The Bendix Central Air Data Computer (CADC).
A lot of groups are beginning to have to think about handling sensitive data for the first time. This short UN guide for privacy enhancing technologies for sensitive data analysis (like differential privacy, homomorphic encryption, secure multiparty computation, and distributed learning) is intended for decision makers at national statistical agencies, but it’s a pretty good crash course into what the differences between the tools are.
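To make the simplest of those techniques concrete, differential privacy’s basic Laplace mechanism for a count query fits in a few lines. This is just a toy sketch - the data and function names are made up, and real deployments need careful privacy-budget accounting:

```python
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale): a random sign times an exponential."""
    return random.choice((-1.0, 1.0)) * random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon):
    """Epsilon-differentially-private count of records matching predicate.

    A count query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [34, 61, 45, 29, 52, 38, 70, 44]
noisy = dp_count(ages, lambda a: a >= 50, epsilon=1.0)  # true count is 3
```

The smaller epsilon is, the more noise gets added and the stronger the privacy guarantee; answering many queries about the same data means splitting that epsilon budget across them.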
Relatedly, with fully homomorphic encryption, mathematical expressions need to be translated into operations which perform the corresponding calculation within the cryptosystem - here’s an open source FHE compiler for C++ from Google.
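Getting a feel for what “computing on encrypted data” means doesn’t need full FHE; a toy additively homomorphic scheme like Paillier shows the principle in a few lines. This sketch uses absurdly small, utterly insecure parameters - real FHE schemes, and Google’s compiler, are far more involved:

```python
import math
import random

# Toy Paillier keypair. These primes are tiny and insecure - this only
# illustrates the homomorphic principle, nothing more.
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1                               # standard choice of generator
lam = math.lcm(p - 1, q - 1)            # private key
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    """Encrypt 0 <= m < n under the public key (n, g), with fresh randomness."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:          # r must be a unit mod n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# The homomorphic property: multiplying ciphertexts adds the plaintexts.
c_sum = (encrypt(3) * encrypt(4)) % n2
assert decrypt(c_sum) == 7
```

Paillier only supports addition (and multiplication by known constants); the “fully” in FHE means supporting both addition and multiplication on ciphertexts, which is what makes compiling arbitrary expressions possible - and so much harder.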
I find this 10 min AWS HPC video talking about life sciences customers’ adoption of the cloud really interesting. They find that even in this pretty specialized sub-area, there are extremely heterogeneous workloads having to run on pretty homogeneous machines, with the waste of resources that implies. They also talk about the usefulness of a mix of reserved, on-demand, and spot instances.
And that’s it for another week. Let me know what you thought, or if you have anything you’d like to share about the newsletter or management. Just email me or reply to this newsletter if you get it in your inbox.
Have a great weekend, and good luck in the coming week with your research computing team,
Research computing - the intertwined streams of software development, systems, data management and analysis - is much more than technology. It’s teams, it’s communities, it’s product management - it’s people. It’s also one of the most important ways we can be supporting science, scholarship, and R&D today.
So research computing teams are too important to research to be managed poorly. But no one teaches us how to be effective managers and leaders in academia. We have an advantage, though - working in research collaborations has taught us advanced management skills, but not the basics.
This newsletter focusses on providing new and experienced research computing and data managers with the tools they need to be good managers without the stress, and to help their teams achieve great results and grow their careers.
This week’s new-listing highlights are below in the email edition; the full listing of 165 jobs is, as ever, available on the job board.
Cryptographic Research Architect, Open Quantum Safe Project - University of Waterloo, Waterloo ON CA
The Department of Combinatorics and Optimization at the University of Waterloo invites applications from qualified candidates for a 1.5-year position as a Cryptographic Research Architect on the Open Quantum Safe project (https://openquantumsafe.org/). This position is available immediately in Professor Stebila’s research group. You will be working with a world-wide team of researchers and developers from academia and industry on the Open Quantum Safe project. You will have the opportunity to push the boundaries of applied post-quantum cryptography and contribute to various open-source projects. You will help integrate new post-quantum cryptographic algorithms into the liboqs open-source library, and design and implement techniques for evaluating and benchmarking these cryptographic algorithms in a variety of contexts.
Lead Software Engineer - CSIRO, Hobart AU
The role of Technical Services staff in CSIRO is to provide support for scientific research in a diverse range of laboratory and field situations across a range of different research projects. This support consists of the application of accepted technical practices and the development of new practices. The work is usually carried out as a member of a centralised service
Manager-Research Computing Data Facilitation - University of Alabama at Birmingham, Birmingham AL USA
UAB Research Computing is a dynamic and high-functioning department. You will lead a growing team of scientists and research facilitators with a focus on the application of data science methodologies in support of UAB research. Enjoy opportunity and autonomy to design and implement cutting-edge technologies and services. UAB received a record $715M in research grants and awards for FY22. Research Computing serves an integral role in this success. Be a key contributor for changing the world!
Lead Data Analyst - Federation University, Ballarat AU
You will facilitate the development and management of strategic analytics assets that support data driven decision making across the organisation and develop new analytics that are accurate, user-friendly and insightful.
Director, Computational Biology - New Technology Analytics - Bayer, Cambridge MA USA
In Bayer Research & Early Development Oncology, we discover the next generation of cancer therapies with scientific innovation hubs in Berlin, Boston, Basel, and Oslo. Translational Sciences Oncology is a multidisciplinary department that harnesses cutting-edge technologies to enable earlier detection, precise molecular diagnosis, and personalized treatment for cancer patients. This Director role is responsible to formulate and to implement computational biology strategy to harness innovative technologies that will further our future vision of cancer diagnosis and individualized treatment. This Director will oversee technological innovation and scientific excellence for complex, quantitative analyses of specialized multi-omics, imaging, and/or real-world data. In particular, this role will establish quality assurance, confirmatory evidence, and clinical utility for new technologies, such as digital pathology and radiomics. This role will initiate, delegate, and manage innovative research projects leveraging internal and external collaborations. Through an extensive network of colleagues throughout Bayer, this role will successfully implement cross-functional solutions. This role will advance scientific leadership within the company, our network of external partners, and the broader cancer research community.
Director, Advanced Data and Cloud Technology - Providence Health Care, Vancouver BC CA
Reporting to the Executive Director, Health Informatics, the Director, Advanced Data & Cloud Technology shapes the analytics capabilities of the organization through the development of PHC-wide data platform, data engineering, data acquisition strategies. This role is responsible for developing, in close collaboration with partners, the roadmap for utilizing advanced modelling, data science, and artificial intelligence techniques for a wide range of healthcare applications. This includes ensuring this roadmap aligns with the organization’s Digital Health Strategy and ensuring sufficient resources and skills exist within the team to execute that vision. Assumes overall responsibility for the product lifecycle management of software and platforms that fall outside of IMITS direct support.
Principal Workload Performance Architect - Tenstorrent, Toronto ON CA
As a performance architect in the dynamic and motivated Tenstorrent Platform Architecture team, you will work in a cross-functional team on ML software stacks, HPC and general-purpose workloads, graph compiler, cache coherence protocols, superscalar CPU, fabric/interconnection, networking, and DPU. Perform full-stack workload characterization and performance analysis for AI, HPC, and CPU general-purpose applications. Identify representative benchmarks for the workloads. Perform data-driven analysis based on software profiling, performance model simulation, or analytical models to evaluate software and architecture solutions to PPA.
Manager of Data Sciences - Giant Tiger, Ottawa ON CA
To build, lead and direct the Data Sciences program at Giant Tiger leveraging the data to drive revenue, productivity, customer experience and profitability for the company. Reporting to the Head of Analytics and Data Sciences, the Manager of Data Sciences and their team will perform statistical analysis, build models and leverage machine learning to solve complex business problems and drive the value of data in the organization through customer interactions and operational executions.
Trusted Data, AI & ML lead - Leidos, Canberra AU
As the Trusted Data, AI & ML Lead you will play a role in leading the Data Practice in the management, analysis, manipulation, and interpretation of data. You will be a thought leader in the Bid/Program space to solve some of Government and Defence’s biggest challenges. You will proactively shape thinking of what data, how it is leveraged, using what technologies/platforms for which business benefits. You will also advocate for improved approaches to strategic service delivery by supporting digital capabilities and contributing to information sharing frameworks.
Manager, Data Scientist - Biogen, Weston MA USA
This role will be responsible for deriving actionable insights for marketing and leadership teams by leveraging all available data assets. In this role, you will need to have the ability to influence decision-makers to adopt a quantitative approach when measuring impact, success and other outcomes. Ability to translate technical findings into actionable insights for business stakeholders is a key aspect of this role. Projects can range from patient finding initiatives across therapeutic areas, Marketing Mix models, all the way to AI-driven models to support next best action. This position is a high visibility role that will touch many aspects of the company.
Software Engineering Manager - Space Protection Programs - Lockheed Martin, Littleton CO USA
Space Protection Programs is seeking a skilled Software Engineering Manager who can guide a team through complex mission application software development lifecycles. Candidate must have the ability to take ownership of software development activities and drive the application of agile methods that require a multifaceted execution approach, depending on the time-phased needs of multiple programs of various sizes and complexity. Responsible for frequent use and application of software engineering standards and techniques, including Object Oriented Design and Agile development techniques in addition to the training, coaching, and mentoring of agile teams and leaders. Will work in a highly collaborative environment with frequent and direct interaction with peers and occasional customer interactions.
Scientific Engineering and HPC Senior Manager - Alta IT, Rockville MD USA
As a Scientific Engineering Senior Manager, you will lead a team of eleven innovative and savvy engineers, focusing on building collaborative relationships with the intramural scientific community to better understand their technology needs and challenges, while leading a team of infrastructure engineers to design and implement systems that support, enable, and accelerate scientific research. Position is primarily remote. However, you must live within commuting distance to the client site in Rockville, MD in order to attend monthly or ad hoc meetings as needed.
Scientific Software Developer/ Bioinformatician Team Lead - ITBMS, Toronto ON CA
GERMLINE harmonizes and integrates Human Genetics results with other types of results (transcriptomic, knockout, etc) and provides a portal by which scientists can interactively explore the insights these results provide. As a part of the project, a pipeline has been developed that helps in finding evidence for causality of genes. The portal democratizes access to these results and eases decision making for biologists throughout the company. This has been a successful effort: multiple interesting findings have been reported in the few months since it has gone live. This is an exciting opportunity to conduct clinical-translational bioinformatics research. The successful candidate will be part of a core development team with additional leadership responsibilities.
Open Source Manager - Genomics England, London UK
We have a unique opportunity to recruit an Open Source Manager that will lead on the technical aspects of the Diverse Data initiative’s tooling, products and behaviour workstream by sourcing, curating, building and implementing tools that improve equity in genomics via a new, global open source initiative that Genomics England is leading.
Bioinformatics, Team Lead - Canon, Edinburgh UK
At Canon Medical Research Europe, helping care teams to get the best outcome for their patients is at the heart of everything we do. This is an excellent opportunity to provide scientific and technical leadership in an intellectually stimulating team-based environment, with the ability to guide and contribute to leading edge technology in the medical imaging industry. You will lead our bioinformatics data science projects and the activities of a small team of high-performing scientists, engineers, and clinical analysts, within our AI Research Centre of Excellence, reporting to the AI Research Technical Manager. You will lead the concept creation and development of innovative algorithms in bioinformatics and machine learning. Applications include prognostic prediction in cancer screening based on multi-omics data (e.g. somatic genomes, transcriptomics and immunohistochemistry) in combination with multi-modal data from health records.
Head of Bioinformatics Data, Software and Pipeline Engineering - Sanofi, Cambridge MA USA
The Computational Biology Cluster is part of the Precision Medicine & Computation Biology (PMCB) global research function at Sanofi. We are looking for a leader in Computational Biology with deep expertise in building bioinformatics solutions, software, data and analytics workflows. The post holder will lead the new Bioinformatics Data, Software and Pipeline Engineering team in the Computational Biology cluster and helps to index, integrate, and derive new biomedical insights from biomedical big data.
Senior Manager, Data Science - University of Sydney, Sydney AU
The Senior Manager, Data Science will lead a team of data scientists to deliver relevant actionable insights and machine learning solutions to empower Sydney University staff to deliver impactful teaching and research, drive sustainable performance, and provide outstanding student experiences. This position has leadership responsibilities to guide, develop and manage a team of professionals, including managing performance and development and delegation of work activities. This is a key role in developing machine learning and analytics capability within the university. You will have access to cutting edge technologies and a wide array of data sources to power our machine learning ecosystem.
Data Engineering & Standards Lead - GSK, Ware UK or King of Prussia or Collegeville PA USA
The Data Engineering & Standards Lead will partner closely with diverse internal stakeholder groups, including Data Standards & Governance (DS&G), Commercial (GSC) Tech, and R&D Tech, to create CMC data and recipe standards (as well as implement the governance tools necessary to maintain those standards). The role will be accountable for the practical application and embedding of those standards within Clinical Supply Chain (CSC) facilities (partnering with local resources).
Senior Manager, Data Scientist - Charles Schwab, CA USA
The Sr. Manager, Data Scientist has an experienced data engineering background, strong business acumen, an eagerness to learn and teach in a team setting, and a passion for solving meaningful problems. The individual should be tech-savvy and familiar with the best practices related to the development of advanced quantitative models.
Real World Evidence Data Scientist, Associate Director - AstraZeneca, Gaithersburg MD USA
AstraZeneca’s vision in Oncology is to help patients by redefining the cancer-treatment paradigm, with the aim of bringing six new cancer medicines to patients between 2013 and 2020. A broad pipeline of next generation medicines is focused principally on key disease areas – lung, breast, GU, GI, and hematological cancers. As well as other tumor types, these are being targeted through five key platforms – immunotherapy, the genetic drivers of cancer and resistance, DNA damage repair, HER2 and antibody drug conjugates, underpinned by personalized healthcare and biomarker technologies. Leverage routinely collected data from healthcare settings to provide health analytics and insights in a range of contexts including Public Health, Pharmaceutical Research and Development and Commercial/Payer. Collaborate with colleagues in Epidemiology, Statistics and Payer, giving scientific and technical guidance on study design, RW data selection and best practice in RW data utilization.
Product Manager - ReelData AI, Halifax NS CA
ReelData is a startup that brings AI disruption to the aquaculture industry. The core of our technology is backed by deep learning and machine learning algorithms that allow fish farms to reduce the intrusiveness and resources needed to understand various metrics around their fish. The newly created role of Product Manager will be responsible for product planning. They will work with internal teams to survey customers about products and tell technical teams which features new products need. The Product Manager will predict production costs and prices for future products to ensure that they’ll be on track and that new products fit the company’s vision. ReelData is currently focused on bringing our ReelBiomass product to the market. Through our underwater cameras and AI, we can monitor the weight, size, and distribution of fish populations.
Physical Scientist - National Oceanic And Atmospheric Administration, Silver Spring MD USA
This position is located in the National Oceanic and Atmospheric Administration (NOAA), Center for Operational Oceanographic Products and Services (CO-OPS), with one vacancy in Silver Spring, MD. As a Physical Scientist, you will perform the following duties: Serve as focal point for coastal ocean modeling requirements to be addressed by the Infrastructure Investment and Jobs Act (IIJA) CIFIM-10 and CIFIM-12 funding through regular engagement with the National Weather Service, the National Ocean Service, the Office of Oceanic and Atmospheric Research, and IIJA funded extramural partners. Establish and maintain the long-range objectives and requirements to be achieved with IIJA CIFIM-10 and CIFIM-12 funding. Manage and coordinate progress among internal and external partners. Identify issues and risks and mediate inconsistencies and divergence in scope and progress of IIJA CIFIM-10 and CIFIM-12 funded projects. Communicate project goals and outcomes, and organize or participate in relevant modeling conferences, meetings and panels.
Modelling and Analytics Manager - Unilever, Bebington UK
This role sits within the Home Care R&D Digital Team where we focus on developing and deploying the digital capabilities to deliver 4 key business goals: clean future innovation, win with consumers, reduce cost & complexity and agile implementation. Modelling is a key digital pillar to enable these business goals. In 2022, Home Care has successfully demonstrated the impact of using models and optimizers to deliver >20 million in savings. To further accelerate our digital transformation, we are looking for a leader who is passionate about defining the Home Care R&D Modelling programme, linking it with business goals and building a world-class in-silico capability across Science & Technology, Category and BU in Home Care.
Senior Software Engineering Project Manager - ESRI, Melbourne AU
Bring your passion for project management and GIS knowledge to a dynamic and highly successful team. Our Software Engineering team in Melbourne is looking for an experienced, creative, and hard-working GIS project manager to help deliver leading edge cross-platform mapping and GIS solutions that run on the latest mobile devices. Whether it be managing multiple projects that can be organized into a program, or overseeing parallel independent projects that need a guiding eye, you will be integral in the delivery of output by our team.
Program Manager – AI Program, Langone Health - New York University, New York City NY USA
Looking for a highly motivated Program Manager with a passion for facilitating cutting edge research and development at the intersection of healthcare and artificial intelligence to join our interdisciplinary team of research scientists, clinicians, and engineers. As a Program Manager, you will hold a key administrative role to assure the successful startup and development of the AI program. Furthermore, you will oversee the operations of research and development activities within the department. In particular, you will coordinate and facilitate intra/inter-department AI projects, create and communicate project plans and milestones, serve as a liaison with principal investigators across multiple interdisciplinary projects, research and prepare strategic materials in collaboration with the leadership, oversee educational endeavors, and help in the growth and marketing of the program. You will operate under standard regulatory procedures/protocols and ensure compliance across the board.
Senior Manager, Bioinformatics - Natera, San Carlos CA USA
Senior Manager, Bioinformatics is a hands-on role requiring experience with high-throughput biological data and a strong interest in quantitative biology and bioinformatics pipeline development. The ideal candidate will manage a small team of bioinformaticians, who work closely with talented R&D Molecular Biology scientists in the Assay Development team. The main responsibility is to lead the computational aspects of assay development, focusing on analysis and optimization of new chemistries and lab techniques. This role will also work cross-functionally with multiple teams including clinical and product to validate a CDx MRD assay.
MLOps Engineering Lead - Digital Data, Large Molecule Research - Vaccines - Sanofi, Cambridge MA USA
You are a dynamic MLOps specialist interested in challenging the status quo to ensure seamless MLOps that scale up Sanofi’s AI solutions for the patients of tomorrow. You are an influencer and leader who has deployed AI/ML solutions with technically robust lifecycle management (e.g., new releases, change management, monitoring and troubleshooting) and infrastructural support. You have a keen eye for improvement opportunities and a demonstrated ability to deliver using software engineering and MLOps skills while working across the full stack and moving fluidly between programming languages and technologies.
High Performance Computing Architect - Mount Sinai Hospital, New York City NY USA
The HPC Architect, High Performance Computational and Data Ecosystem, is responsible for the architecture, design and deployment of Scientific Computing’s computational and data science ecosystem. This ecosystem includes high-performance computing (HPC) systems, clinical research databases, and a software development infrastructure for local and national projects. To meet Sinai’s scientific and clinical goals, the Architect has a deep technical understanding of the best practices for computational, data and software development systems along with a strong focus on customer service for researchers. The HPC Architect is an expert troubleshooter and productive team member. The incumbent is a productive partner for researchers and technologists throughout the organization and beyond. This position reports to the Director for Computational & Data Ecosystem in Scientific Computing. Specific responsibilities are listed below.
Data Engineering Manager (Head of Large Molecule Research Data Foundations) - Sanofi, Toronto ON CA
You are a dynamic people manager with a background in data engineering looking to drive globally scalable solutions for Sanofi’s advanced analytic, AI and ML initiatives in our LMR digital center of excellence. You are an influencer and leader who has deployed data engineering and integration solutions with technically robust lifecycle management (e.g., new releases, change management, monitoring and troubleshooting). You have a keen eye for improvement opportunities while continuing to fully comply with all data architecture, platform, quality, and governance standards. You have worked with Research & Development (R&D) in pharma or biotech and are well versed in the use of technology to improve the design of drugs and molecules.
Senior Research Team Lead - Borealis AI, Montreal QC or Toronto or Waterloo ON or Vancouver BC CA
Borealis AI is looking for a Senior Research Team Lead. The role includes managing a team of research team leads, researchers, and research engineers with the goal of delivering AI-based products for the financial services industry. In addition, you will lead initiatives that improve the effectiveness of the research organization.
Senior Technical Program Manager, Quantum Security Group - SandboxAQ, Remote USA
We are looking for a Technical Program Manager to help lead the development of the next generation of cybersecurity systems. As a Technical Program Manager, you will work with product managers, engineering leads, and various other stakeholders to help refine the product vision and roadmap, and bring it to life by turning that vision into concrete and executable launch plans. You will manage the scope, schedule, and risk mitigation plans in order to streamline the efficiency and effectiveness of the engineering team, increase velocity, and ensure our products are enterprise-grade. The candidate will be part of a diverse team consisting of cryptographers, mathematicians, ML experts, and physicists, where they will play a key role in developing these technologies efficiently and effectively.