RCT #176 - Nurture your best clients first. Plus: Write and get feedback on tech specs; technical documentation guidebook; wrap up projects; talent leaks out of funding gaps; reproducibility of jupyter notebooks; 10 simple rules to manage lab data; NIST on HPC security
Nurture your best clients first. Plus: Write and get feedback on tech specs; technical documentation guidebook; wrap up projects; talent leaks out of funding gaps; reproducibility of jupyter notebooks; 10 simple rules to manage lab data; NIST on HPC security
For the next few issues, I’m going to talk about our teams as businesses, because viewing our organizations this way is sorely needed and yet hardly ever talked about.
Now yes, every so often I attend a talk of some sort titled “The Business of Research [Computing/Core Facilities/Software Development/Data Science]”. And every single time I do, I hope to hear about:
- Developing products and services that researchers really want, and keeping them up-to-date
- Efficiency and effectiveness of teams
- Positioning our teams amongst the many many options researchers have available to them
- Basic discovery/sales/marketing conversations with VPRs and funders
- Basic discovery (#158)/sales/marketing conversations with researchers
- Writing good success stories and soliciting good testimonials
And every time, what I hear instead is a discussion of the funding compliance rules for the relevant jurisdiction, what money can and can’t be spent on, and some suggestions about how to structure things to meet those requirements.
Now, I get why that is. The rules are incredibly baroque, pretty onerous, non-compliance can have huge consequences, and yet our staff need regular paycheques. It’s a lot to deal with and has to be done. (I actually have a draft in here somewhere of an essay about how the structure, rules, and regulations around our funding, even more so than the amount, hurts science by handcuffing teams in their offerings, forcing them to support science in sub-optimal ways. That’s for another day.)
While not breaking any laws is certainly a necessary starting point, we can and should aspire to greater things than merely “spends money in a way compliant with local regulations”.
There’s a basic feedback loop, which we want to turn into a flywheel (#44), which keeps our team funded: we provide products and services for researchers, combining our expertise with theirs to produce research impact, which funders adjudicate and then fund us and the researchers.
Our teams’ goals are to advance research and scholarship in our communities and institutions as far as we can, given the constraints we face. To do that, we want to focus our efforts on generating as much research impact as possible.
There’s a lot of ways we can do this. Some of them take a lot of time and effort - structuring ourselves around value-add professional services, rather than simply being a cost-centre utility (#127 - tl;dr, cost centres get outsourced).
But if you’re just getting started thinking about how to increase impact, the highest reward-to-effort starting point in almost any sector is: start talking with your best clients more, and invest time in nurturing that relationship.
As leaders of service teams, we tend to be preoccupied by the communities (or even individual researchers) that we aren’t helping, that aren’t using our products and services for some reason. The fools! Don’t they know how great we are? Maybe if we taught their grad students more training courses….
Sure, growth can eventually happen there, and it’s worth thinking about at some point, but right now they aren’t using your team for some perfectly good reason. Investing more effort into courting them while not changing anything the team is doing isn’t going to be the best first-choice use of your time and effort.
Instead, it’s almost certainly more useful to spend more time with your best clients first.
Who are your best clients?
You might already have an initial sense of who they are. When I talk to people who are unsure, I encourage them to consider which groups they work with who:
- Produce high-impact work as a result of their work with you
- Get your VPR’s attention
- Are influential with fellow researchers, funders, the institution, and other stakeholders
- Do work that you can sustainably support with non-heroic amounts of effort
- Speak highly of your team
- Advocate for your team, or could conceivably do so if helped to do so
These are groups and individuals who are already successful with you, so they understand the value of what your team provides - in a detailed and nuanced way, and from a different perspective than yours. You can:
- Learn a lot from them about what your team does well and where it could improve (there are some discussion starters in the essay from issue #158)
- Quite possibly do more work with them (ask them what they need/would like you to provide/would help them do more of the work they’re already doing with you)
- Provide them what they need to market your services to their peers (give them a slide, some bullet points…)
- Provide them what they need to advocate for your group to decision makers (some talking points, know what you currently need from the administration, current stumbling blocks…)
- Get glowing testimonials and success stories from them
- Ask them to recommend you to related groups doing high-impact work or who could be doing high-impact work with your help.
There are three reasons to focus on these groups first.
They are already doing high-impact work with you. Our teams’ purpose is to have as much research impact as possible. Doing more work like you’re doing with them will almost automatically mean you are advancing more high-impact work.
Their high-impact work is supported by you, so you and they share an immediate and common interest in making sure your team has what it needs to do its job. They can advocate for you.
Their advocacy to decision makers, or their word of mouth to other researchers doing related work, will be 10x more effective than anything you can say (even if you’re the one giving them the words / bullet points / slides to say it!). Everyone proclaims their own team to be amazing and essential; people tune it out. If a successful researcher says your team is amazing and essential, people will pay attention.
So if your team has the capacity to be doing a little more, my first recommendation is always to start prioritizing current best clients and nurturing a relationship with them. Schedule a chat with them. Ask them some of the questions from #158. Try to set up some regular call (monthly or so) between someone in your team and them or someone senior in their group to make sure everything’s going smoothly. Find out other ways you can work with the group.
We tend to focus too much on weaknesses of our offerings, or areas where we have little uptake. Yes, that stuff matters. But unless you’re explicitly being directed otherwise, first build on your team’s strengths. Take the working relationships that are going really well and invest time and energy into them. Developing those clients is by far the highest bang-for-buck area to start with. Only after deepening and growing those connections should you start thinking about investing effort elsewhere.
What have you found that works and doesn’t work when it comes to developing client relationships with research groups? Let me know - email
jonathan@researchcomputingteams.org
And now, on to the post-long-weekend roundup!
Managing Teams
Over at Manager, Ph.D. in issue #168, I laid out the argument I made in my USRSE2021 talk - that people with PhDs (or other experience working in scientific collaborations) already have the advanced management skills they need to be great managers; they just need help shoring up the basic management skills we never saw modelled for us in the academy.
Also covered in the roundup:
- Lessons from history’s great R&D labs
- Discussion of a paper showing that remote work by default still reduces the amount of mentorship happening, which we haven’t developed best practices around addressing yet but need to
- Look for mentorship, not Your Mentor
- Managing ourselves - short-circuiting negative emotional triggers
Technical Leadership
How to Write Great Tech Specs - Nicola Ballota
Getting better feedback on your work - James Cook
One of the roles of a technical lead is to make sure problems are well posed, and often that involves writing a tech spec. Ballota gives good advice on writing technical specifications.
One key thing about writing tech specs that Ballota doesn’t cover: it’s not a one-and-done process. A good tech spec is iterative, and you get feedback from the team (and stakeholders) at different stages.
I like Cook’s approach to making sure you’re getting the right level of feedback at the right stage. Early on you’re looking for big-picture feedback: is this even the right problem, does the approach outlined make sense? Cook calls this “30% feedback”, since the document is only ~30% of the way to completion. Later on you’re looking for more fine-grained feedback - is that diagram clear, are these requirements detailed enough. Cook calls that “90% feedback”.
I like to not only make it explicit how fully-baked something is, but to make the document look like a rough draft when it is in its early stages - hand-drawn diagrams, bullet points, unformatted text. My experience is that people are much more comfortable raising big questions when the thing you’re showing them doesn’t look like a finished document.
Product Management and Working with Research Communities
Software Technical Writing: A Guidebook - James G
We know that most of our products are pretty under-documented.
The author, James, wrote a series of blog posts, “Advent of Technical Writing”, in December, and has now refactored those into a coherent book (epub, PDF) laying out a way of thinking about technical writing broadly, suggesting stylistic approaches for different types of documents, and describing how to go from a bullet-pointed outline to a full document. James titles the book “Software Technical Writing”, but it applies equally to research computing systems, data science or bioinformatics workflows, etc.
It’s quite good, and given the way it’s written it could easily be customized or extended into a local standards guide for particular locales and products.
Maybe relatedly, Google Season of Docs, the sibling of Summer of Code, is now open - organization applications are due 2 April. This could help a lot of products out there, and if your team knows it needs to improve its documentation you could get things started with this guidebook, identify specific gaps, and have a concrete project written up in a few weeks.
The Consulting Project Wrap-Up Checklist - David A. Fields
Sort of related to the lead essay this week about nurturing relationships, I’ve commonly seen teams where projects we’re doing with a researcher just sort of fizzle out at the end. “Ok, we’re finished - let me know if you have any problems!”
That’s a missed opportunity. Researchers are in a celebratory mood at the end of a project, looking to get the paper out and already thinking about the next project. This is a great time to follow up with them! Close out the project with a celebratory note, ask for a retrospective on how it went, ask for a testimonial or to write up a success story, and ask what they'll be working on next and whether there are things the team can do to help.
Beginnings are auspicious times, but so are endings. Take advantage of a shared success and ask to build on it.
Science Policy and Funding
The Value of Stability in Scientific Funding, and Why We Need Better Data - Stuart Buck
Scientific Talent Leaks Out of Funding Gaps - Wei Yang Tham et al, arXiv:2402.07235
Buck writes about the tour de force paper by Wei Yang Tham et al. They linked data from NIH ExPORTER, UMETRICS (university administrative data on grant transactions), and US Census Bureau tax and unemployment insurance records. With that, they looked at a natural experiment of sorts - what happened to the people employed in labs that experienced a funding interruption (caused, for instance, by the US federal government not passing funding resolutions, delaying otherwise-successful grant renewals).
That might seem like an edge case, but a shocking ~20% of R01s renewed in FY2005-2018 had at least a 30 day interruption, with a median length of 88 days.
Personnel in those interrupted labs that had been funded by a single R01 had a 40% (or 3 percentage point) increase in the likelihood of becoming unemployed - and they were mainly grad students and postdocs. You might think that they preferentially went off and got great jobs in industry, but when the unemployment is sudden and unplanned for, it doesn’t work that way; those that did leave employment when the lab was interrupted eventually found employment at 20% less salary on average, and typically aren’t publishing.
This situation affects our own teams, too. Our funding is notoriously uncertain, and if anything I’d imagine we lose even more people when there are funding interruptions - not least of which because when we have them it’s going to be a lot longer than 30 days.
There’s not much we can do here, but we can be aware of it, and this paper might be a useful starting point for discussions with funders when we talk about the effects of funding uncertainties.
Research Software Development
Computational reproducibility of Jupyter notebooks from biomedical publications - Sheeba Samuel & Daniel Mietchen
This is a fun paper, which required a really impressive automated pipeline to do, and I think people are drawing the wrong conclusions from it.
Out of 27,271 Jupyter notebooks from 2,660 GitHub repositories associated with 3,467 publications, 22,578 notebooks were written in Python, including 15,817 that had their dependencies declared in standard requirement files and that we attempted to rerun automatically. For 10,388 of these, all declared dependencies could be installed successfully, and we reran them to assess reproducibility. Of these, 1,203 notebooks ran through without any errors, including 879 that produced results identical to those reported in the original notebook and 324 for which our results differed from the originally reported ones.
This is pretty cool! Over 22.5k notebooks from almost 3.5k papers were tested, to which, well, hats off. Super cool work.
There’s a lot of focus in discussions of this work on the fact that of those 22.5k notebooks, only 1.2k ran. I don’t think that’s anything like the biggest problem uncovered here.
For a one-off research paper, the purpose of publishing a notebook is not so that someone else can click “run all cells” and get the answer out. That’s a nice-to-have. It means that researchers that choose to follow up have an already-working starting point, which is awesome, but that’s an efficiency win for a specific group of people, not a win for scientific reproducibility as a whole.
There are people saying that we should have higher standards and groups who publish notebooks should put more people time into making sure the notebooks reliably run on different installs/systems/etc. That’s a mistake. For the purposes of advancing science, every person-hour that would go into doing the work of testing deployments of notebooks on a variety of systems would be better spent improving the papers’ methods section, code documentation, and making sure everything is listed in the requirements.txt, and then going on to the next project.
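As an aside, even the crude version of “make sure everything is listed in the requirements.txt” is cheap. A minimal sketch of what I mean (my example, not something from the paper, and the helper name is made up) is just to pin every package installed in the environment the notebook was developed in - blunt, but it records versions that would otherwise be lost:

```python
# Minimal sketch: snapshot every installed package as a pinned requirement.
# Over-broad, but it at least records the versions the notebook actually ran with.
from importlib.metadata import distributions

def pinned_requirements(path="requirements.txt"):
    pins = sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in distributions()
        if dist.metadata["Name"]  # skip any entries with malformed metadata
    )
    with open(path, "w") as f:
        f.write("\n".join(pins) + "\n")
    return pins

if __name__ == "__main__":
    print(f"Pinned {len(pinned_requirements())} packages")
```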
The primary purpose of publishing code is as a form of documentation, so that other researchers can independently reproduce the work if needed. But we know for a fact that most code lives a short, lonely existence (#11). Most published research software stops being updated very quickly (#172) because the field doesn’t need it. And trying to reimplement 255 machine learning papers showed that a clearly written paper and responsive authors were much more significant factors for independent replication than published source code (#12). If others are really interested in getting a notebook to run again, then presumably the problems will get fixed up as needed. The fraction of those notebooks that will ever be seriously revisited, however, is tiny.
To me, the fact that 6,761 notebooks didn’t declare their dependencies is a problem, because it represents insufficient documentation. That 324 notebooks ran and gave the wrong answers is a real problem, because it means there was some state somewhere which wasn’t fixed (again, an issue of documentation). That 5,429 notebooks could no longer have all their declared dependencies installed isn’t, to me, a problem much worth fixing, nor (necessarily) is the fact that 9,185 notebooks installed everything but didn’t run successfully (it depends on why).
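To make those drop-offs concrete, here’s the funnel using the counts quoted above - my own back-of-the-envelope arithmetic, not a table from the paper:

```python
# Where notebooks drop out of the reproducibility pipeline, using the paper's
# reported counts (my arithmetic, not the authors' presentation).
python_notebooks  = 22_578   # Python notebooks, out of 27,271 total
with_requirements = 15_817   # dependencies declared; rerun attempted
deps_installed    = 10_388   # all declared dependencies installed successfully
ran_cleanly       =  1_203   # executed without errors
identical_results =    879   # reproduced the originally reported results

funnel = {
    "no declared dependencies":        python_notebooks - with_requirements,  # 6,761
    "dependencies failed to install":  with_requirements - deps_installed,    # 5,429
    "installed, but errored on rerun": deps_installed - ran_cleanly,          # 9,185
    "ran cleanly, different results":  ran_cleanly - identical_results,       #   324
    "ran cleanly, identical results":  identical_results,                     #   879
}
for label, count in funnel.items():
    print(f"{label:32s} {count:6,d}  ({count / python_notebooks:5.1%} of Python notebooks)")
```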
Less controversially, this has been out for a while but I hadn’t noticed - Polyhedron’s plusFORT v8 is free for educational and academic research. It’s a very nice refactoring & analysis tool that even works well with pre-Fortran90 code. Most tools like VSCode or Eclipse shrug their shoulders and give up on older or non-F90 Fortran code, even though that’s the stuff that generally needs the most work. If you try this, share with the community how well it works.
The value of using open-source software broadly is something like $8.8 trillion, but would “only” cost something like $4.15 billion to recreate if it didn’t exist, according to an interesting paper. That 2000x notional return on notional investment suggests the incredible leverage of open source software. Also, only 5% of projects/developers produce 95% of that value, something that would likely be seen in research software as well.
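(The “2000x” there is just the ratio of the paper’s two headline estimates, rounded down:)

```python
# Ratio of the estimated value of open-source software to its estimated recreation cost.
value_usd = 8.8e12   # ~$8.8 trillion in value from use of open-source software
cost_usd  = 4.15e9   # ~$4.15 billion to recreate it from scratch
print(f"notional return: ~{value_usd / cost_usd:,.0f}x")   # prints ~2,120x
```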
Research Data Management and Analysis
Ten simple rules for managing laboratory information - Berezin et al PLOS Comp Bio
This “ten simple rules” article is a thoughtful overview of data management in the context of biological sciences experimental data. But they’ve tied their high-level rules to the scientific process in general, not any particular experimental modality, and so the higher level rules (even if not always each of the individual suggestions within the rules) are valuable widely to any data-based scientific explorations.
It’s a nice short well-structured paper chock full of hard-won recommendations, definitely worth a read.
Research Computing Systems
NIST SP 800-223: High-Performance Computing Security: Architecture, Threat Analysis, and Security Posture (final) - Yang Guo et al
The draft version of this document came out a year ago (#155); the final version is a little more nuanced and more explicitly spells out the tradeoffs between security and performance.
I personally find that tradeoff space interesting, and I think the days are waning where we can safely assume that the operator of every HPC cluster necessarily wants to peg the needle to max performance at the expense of flexibility and security considerations. Having the tradeoffs clearly spelled out is helpful in that regard, as is the process of getting everyone on the same page of different “zones” of a large computing system and their specific threats.
Random
It looks like thanks to a new policy by the Linux Kernel team, CVE numbers are going to get weird.
codeapi, a browser code playground with visualizations - and an example as interactive release notes for SQLite 3.45. (Have to admit that making release notes interactive would be a fantastic use case for something that lets you noodle around with new features).
Profila, a profiler for numba code.
Numpy 2 will break some stuff.
There’s a new configuration tool, a programming language called pkl. Surely this new configuration language will be the one that finally solves everything.
A deep dive into linux kernel core dumps.
Sometimes we overthink things. How Levels.fyi scaled to millions of users using Google Sheets as a back end.
NASA open-sourced F’, one of their frameworks for flight software on embedded systems.
The director of “Toy Story” is the one who drew the original BSD Daemon logo.
Turn most containers into WASM you can run in the browser with container2wasm.
Subset fonts to only the characters needed for your website’s text with fontimize.
Another slice-and-dice CSVs utility - qsv.
That’s it…
And that’s it for another week. Let me know what you thought, or if you have anything you’d like to share about the newsletter or management. Just email me or reply to this newsletter if you get it in your inbox.
Have a great day, and good luck in the coming two weeks with your research computing team,
Jonathan
About This Newsletter
Research computing - the intertwined streams of software development, systems, data management and analysis - is much more than technology. It’s teams, it’s communities, it’s product management - it’s people. It’s also one of the most important ways we can be supporting science, scholarship, and R&D today.
So research computing teams are too important to research to be managed poorly. But no one teaches us how to be effective managers and leaders in academia. We have an advantage, though - working in research collaborations has taught us the advanced management skills, but not the basics.
This newsletter focusses on providing new and experienced research computing and data managers the tools they need to be good managers without the stress, and to help their teams achieve great results and grow their careers.
Jobs Leading Research Computing Teams
This week’s new-listing highlights are below in the email edition; the full listing of 54 jobs is, once again, available on the job board.
Associate Director, Research Software Engineering - Princeton University, Princeton NJ USA
The Research Software Engineering (RSE) Group, located institutionally in Princeton Research Computing but extending across campus, is hiring an Associate Director of Research Software Engineering. You will report to the Sr. Director of Research Software Engineering. In this position, you will build and lead a growing team of Research Software Engineers who provide dedicated expertise to researchers to create the most efficient, scalable, and sustainable research code possible to enable new scientific and scholarly advances. You will have the opportunity, and be encouraged, to bring new initiatives, technologies, and/or approaches to the RSE group and Princeton Research Software Community.
Head of Department of Informatics - Kings College London, London UK
Located in the heart of Central London on our Strand campus, the Department of Informatics within the Faculty of Natural, Mathematical & Engineering Sciences is a vibrant hub for interdisciplinary research and education of the highest quality. The department is in an exciting position, having grown by 30% over the last four years to 81 academic staff, with a student population of 1,500 undergraduates, 300 postgraduates and 150 PhD students. The successful candidate will have a substantial track record in research, teaching and professional achievement within an Informatics or Computer Science discipline, and evidence an aptitude and motivation towards inclusive leadership.
Data Engineering Manager - The Weather Company (Weather Underground, The Weather Channel), Remote USA
The Weather Company is the world’s leading weather provider, helping people and businesses make more informed decisions and take action in the face of weather. Together with advanced technology and AI, The Weather Company’s high-volume weather data, insights, advertising, and media solutions across the open web help people, businesses, and brands around the world prepare for and harness the power of weather in a scalable, privacy-forward way. The Weather Company is looking for a Data Engineering Manager to join our team. The ideal candidate for this position must possess solid experience in implementing high availability and scalable cloud solutions, showcasing their proficiency in ensuring reliable and efficient system performance. Additionally, they should demonstrate a strong understanding of data structures and algorithms, enabling them to design and optimize systems effectively. Proficiency in both functional and object-oriented programming languages and techniques is essential, as well as a solid grasp of concurrency and concurrent programming techniques. Furthermore, the candidate should have a thorough understanding of distributed computing techniques and the ability to operate with data on a large scale, ensuring the smooth functioning of complex systems.
Software Development Manager, Data Pipeline for Generative AI - Autodesk, Boston MA USA
As a Software Development Manager for the AEC Solutions group, you will lead your team to build data pipelines to power our work in artificial intelligence, deep learning, generative AI, machine learning, reinforcement learning, information retrieval, and natural language processing. You will coach your team to collaborate across organizations with a versatile group of AI Researchers, ML Engineers, Software Architects, and Experience Designers that are building generative AI tools for the AEC industry. You'll also help ensure that we maintain privacy and security standards respecting and safeguarding our customers' data.
Artificial Intelligence/Machine Learning Manager - HP, Spring TX or Boise ID or Palo Alto CA USA
As the AI/ML (Artificial Intelligence/Machine Learning) Manager, you will be responsible for leading a team of AI/ML engineers and data scientists to develop, deploy, and maintain cutting-edge AI/ML solutions. You will collaborate with cross-functional teams to identify business opportunities, define AI/ML project goals, and drive the implementation of innovative AI/ML solutions to solve complex business challenges. Additionally, you will stay abreast of the latest advancements in AI/ML technologies and industry trends to ensure the organization remains at the forefront of innovation.
Applied Research Manager, Computer Vision - Runway, Remote USA
We’re looking for a Research Director to help us manage and grow Runway’s Applied ML Research team to create cutting-edge machine learning tools for AI-based content creation. This person should have a strong research background in computer vision, and be deeply interested in managing and enabling other researchers to develop state-of-the-art research.
This is a cross-functional role that requires strong communication and collaboration skills; you will have a chance to work with the rest of the engineering organization to oversee the transition from research into production, as well as with the Product team to initiate and develop research projects that deliver clear user value.
Program Manager, Digital Health and Discovery Platform - University Health Network, Various CA
The DHDP is a scalable, multi-use platform to digitally-enable national and international collaboration to advance next-generation precision medicine technologies. The DHDP is funded ($49M) from the Government of Canada’s Strategic Innovation Fund Stream 4 program. The DHDP will apply state-of-the-art data governance principles and technology to transform collaborative research and stimulate commercialization from home grown research discoveries. We seek a highly motivated, collaborative, and knowledgeable professional who will be a key member of the team serving as the link between all operational areas of the organization. The Program Manager will contribute to the overall success of the DHDP through effective planning, execution and follow up of DHDPs organizational strategies, policies and administration.
Manager, Data Engineering - Royal Bank of Canada, Toronto ON CA
RBC Canadian Banking Operations (CBO) Data and Analytics Center of Excellence (COE) is hiring a Manager, Data and Analytics COE that will lead the enhancement of our data foundation through the engineering of critical data pipelines supporting strategic insights for CBO using cutting edge data capabilities.
Engineering Manager - Platform Management - SensorUp, Remote CA
SensorUp, the global leader in enterprise software for methane emissions Measurement, Reporting, Verification, and Repair (MRVR), is serving the needs of oil and gas companies worldwide. SensorUp is seeking an exceptional and passionate Engineering Manager for the Platform team, providing technical leadership, mentorship, and management to a team of talented engineers. The mandate of the Platform team at SensorUp is to build and operate an internal developer platform for software delivery in a cloud native environment. In this role, you will have the opportunity to partner closely with our cross-functional Engineering organization to identify, build, and scale services that make our technical infrastructure practical and accessible for our developers. As an Engineering Manager, you will push SensorUp’s technology and people processes forward as we expand our company. This role will report to the Director of Engineering.
Machine Learning Research Team Lead - Borealis AI, Toronto ON CA
Borealis AI is looking for a Research Team Lead. The role includes managing a team of researchers, and research engineers with the goal of delivering AI-based products for the financial services industry. In addition, you will lead initiatives that improve the effectiveness of the research organization.
Applied Machine Learning Manager - Microsoft, Redmond WA USA
We are seeking a dynamic and experienced Applied Machine Learning Manager to lead Machine Learning (ML) efforts to detect cybersecurity threats at scale within Email. This role involves overseeing the implementation and maintenance of our Machine Learning stack and pipelines, understanding how ML is being deployed and developed impacts customer protection, and managing a diverse team of Data Scientists, ML engineers and Software developers.
Head, Data Science Training & Consultation - Stanford University, Stanford CA USA
This role is responsible for broad support for quantitative, computational, and algorithmic analysis of research data, including aspects of data management, analysis methods, workflow reproducibility, and ethical considerations. The Head of Data Science Training & Consultation continues a long tradition of university-wide service for data-driven research scholarship, while growing Research Data Services' capacity as a hub for digital and computational research support. In collaboration with other members of Stanford Libraries, the successful candidate will support this mission through campus-wide outreach efforts as well as the development of Library services that address the growing need for support of data science methods.
Director of Research Data Science - Stanford University, Stanford CA USA
This position is part of a new initiative incubated within Stanford Data Science, part of the Vice Provost for Research / Dean of Research. Stanford Data Science (SDS) is a dynamic and rapidly growing unit within the VP/Dean of Research. For more than four years, SDS has sought to advance data science and its application to all fields of study. Our community ranges broadly across all seven schools on campus, consisting of esteemed alumni, world class faculty, post-doctoral fellows and PhD students, dynamic staff and administrators. In realizing our mission, our staff are critical to supporting our organization’s goals and enabling Stanford faculty and students to accomplish their mission conducting cutting-edge research and innovation around how we learn from data, the tools we use, and the new methods needed to tackle the data-intensive future.
R&D Manager - Commercial Quantum Computing - Quantinuum, London UK
Our team of scientists is leading the way in the development of quantum computing. As the world's largest integrated quantum computing company, we united Cambridge Quantum's advanced software development with Honeywell Quantum Solutions' high-fidelity hardware to accelerate quantum computing. With full-stack technology, we're scaling quantum computing and developing applications to solve the world's most pressing challenges. We could be even better with you! At Quantinuum, we believe quantum information systems will revolutionize the way we work and live. We are leading the way by helping our customers develop quantum-enabled solutions that provide a competitive edge in their markets.
Centre Manager - Centre for Doctoral Training in Safe AI - University of York, York UK
We are seeking a pro-active, effective, and reliable individual to lead the operational delivery of the new UKRI AI Centre for Doctoral Training (CDT) in Lifelong Safety Assurance of AI-enabled Autonomous Systems (SAINTS). SAINTS is a prestigious new centre, funded to train 60 PhD students to become future leaders in Safe AI, whether in industry, academia, policy or regulation, and brings multiple disciplines together. Year 1 students will arrive in York in September 2024, and their application journey has already begun.
AI Institute Training Manager - University of Surrey, Surrey UK
An exciting opportunity for a Training Manager within the new Institute for People-Centred AI at the University of Surrey. We’re embarking on a new chapter for Artificial Intelligence, putting people at the heart of AI and augmenting human capabilities to deliver an inclusive and responsible force for good.
Research Manager (Digital Twin, AI, IoT) - MBN Solutions, London UK
MBN Solutions have been retained by this company to bring on board an experienced Research Manager to join their Digital Twin Research Group. They are committed to combining research and innovation to transform businesses, lives and society.
Software Engineering Manager - Trilateral Research, London UK
At Trilateral, the Software Engineering Manager is the trusted voice by business and technical teams for modern and efficient ways of working. They will have a critical role in enabling and delivering ethical AI SaaS solutions for Trilateral for a dedicated product. They will supervise a team of engineers who develop, modify and create and test our SaaS solutions. This is inclusive of coordinating production systems, quality assurance, maintenance and testing. They will support and ensure we are working in an agile manner, building a community of autonomous, customer value-driven teams that are solution orientated.
Data Platform Manager - St Vincent's Health Australia, Sydney NSW AU
We have an amazing opportunity for a Data Platform Manager to join our Data Governance & Analytics team within Digital & Technology. As Data Platform Manager, you will be critically important in the strategic direction, scaling, design, development, and operation of Enterprise Data Platform that includes the Azure Data Environment and Power BI Analytics Platform. You will spend your days leading a team of data engineers and data analytics specialists, working to understand data use cases in the business; then, design, build, and deploy scalable data products for internal and external stakeholders.
Manager, Research Scientist - Generative AI - Meta, Seattle WA USA
The Generative AI Org at Meta is seeking a strong technical leader to join our team and work on the next generation of LLM. As a technical leader of our team, you will play a critical role in building Meta AI. We have the opportunity to help people everywhere get stuff done better and faster.
Principal Quantum Engineering Manager - Microsoft, Santa Barbara CA USA
As a Principal Quantum Engineering Manager, you will be responsible for managing a diverse team of specialists working on Computer-Aided Design (CAD)/Layout, radio-frequency simulations, optimal control, and thermal management. You will present recommendations to the team regarding optimal designs that maximize readout and control efficiency for quantum chips. You will communicate with the broader quantum team spanning multiple locations worldwide and integrate their feedback into chip design. Additionally, you will contribute to decision making and strategy for the work undertaken by Microsoft Quantum.
Director, Strategic Data Analytics - Abbott, Alameda CA USA
This position will oversee the data strategy, insights, and storytelling across the customer journey and marketing touchpoints, initiatives, and enterprise objectives. This individual will be responsible for looking end-to-end across our data ecosystem, enabling new capabilities, & managing ongoing operations. This leader directs a team that helps define, build, and optimize the connected data portfolio, acting as a central management function that operates across multiple teams, including quality, marketing, finance, commercial teams, strategy, etc.