“Contact us for pricing”.
Say we’re looking at a potential new tool, for us or the team as a whole. When we want to learn about whether it’s feasible as a solution to our needs, we see “Contact us for pricing”. Or occasionally, even for information about the product itself, “Contact us for details.”
“Fantastic!”, we think. “After a few emails and a call or two, not only will we get the information we want, maybe we’ll make a new friend!”
We of course do not think this, because no human being has ever thought this. More likely, we reasonably view it as an imposition on our time, an artificial barrier to getting the information we want. Most of the time we likely bounce, never looking at the page again. Sometimes, we may reluctantly agree to the process, and trepidatiously fill out the form. Perhaps we use a burner or obscured email address, perhaps we don’t. Either way we kind of resent it.
Because it’s not rocket surgery, right? When we’re looking for services or products, we want to know as precisely as possible what the service or product will do for us, and what it will cost us. Cost meaning both money and, most valuable of all, our time.
For complex services, we realize we can’t know everything up front, but we still want to have a sense of the process and the timeline. If I leave a message for this plumber, will they call back in an hour or in a couple of days? Will they come to check out the problem tomorrow or a week from now? Will I have to be prepared for their visit at home for an hour, or the whole day? What are the steps going to be? If they won’t tell me up front, I will absolutely choose someone who will. And if it will take a month, well to heck with that, may as well watch some videos online and give it a shot myself.
So we know this, and yet at some level many research computing and data teams behave like they don’t really believe it applies to them.
When researchers come looking at a research computing & data team’s web page, they have some pretty basic things they want to know - what services do they offer? I have this problem - can they help me with it? Have they successfully solved problems that I view as similar in the past? If so, what is the process? What will it take of me, in terms of my and my group’s time, and expertise, and grant money? Am I looking at waiting an hour, a day, a month, a quarter?
These are not unreasonable things to want to know!
Have you ever visited a restaurant webpage and needed like 4 or 5 clicks to get to the menu and their hours? If the restaurant took the menu off the website entirely and you instead had to file a ticket so you could ask specifically if they made spaghetti carbonara, that’s what most research computing centre websites are like for researchers.
And it hasn’t gotten better, even for very service-oriented groups like custom research software development teams. “Contact us for details on the product” would actually be an improvement; more often it’s “contact us to find out what we can do for you in the first place”.
These teams commonly feel discomfort at some level with thinking of themselves as a service organization. “We’re more like collaborators”, goes the thinking, rather than something as grubby as a “vendor” that solicits and works with “clients”. Collaborators don’t tout something as tawdry as a list of service offerings.
Such teams are reliably puzzled by how little is known about them in the larger organization, outside of the usual suspects of the teams they already work closely with. “I can’t believe they didn’t know we could do that for them…” - but how would they know, exactly? Where does it say that?
Starting to view our websites and communications as we would if we were looking for a service or product is an important first step towards growing our impact and supporting research more effectively. Starting to build a “service catalogue” or “product list” may seem like a big deal, but getting started is easy. All it takes is to pick one process your team gets a lot of requests about, one your team does often enough to have an informal “usual way”. Put something together. One page with the overall process, initial timeline, and some examples of successful outcomes for that particular thing you do - that’s all it takes. You’ve started. That one entry can clarify, both to clients and internally, what that process and offering actually is. You can iterate on the page, an SOP can be assembled, and the catalogue can slowly grow outward from there.
Have you seen teams lacking this approach, or worked in one? How did you see them start developing and sharing some clarity about what they offer researchers? Was any particular nudge, from inside or out, especially helpful? Let me and the community know - reply or email to firstname.lastname@example.org.
For now, on to the roundup!
How To Do Less - Alex Turek
Four Steps to Organizational Change Without the Drama - Deiwin Sarjas
Turek walks through the steps of digging yourself and your team out of a hole via absolutely ruthless prioritization - picking exactly one priority and only advancing that goal. That means advancing it either directly through work on the priority, or through making work more effective by changing how and what work is being done.
The hardest part of doing this is the communications with others, and not letting yourself be sidetracked or buffeted by external requests to dilute your efforts. He helpfully gives useful lines to use, both for management and stakeholders:
- “Right now our priority is MAIN_PRIORITY, so we aren’t going to do it until at least [...]. ONE_SENTENCE_JUSTIFICATION, and this is 100% our shipping focus.”
- “I agree this sounds like a really useful feature - once we finish MAIN_PRIORITY, should we consider dropping SECOND_PRIORITY and do it?”
and to the team, after you’ve discussed the new approach with them:
- “Is this a requirement or a nice to have? Will it speed up MAIN_PRIORITY, that we all said we’re going to work on?”
- “What if we don’t do this? What can we do without it?”
- “I think we can finish [...]”
- “Can we put this onto our (New Feature/Maintenance) Roadmap after our current priority finishes?”
Turek also closes out his article with, almost in passing, an observation I don’t see made often enough. Small teams are often doing “iterate” work, making small changes rapidly to get feedback, and/or “invest” work, where there’s a big push needed to get a single something done. Standing up a system or creating an initial MVP is “invest” work, as is paying off a tranche of technical debt. New features/services or bug fixes/service improvements are “iterate” work. Turek says that at any given moment a small team should be in one or the other - trying to do both simultaneously squanders effort and confuses communications.
Sarjas also emphasizes the importance of communications when making a change (echoing Lara Hogan’s “Don’t YOLO the Comms”). He recommends an outline and process - Situation, Task, Intent, Concerns, Calibrate, or STICC - to organize the communications, and a roll-out plan with the example of promoting a team member to be a manager. You want communication of a change to result in people saying “Makes sense” and not “Wait, What!?”. Being thoughtful in your communications, keeping that goal in focus, can reduce unnecessary drama around big changes. Some drama may happen, and may even be necessary! But you don’t want to contribute to it needlessly through poor communications. As covered way back in #34, good leaders reduce chaos, they don’t add to it.
Making The Leap from Individual Contributor to Engineering Manager - Natalie Rothfels & Doa Jafri, Reforge
How Engineering Managers Are Set Up To Fail - Pat Kua
Speaking of promoting a team member to manager - Rothfels and Jafri talk about the challenges faced by an individual contributor taking on their first management role, and the skills they need to learn. I like their discussion of these skills, binned into five categories:
These are a lot of big skills to learn! New managers or team leads need a lot of support as they go through these changes. On the other hand, Kua talks about the main ways he sees managers set up to fail. They are:
Honestly, the above describes the situation of pretty much every new manager in academic research computing, and many outside of academia.
Were you given any support, training, help, or peer support in your first research computing lead or manager job? What worked for you and didn’t? Let me know!
There are absolutely people in jobs that make unreasonable demands of them, whose job will chew them up and spit them out with burnout. Cf. the last two years in healthcare.
And, in research computing, I see a lot of smart high achievers choosing to take personal responsibility for unfeasible amounts of accomplishment with far too few resources, and for pushing ambitious agendas through organizational structures that are at best unsupportive and at worst actively opposed.
When internal expectations and external possibilities are unrealistically mismatched, people can burn out. It’s not necessary, but it happens all the time. Janssen pushes us to take a cooler look at what’s possible, what we want, and whether the two are aligned.
To do that requires a gimlet-eyed look at the situation and organization you’re part of. Sridharan has good advice about knowing how your organization actually works and what it actually values. Knowing this will tell you how to effectively pitch projects, how to build relationships with teams you need to collaborate with, and how to demonstrate success so that you can get resources for the efforts you want to advance.
In the last article, Whiles talks about the importance of taking some time out of your day from work tasks to do exactly the kind of learning Sridharan advocates for. Developing organizational knowledge, learning new skills, and building cross-organizational relationships.
Is the technology you use boring enough?
I really like Dan McKinley’s 2015 talk, Choose Boring Technology, especially the bit where he recommends frugally and reluctantly allocating “innovation tokens” to use in part of a solution. Using shiny newness is expensive. It means constantly fighting against the unknown and solving problems you didn’t know you were going to have. It’s swimming upstream.
This is especially true in research computing! The researchers are solving a new problem, using a new technique. That means the underpinnings should be rock solid and reliable. If there’s something surprising in a result, is that due to intriguing new behaviour in the system being studied, or to brokenness in the researcher’s new method? Either way, it shouldn’t come from new libraries or systems you’ve unnecessarily provided because you were really excited about using them. That stuff should be capital-B Boring wherever at all possible.
Leroux provides a checklist to ensure that the technology you are choosing is boring enough. Less cheekily, it’s a way of highlighting common downsides of choosing a new and cool-looking tool that doesn’t have the history, documentation, and well-understood workarounds of a solution that’s been around the block.
Cranshaw’s article describes the database choices made over time for powering Tailscale. Their approach goes even beyond “boring”, having a very strong preference for “as simple as possible”. It turns out that’s frequently simpler than you’d think! For 18 months, and a few orders of magnitude in growth, their “database” was a JSON file. Then they used etcd. They are now powered by sqlite, using litestream for streaming replication. As you know, this is principally a sqlite/embedded db fan newsletter, so Cranshaw’s article getting mentioned was a foregone conclusion.
Defensive Programming and Debugging - Geert Jan Bex, Mag Selwa, Ingrid Barcena Roig
This is an online book to go alongside a PRACE MOOC on defensive programming and debugging specifically for C/C++/Fortran HPC-type codes. The five-week MOOC has a very modest cost, but the book is completely free. It covers coding style, error handling, unit testing, using the compiler to help, and debugging tools and techniques. The book is a work in progress and they’re looking for contributions.
In A style guide for creating new backlog tasks, the author recommends a couple of simple and clear style guide recommendations for new backlog tasks. The first is that the title should be an outcome-focused, actionable sentence (“Update copy on signup screen”, “Investigate ways to increase speed of CI for each build”). The second is to keep the description relevant to the audience the ticket is for (e.g. distinguish between a product backlog and a developer’s ticket), and keep it clear.
What else should go in a backlog style guide?
A plea to make units explicit in parameter names. One of the few things I learned in my physics training that I still routinely use is dimensional analysis. Units matter!
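As a minimal sketch of the idea (the function and parameter names here are hypothetical, not from the linked article), putting the unit in the name makes every call site dimension-checkable by inspection:

```python
# Hypothetical example: the unit lives in the parameter and return names,
# so a reader can dimension-check the call site at a glance.
def backoff_delay_seconds(attempt: int, base_delay_ms: float) -> float:
    """Exponential backoff: base_delay_ms doubles each attempt; returns seconds."""
    delay_ms = base_delay_ms * (2 ** attempt)
    return delay_ms / 1000.0  # the ms -> s conversion is explicit and visible

# The ambiguous version to avoid: backoff_delay(attempt, base_delay) - ms or s?
```

Convert once at the boundary and keep a single unit internally; the names do the dimensional analysis for you.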
I’m fascinated by where the typing work in Python is going, and in particular Variadic Generics, aimed specifically at NumPy. Making the type and shape (not just the rank!) of an array part of its type signature allows static checkers to go Fortran-style deep into function calls, which is amazing. That connects to the new PyData Array API standard to allow some pretty cool things (including GPUs, yes, but also other accelerators).
C++20 range composition with the pipe operator is also pretty exciting! Functional programming-like composition in C++!
Threat Alert: First Python Ransomware Attack Targeting Jupyter Notebooks - Assaf Morag, Aqua
GitLab addresses critical account hijack bug - Adam Bannister, Daily Swig
Two serious software vulnerabilities in packages relevant to research computing teams - self-hosted GitLab, and Jupyter notebooks.
A lot of RCD teams have Jupyter notebooks set up for labs with inadequate authorization, because it’s just a notebook, right? The ransomware attack is a surprise - I would have just assumed someone would do crypto mining. The Aqua team set up a honeypot and watched an attack in progress; read the article if you want the details.
The GitLab bug involves accounts registered via OmniAuth providers, for which a hardcoded password was set, allowing those accounts to be taken over.
Putting Composability Through the Paces on HPC Systems - Jeffrey Burt, The Next Platform
Will Open Compute Backing Drive SIOV Adoption? - Daniel Robinson, The Next Platform
Burt reports on two efforts in Texas, where Liqid (for NSF’s ACES) and GigaIO (for a Lonestar6 pathfinder) are building composable computing prototypes or testbeds. These are disaggregated systems of network, compute, storage, accelerators, and persistent memory which are tied together by extremely high speed fabrics, so that you can compose combinations of them as needed. Kind of software defined nodes.
The idea here is to have cloud-like flexibility in instances. Rather than having a whole bunch of nodes of one or a small number of kinds that users have to figure out how to fit to their job, the idea is to fit the hardware to the workflow at runtime.
This has been tried before - Infiniband was partly conceived as an off-box PCI, decomposing server hardware onto an IB fabric. But networks are now much faster than they were in 1999, when 8Gb/s was a big deal but clock speeds were already reaching today’s plateau. If this ended up being a feasible approach, it would make on-premises systems much more attractive for the highly-mixed workflows of research computing.
Related to these efforts is device virtualization. Robinson’s article talks about Intel and Microsoft’s Scalable I/O Virtualization (SIOV) specifications, donated to the Open Compute Project, for faster and more scalable virtualization of devices. If you have some accelerator or persistent memory device on one of these composable systems, you may well want to allow two different ‘nodes’ to pick them up, with two different virtual machines running on them. If that’s the case you’ll need some way to partition them (which is an internal-to-the-device problem) and expose them as separate devices (which is external). That then allows them to be shared in a way that’s less prone to noisy neighbour problems, and better for security. It’s that second issue that SIOV, and the current standard for PCIe devices, SR-IOV, aim to solve.
Well that should do it: RFC 9225, released on Friday - “Authors MUST NOT implement bugs”.
Someone brought up OpenZiti, a suite of libraries and tools for zero-trust (ZT - get it?) overlay networks and tunnelling. Do you know anyone who’s used it in a research computing context, presumably for sensitive data and systems? I’d love to hear of any use cases.
Run Doom in your Grafana dashboard, for some reason.
Add an sshd to your secure web server and proxy, for some reason.
This is interesting - Amazon has a postdoctoral science program.
A deep dive into unwinding a stack and frame pointers using the Linux kernel community’s new ORC debug format.
A song composed of nothing but dial-up modem samples.
If your work uses Google Workspace for mail/calendar/etc, you’ll now have Calendly-style booking options built in.
Fascinating network debugging story within AWS that seems to be a Chef or Linux issue but ends up being about MTU discovery and AWS Transit Gateways.
Something I hadn’t really considered before is how Python’s bytecode makes possible a number of very accessible introductions to compilers. Here’s a tutorial on building control-flow graphs from a subset of Python.
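For a flavour of why this is so approachable (a sketch of mine, not taken from the linked tutorial): the standard-library `dis` module hands you the compiled instruction stream, and basic-block boundaries fall straight out of the jump instructions:

```python
import dis

def absval(x):
    if x > 0:
        return x
    return -x

# dis exposes the instruction stream of the compiled function; a CFG builder
# would start a new basic block at each jump instruction and each jump target.
ops = [ins.opname for ins in dis.get_instructions(absval)]
jumps = [op for op in ops if "JUMP" in op]  # the conditional branch shows up here
```

The exact opcode names vary between CPython versions, but the structure - one conditional jump splitting the function into blocks - is the same idea the tutorial builds on.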
And that’s it for another week. Let me know what you thought, or if you have anything you’d like to share about the newsletter or management. Just email me or reply to this newsletter if you get it in your inbox.
Have a great weekend, and good luck in the coming week with your research computing team,
Research computing - the intertwined streams of software development, systems, data management and analysis - is much more than technology. It’s teams, it’s communities, it’s product management - it’s people. It’s also one of the most important ways we can be supporting science, scholarship, and R&D today.
So research computing teams are too important to research to be managed poorly. But no one teaches us how to be effective managers and leaders in academia. We have an advantage, though - working in research collaborations has taught us the advanced management skills, but not the basics.
This newsletter focusses on providing new and experienced research computing and data managers the tools they need to be good managers without the stress, and to help their teams achieve great results and grow their careers.
This week’s new-listing highlights are below; the full listing of 161 jobs is, as ever, available on the job board.
Technical Lead - DataCite, Remote EU timezones
DataCite is a leading global non-profit organization that provides persistent identifiers (DOIs) for research outputs. DataCite was founded in 2009 to support the research community in identifying, locating, accessing, and citing research outputs with confidence. DataCite develops services that enable easier access to research and provide a way for researchers to share and get credit for the outputs they generate. Work closely with Product Designer and Software Engineering Manager to architect and develop new features and services, from conception to launch. Be responsible for the overall systems development life cycle including the design, development and maintenance of discovery and harvester services. Investigate design approaches, prototype new technology and evaluate technical feasibility.
Manager, Data Science - Hamilton Health Sciences, Hamilton ON CA
CREATE’s mission is to invent the future of health care. We use machine learning, artificial intelligence and novel digital health applications to improve patient care and enhance medical research. Our team has two major areas of strength: Data Science (applied machine learning/artificial intelligence) and Digital Health (the creation of new digital health applications in partnership with clinicians and researchers). The role of the Manager, Data Science is to manage the full lifecycle of data science projects, from initial planning to deployment to operations. This includes project scoping, overseeing scientific activity, and resolving issues with stakeholders and customers. The Manager will be responsible for leading the Data Science team and contributing to the strategy and business development of CREATE. The manager will have the opportunity to contribute as a coauthor and primary author on scientific papers.
Program Manager, Confidential Computing - Microsoft, Cambridge UK
Microsoft Research Cambridge (UK) is looking for a Project Manager in confidential computing, to join the team building and maintaining the Confidential Consortium Framework (https://github.com/microsoft/CCF), as well as services making use of it. You will work closely with researchers and engineers of the Confidential Computing group and collaborate with other technical and business units within Microsoft in general. The work will be done in the context of Microsoft product group collaborations, such as teams responsible for Azure Confidential Ledger (https://azure.microsoft.com/en-gb/services/azure-confidential-ledger/), as well as entirely novel developments.
Head of Research Software Engineering - University of Manchester, Manchester UK
You will develop and grow an established Research Software Engineering team; you will oversee the wellbeing and development of the people in the team, providing leadership, mentoring, guidance and direction; and you will, with a group of senior RSEs, collaborate directly with researchers at all career stages to specify, cost, develop and maintain bespoke research software required for their research. You will also oversee the development, operation, and maintenance of our research applications portfolio. And you will work collaboratively with other areas of Research IT, IT Services and senior University stakeholders to define and manage the portfolio of services offered by Research IT, and the funding models to underpin them.
Senior Research Software Engineer, Confidential Computing - Microsoft, Cambridge UK
Microsoft Research Cambridge (UK) is looking for a Senior Research Software Engineer in confidential computing, to join the team building and maintaining the Confidential Consortium Framework as well as services making use of it. Much work is done in the open on GitHub, and is designed to remove Microsoft from the trusted compute base. You will work closely with researchers and engineers of the Confidential Computing group and collaborate with other technical and business units within Microsoft in general. The engineering will involve advancing the state-of-the-art in confidential computing methods by taking advantages of new hardware platforms for real-world applications. The work will be done in the context of Microsoft product group collaborations, such as teams responsible for Azure Confidential Ledger (https://azure.microsoft.com/en-gb/services/azure-confidential-ledger/), as well as entirely novel developments.
Lead Data Scientist - Digital Solutions - Johnson & Johnson, High Wycombe UK
The Lead Data Scientist - Digital Solutions is responsible for the design, development, programming methods, processes, and systems to consolidate and analyze unstructured, diverse “big data” sources to generate actionable insights and solutions for Quality teams across the Quality organization, as well as building and expanding Data Analytics capabilities. Together with the business partners they develop and deploy applications in new areas/sites, applying and inspiring the adoption of advanced data science and analytics. They develop software programs, algorithms, and automated processes that cleanse, integrate, and evaluate large data sets from multiple disparate sources. They identify and communicate meaningful, actionable insights from large data and metadata sources to Quality Subject Matter Experts. They select, develop, and evaluate personnel, ensuring the efficient operation of the function. They drive the application of predictive, preventive, and proactive data and analytics applications at the site Quality Teams in collaboration with the Data Analytics team and under the data architecture designed by the Quality & Compliance and Supply Chain Digital Strategy.
Enterprise Architect - Infrastructure & High Performance Computing - AbbVie, San Francisco CA USA
AbbVie (NYSE:ABBV) is a global, research-based biopharmaceutical company formed in 2013 following separation from Abbott. Facilitates the advancement of an enterprise strategy and architecture for AbbVie’s High Performance Computing capabilities, including alignment of network, compute, storage and containerization strategies. The role provides enterprise architecture leadership for AbbVie’s High Performance Computing strategy, ensuring alignment with Business Technology partners, key business stakeholders and major industry partners. The role will drive execution with infrastructure peer organizations for new technology solutions necessary to deliver public and private cloud High Performance Computing capabilities.
Senior Program Manager, Azure HPC & AI - Microsoft, Redmond WA USA
As Azure’s portfolio of solutions for scalable HPC and AI workloads expands and evolves, we are looking for a program manager to help lead our current and next generation of HPC cluster and supercomputer VM products for HPC workloads (e.g. CFD, FEA, molecular dynamics, cryptanalytics, genomics, proteomics, risk analysis, rendering, high energy physics, energy research and exploration, weather and climate modeling, chemistry, quantum chromodynamics, high performance data analytics/machine learning, etc.)
Quantum Research Scientist, Software - AWS, Pasadena CA USA
The AWS Center for Quantum Computing (CQC) is looking for a research scientist to join our growing software team. You will work closely with our physics team to enable their work measuring novel quantum devices. You will also work cross-functionally with our fabrication and design teams to ensure they have access to the design data they need. Responsibilities include building tools to automate the calibration of quantum systems, building control software to collect data from scientific instruments, and mentoring other scientists to actively contribute to the codebase.
Research Computing Lead - Garvan Institute of Medical Research, Sydney NSW AU
The Research Computing Lead (system and cloud administration) provides executive leadership and management for the design, planning, and implementation of research computing services. A primary focus for the next two years will be driving the migration from on prem to cloud computing. The role requires a depth of technical knowledge, demonstrated capacity to guide a team and engage researchers and staff to provide high-level technological support in alignment with Garvan’s strategy. Primary responsibilities include: further development of Garvan’s cloud computing strategy, managing a decrease in on prem footprint, supporting researchers through the migration process, contributing to national genomics infrastructure in collaboration with external researchers, and horizon scanning to identify opportunities for improvement to our computational capabilities.
Associate Vice President for Research Computing - Rice University, Houston TX USA
Rice University invites nominations and applications for the newly created position of Associate Vice President for Research Computing (AVP-RC). Reporting to the University’s Vice President for Information Technology & Chief Information Officer, the AVP-RC provides vision and leadership in the development of computing infrastructure and services that advance the research mission of the University. The AVP-RC oversees the Center for Research Computing and works with the Research Computing Committee, a component of faculty governance. The Center for Research Computing (CRC) comprises one large supercomputing cluster, known as NOTS, and a range of other on-premise and cloud-based private and commercial environments. The Center’s well-regarded technical staff provide expert assistance in accessing and using these solutions.
Research Computing Program Manager, Libraries - Columbia University, New York City NY USA
The Research Computing Program Manager will lead the activities of the Foundations for Research Computing program. The aim of the program is to train Columbia researchers in computational skills and overall computational literacy. As part of the Columbia University Libraries’ Digital Scholarship unit, the Program Manager will advance the program and special events for researchers in close cooperation with other colleagues in the Libraries, Columbia University Information Technology, and the Office of the Executive Vice President for Research.
Manager, Infrastructure and IT Security, Research Institute - McGill University Health Centre, Montreal QC CA
Under the Research Informatics and Information Technology Division Director, the incumbent plays a vital role in collaborating with and providing support to the User community at the Research Institute of MUHC. He or she is responsible for designing, implementing and maintaining cloud services, shared services, and platform solutions to address complex business issues and provide technical leadership for the research community. The ideal applicant will thrive in a highly collaborative workplace and actively engage in the development process.
This is an excellent opportunity for a manager with hands-on experience in cloud computing and platform management and an excellent technical design background, combined with outstanding interpersonal skills.
Technical Project Manager, National Robotics Engineering Center - Carnegie Mellon University, Pittsburgh PA USA
We are looking for a Technical Project Manager to work with a multi-disciplinary project team to execute the scope for one or more Army sponsored research projects. NREC is a leader in robotics technology development and commercialization and serves as the technology transfer arm for the world-renowned Robotics Institute. As a Technical Project Manager (TPM) on selected projects, you will be expected to deliver successful projects while adding new value for our sponsors, licensing NREC and CMU technologies, and generating follow-on business. Excellent project management is critical to the successful transition of robotics technology into the industries that NREC serves.
Scientific Project Manager, EGA - Centre for Genomic Regulation, Barcelona ES
We are seeking an enthusiastic project manager to complement our team. The person will be involved in the management and coordination of partnerships and collaborations in the frame of several International projects in genomics and health. The successful candidate will be coordinating tasks such as the curation of clinical data and the development of EGA services tailored to the needs of each project. They will be scheduling meetings, leading working groups, tracking deliverables, writing reports, and keeping in touch with our research and development teams to measure their progress. We need a person eager to learn, and to push the team forward with good energy. Knowledge in biological science, and more specifically bioinformatics, is highly appreciated. A proactive attitude is absolutely required.
Linux Senior Systems Administrator, School of Geography and the Environment - University of Oxford, Oxford UK
You will be a senior member of the SoGE IT team to ensure all aspects of IT are managed well. Key responsibilities include the design, implementation, monitoring and support of complex services running wholly or partly on Linux platforms. You will be expected to contribute to, or backfill for, projects that the SoGE IT Team are managing or leading. You will be expected to lead on developing well-managed platforms, particularly in relation to configuration management, and monitoring of systems.
Aquatic Informatics - Team Lead, Data Science - Remote - Hach, Various CA
Are you passionate about data science, mentoring others, and research and development? This role is an excellent opportunity to join our Predictive Analytics team to support Aquatic Informatics’ suite of digital twin and decision support products. These core growth products provide municipal drinking and wastewater customers with the predictive and prescriptive insights they need to operate optimally. This leader will have the opportunity to directly contribute and cultivate a dynamic, multi-disciplinary team of engineers and data scientists applying cutting-edge technology to deliver solutions that have a tangible impact on the environment and human health.
Program Manager, Responsible AI Health Research - Google, London UK
Work closely with research scientists, engineers, product managers, and business development to deliver impactful projects with internal and external partners. Lead team planning for a set of established projects in the team’s portfolio, including Objectives and Key Results and longer-term roadmapping. Use technical skills to understand and make project trade offs, prioritize partner needs, challenge assumptions, and evaluate project work and risks. Coordinate cross-organization efforts to improve visibility, including running weekly technical presentations, organizing events, etc. Communicate both vision and details (such as project status and risks) to various levels of the organization; persuade through reasoning and data; and resolve conflicts over goals, priorities, and approach.
Program Manager - Data Sharing Services - AWS, Various EU or UK
You will own a strategy, in liaison with tech teams, on how to develop data spaces that will connect fragmented and dispersed data sets from various ecosystems, from the private and public sectors. You will liaise with highly visible customers and external stakeholders such as Gaia-x, the International Data Spaces Business Association (IDSA), the Eclipse Foundation, Fraunhofer research institutes and more.