Research Computing Teams #148, 3 Dec 2022
Process vs Expectations; Coaching questions practice; UCL ARC always hiring; Invest in uncertainty reduction; Manager career leveling; Product lessons from Amazon Omics; Coping with too many code projects
I was part of a discussion recently about a disengaged team member, and the topic of process vs expectations came up.
I write and talk a lot about expectations and processes. I’m a big believer in both! I think they should both be documented, discussed, and violation of either should result in some discussion.
But they’re not the same, and they’re not substitutes.
I think we’ve all worked in (or heard about) places where there was some restrictive process in place, because “one time, this guy Frank - you weren’t here, you wouldn’t know him - well, he…” did something. This kind of process reaction to an expectation violation is stifling, and it gives process a bad name, because everyone knows it’s a cop-out. The unsuitability of this approach came up in the discussion, and everyone knew the failure mode being described, because it’s common enough to be a trope.
If someone violates some kind of expectation, the solution is almost always to discuss it with that person, and to work with them to ensure that, one way or another, the expectation is met in the future. It isn’t to impose a new process on everyone. The process approach is almost always an avoidance maneuver, an attempt at a technical solution to a people problem. Talking with people about missed expectations is hard, takes time, and doesn’t scale. And it’s the right solution.
There are times when some bad outcome genuinely does highlight a systemic issue, where some kind of change is needed, and not just a discussion. Changes may need to be made to how work is done, or to the supports and resources available. Certainly if a particular expectation keeps being violated by different people, there’s some deeper issue that needs investigation. Some new or changed process might end up being needed here.
But processes describe how something’s done; expectations describe the outcomes activities should have. Process describes the path, expectations are the boundary conditions. Nailing down the path that must be taken when what really matters are the boundary conditions being met is overly prescriptive, takes autonomy away from team members, removes room for creative solutions, and saps motivation.
It’s good and kind and respectful to have and describe high expectations for how your team members operate, and for teams to have high and explicit expectations of how they work together. We all have expectations, and it’s not kind or respectful to let people constantly fail to meet those expectations without letting them know. If you were failing to meet your boss’s expectations, or your team’s work was failing to meet a researcher’s expectations, you’d be mortified to hear that they felt that way and couldn’t be bothered to tell you.
Shared, stated, explicit expectations are how you and team members can be confident they’re doing a good job, and how a culture of trust and collaboration can form within the team. They are how team members know that they can rely on each other. They are the measuring sticks by which we all can measure performance, and growth. We’re all motivated people in this line of work; we want to set high bars for ourselves both technically and as team members, and to clear the bar, and then raise it another notch higher. And on occasions we miss, we want to know it so we can do better next time.
Processes, at their best, are time- and cognitive energy-saving devices. They are a shared way of doing things so that people don’t have to invent their own way to perform common work. They are a known-good set of steps so that routine stuff can get done, and others know what to expect, so that everyone’s mental efforts can be spent focussed on where creativity is valuable.
They can and should work together. There may be a process in place for (say) rotating the running of standup meetings, so that when it’s someone’s turn they know what to do and everyone knows what to expect at stand-up. On top of that there should be expectations around treating each other respectfully during the meeting. There may be processes for submitting code for review before pushing to main, and expectations about acceptable turn-around times and code (and review) quality. There may be SOPs for bringing a new database server online, and expectations around how the appropriate users are notified.
One way or another, processes shape how we interact with tasks, while expectations shape how we interact with people.
In the case of a disengaged team member who wasn’t getting work done in a timely manner, the solution almost certainly isn’t to create process guardrails. There’s clearly an expectation, and the team member isn’t meeting it. If nothing’s being done about the violation of the expectation, is anything really going to be done about violations of a process? Do they know they’re not meeting the expectation? Do other team members share the expectation? Is the violation of the expectation an issue of willingness, confidence, knowledge, or skill? Is it a temporary thing, or has it been ongoing?
The solution to the disengaged team member isn’t an easy or simple one, and it doesn’t involve sketching out a flowchart. It requires developing a strong professional line of communication with the team member, which means one-on-ones if they’re not in place; after that’s established (which will take weeks), it will require feedback and coaching.
The most likely outcome is that over some time the team member improves their performance, because people genuinely do want to meet their team’s expectations.
But the feedback may escalate without improvement, and the team member may end up leaving. That uncertainty, and (justified!) fear of the bad outcome, leads us too often to shy away from this approach.
But it’s not kind to let a team member struggle with work they don’t want to do, and it’s not kind to watch other team members have to do extra work and grow resentful as nothing is done. It’s certainly not kind to impose some kind of process on everyone because someone isn’t meeting expectations.
We don’t have an easy job, but it’s important. Like us, our team members deserve to know when they’re doing well, and they deserve to know when they’re not. They deserve team members they can rely on, and they deserve to hold each other accountable. That’s how we grow professionally, and it’s how our teams grow in impact.
And with that, on to the roundup!
Managing Teams
Six Creative Ways To Use Coaching Questions - Lara Hogan
We’re all experts of one form or another, and it’s sometimes hard not to just blurt out an answer, recommendation, or advice when our team members are having a problem. There’s nothing wrong with doing that when it’s warranted, but in the long term it’s better for both you and the team member if, most of the time, they’re coming up with their own answers.
Hogan points us to her (PDF) list of open questions, and some suggestions for getting into the habit of using them:
- Delivering one at the tail end of giving effective feedback (question, [situation], behaviour, impact, question)
- Redirect the instinct to give advice into a brainstorming session focussed on one of the questions
- Kick off your part of a one-on-one with one
The list of open questions isn’t crucial here - the key is to practice staying in coaching mode by asking open questions. If you do, the team member may come to a good answer themselves. Or, you may find areas where the team member does need some support to be able to tackle the problems themselves; areas where there are gaps in willingness, confidence, knowledge, or skills. Addressing those gaps will be more productive and a longer-term solution than providing answers as required.
At University College London’s Advanced Research Computing team, they’re moving to a model where they have positions (Data Scientists, Data Stewards, Research Infrastructure Developers, and Research Software Engineers) permanently open. If at some point they need something specific, they’ll adjust the appropriate role:
In order to maintain balance within the team, our recruitments may at times require a particular specialism, and this will be noted within the advert.
Do we have anyone from UCL ARC reading, or any other teams who have tried something similar? I’d love to hear the mechanics of how this worked with HR and leadership. Hit reply or email me: jonathan@researchcomputingteams.org
Technical Leadership
Removing uncertainty: the tip of the iceberg - James Stanier
On and off I’ve discussed the importance of managers in reducing uncertainty about processes, people, and products (e.g., #34), but the same is true in our role as technical leaders.
The work we’re doing is almost always uncertain - we’re working in research! By definition, we’re pushing forward human knowledge somewhere it hasn’t gone before, or supporting work doing the same. There may or may not be dragons there, but we know we’re unsure what we’ll find.
Stanier writes in the context of product development, but the principle is very much applicable to our work: make sure that you’re investing up front in reducing uncertainty as well as making forward progress:
You reduce uncertainty by doing: prototyping, designing, writing code, and shipping. Each of these actions serve to reduce the uncertainty about what is left to build.
We’re people of science, and we know all about testing hypotheses. That’s the approach here - test some hypotheses, reduce the amount of uncertainty about the path forward, make forward progress, then test hypotheses again.
Stanier gives some advice about how to do this when there are multiple streams of work: define contracts up front, and mock as much as possible. Some of the specific examples won’t be relevant if you’re not building web applications, but the basic guidance is good - invest effort in reducing uncertainty.
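To make the “contracts up front” idea concrete, here’s a minimal Python sketch (the names and interface are mine, not Stanier’s): two workstreams agree on a small contract, and the downstream team codes and tests against a mock until the real implementation lands.

```python
from typing import Protocol

# The agreed-upon contract: both workstreams code against this interface.
class SequenceFetcher(Protocol):
    def fetch(self, accession: str) -> str:
        """Return the raw sequence for an accession ID."""
        ...

# A stand-in the analysis team can use immediately, before the
# data-access team has built anything real.
class MockFetcher:
    def fetch(self, accession: str) -> str:
        return "ACGT" * 25  # canned data, just enough to exercise the code

def gc_content(fetcher: SequenceFetcher, accession: str) -> float:
    """Downstream analysis code, written and tested against the contract."""
    seq = fetcher.fetch(accession)
    return (seq.count("G") + seq.count("C")) / len(seq)

# Works today with the mock; the real fetcher drops in later, unchanged.
print(gc_content(MockFetcher(), "EXAMPLE-001"))  # 0.5
```

The remaining uncertainty is confined behind the contract, and both streams keep making forward progress in the meantime.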
Managing Your Own Career
As the year winds down, many of us naturally start to do some retrospection — how did things go, how did we do, what should we personally work on next for our development.
Sadly, almost none of us get useful feedback or guidance about this from our own managers/directors. (One of my goals for this newsletter is to support and encourage RCD managers and leads who want to treat their work with professionalism. The hope is that some of you will then choose to go onto more senior leadership positions, and do better for the next generation of managers and leads. Our teams’ work is too important for the status quo to continue).
But, as ever, we can learn from managers and leads elsewhere. This engineering management career ladder from Dropbox, and this one from Stripe (PDF - managerial roles start at level 5), are useful starting places. Some of the evaluation criteria might not be meaningful for your roles. But even deciding in a principled way which ones are and which ones aren’t priorities for you is a useful exercise.
Of the ones that do seem meaningful, how do you feel like you’re doing? What are areas where some modest improvement might have the biggest impact - for the team, or for you personally? (Yes, it’s ok for us to think about ourselves as well as our teams).
Are there areas you’d particularly like to grow in, or hear more about, in the coming year? As always, don’t hesitate to hit reply to send an email just to me, or to email me directly at jonathan@researchcomputingteams.org.
Product Management and Working with Research Communities
Introducing Amazon Omics – A Purpose-Built Service to Store, Query, and Analyze Genomic and Biological Data at Scale - Channy Yun, AWS Blog
AWS re:Invent was this past week, and there were some interesting items for our research computing and data community.
Further down I’ll talk about some technical re:Invent news of interest about compute resources. Here I’d like to highlight some of the product management aspects of a new(ish) set of services, “Amazon Omics”. The components here aren’t entirely new for AWS, but they’ve been stitched together and positioned in a really interesting way that we can learn from.
- The product is for a very specific community: It is absolutely clear who this product is aimed at, and who it is not aimed at. It’s not for life sciences researchers broadly, and certainly not for computational science work in general. It’s for people with lots of sequencing data to process and analyze - people who match that use case will want to find out more, everyone else will continue looking at other offerings. Could others make it work for their use cases? Yes, maybe, but it’s not for them, and AWS will have no hesitation about making it worse for those users if doing so makes it better for the genomics customers.
- It’s a mix of compute and data services: What’s more, there are two (arguably three) different data services. Compute is useless without data, data is useless without compute, and different kinds of data call for different kinds of handling. There are data storage, access, management, and analysis components that comprise Amazon Omics. Databases, object stores, pipelines, and compute. We’re well past the point where a product team targeting researcher use cases can focus on individual pieces of that puzzle.
- There are clear interfaces defined, but the implementation details are intentionally fuzzy: “A sequence store is similar to an S3 bucket….” AWS is exposing a set of interfaces, and as the underlying technology changes, or as they learn more about how the products are being used, they can make implementation changes to improve performance and reduce their costs without users necessarily knowing. (A sketch of this interface-first pattern follows this list.)
- It builds on existing expertise, of both technology and researcher needs: S3 buckets, databases, and AWS Batch/Step Functions are already things AWS has, and has deep expertise in. Similarly, AWS is extremely familiar with how people use their existing tools for genomics work - they’ve collaborated closely with researchers in the past. That existing knowledge, and existing infrastructure, was bundled up into something that will be very compelling for a specific subset of users.
- It addresses a complete workflow: There aren’t gaps here. References and sequences go in, pipelines get run, output goes in something queryable in almost arbitrary ways via SQL. The work doesn’t get stranded at any stage. It’s a complete minimum viable workflow. Further, it’s part of a larger ecosystem of products that can be used to get the data in or post process results.
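On the interfaces point above: this discipline is copyable at any scale. Below is a minimal Python sketch of the pattern (the names are entirely mine, not the Omics API): users depend only on a small published interface, and the backing implementation can be swapped without their code changing.

```python
from abc import ABC, abstractmethod

class SequenceStore(ABC):
    """The published contract - users depend only on these methods."""

    @abstractmethod
    def put(self, read_set_id: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, read_set_id: str) -> bytes: ...

# Today's implementation detail; tomorrow it could be object storage or a
# compressed domain-specific format - users never need to know or care.
class InMemoryStore(SequenceStore):
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, read_set_id: str, data: bytes) -> None:
        self._data[read_set_id] = data

    def get(self, read_set_id: str) -> bytes:
        return self._data[read_set_id]

store: SequenceStore = InMemoryStore()
store.put("readset-1", b"ACGTACGT")
assert store.get("readset-1") == b"ACGTACGT"
```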
By bundling up existing technology pieces, in ways informed by previous collaborations, into an offering that addresses a complete workflow, AWS has made its existing technology offerings significantly more attractive (and discoverable) for a particular group of potential users - users who aren’t interested in choosing and tuning the pieces themselves.
By keeping the services comparatively high level, AWS can make things better (performance and cost) as they attract more usage for this particular use case and see how it works.
Bundling AWS’s technology and expertise up into a specific, complete offering like this means they can get better and better at both attracting and supporting genomics users at the same time. Such is the power of good product management.
Research Software Development
Coping strategies for the serial project hoarder - Simon Willison
This is a talk Willison gave at DjangoCon. It describes his strategies for handling a large number (185!) of projects by himself, but some of the lessons may be very relevant to research software development teams, which often have a small team and a large number of projects they might intermittently be called on to contribute to.
Summarizing it, I see two overarching strategies in the talk.
The first is to keep the work as “stateless” as possible, by making sure everything is self-contained - the projects themselves, and contributions to the project. For him, that means he doesn’t have to keep 185 projects in his head. For us it would make it easier to onboard new people, and to invite others to contribute. He does this by:
- Having commits/PRs change one thing, but include tests and documentation
- Having discussion (even if it’s just with himself) about every planned piece of work take place in an issue (a “lab notebook” for that work), where decisions can be recorded, diagrams can be added, description of work in progress can be added, and - bonus! - everything is date stamped
- Planning for enhancements can live in an issue for a very long time before being acted on - adding discussion to the issue doesn’t necessarily mean that code is imminent
- Everything links back to the issue (and, eventually, the PR or commit)
- Documentation lives with the code
The second is to build tooling and processes to support scaling:
- Include writing about what was done (in a blog post, a twitter thread, release notes or somewhere) as part of the “definition of done”
- Cookie-cutter recipes for new projects, providing a standard way of getting started (including a test suite with documentation testing set up), so that everything’s done well, and in a recognizably similar way, to minimize cognitive load (see the sketch below)
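Willison does this with cookiecutter templates. As a rough sketch of the approach via cookiecutter’s Python API - the template here is his public python-lib template, but the extra_context key is illustrative and would have to match whatever the template actually defines:

```python
from cookiecutter.main import cookiecutter

# Generate a new project skeleton - repo layout, test suite, docs, CI -
# from a shared template, so every project starts out the same way.
# The extra_context keys must match the template's cookiecutter.json;
# "lib_name" here is illustrative.
cookiecutter(
    "gh:simonw/python-lib",  # or your team's own template
    no_input=True,
    extra_context={"lib_name": "my-new-tool"},
)
```

The same recipe can be run from the command line (`cookiecutter gh:simonw/python-lib`), which is how it’s more commonly used.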
Research Computing Systems
An Initial Look at Deep Learning IO Performance - Mark Nelson
Moving HPC applications to object storage: a first exercise - Hugo Meiland, Azure HPC Blog
A lot of systems that were initially architected for typical PDE simulation use cases are now struggling with the very different I/O demands of data-intensive (and in particular deep learning) workloads. Big PDE simulations, after reading in some small input files or maybe a single large checkpoint, are then completely write-dominated, periodically producing a small number of very large outputs, synchronized across nodes.
That’s a hard problem, but it’s a very different hard problem than the I/O patterns of deep learning training workloads, which are constantly re-reading in small distributed batches from large datasets.
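As a caricature of the difference in Python/PyTorch (the file names and sizes are invented for illustration): the simulation does a rare, huge, sequential write, while the training loop generates a continuous stream of small random reads, every step, every epoch.

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

# Simulation-style I/O: rare, large, sequential writes.
def write_checkpoint(state: np.ndarray, step: int) -> None:
    # one big streaming write every N timesteps - what parallel
    # filesystems were architected for
    np.save(f"checkpoint_{step:06d}.npy", state)

# Training-style I/O: constant small reads scattered across a large dataset.
class SampleDataset(Dataset):
    def __init__(self, n_samples: int) -> None:
        self.n_samples = n_samples

    def __len__(self) -> int:
        return self.n_samples

    def __getitem__(self, idx: int) -> torch.Tensor:
        # a small read from one of millions of files, at an effectively
        # random index - hard on metadata servers and caches
        return torch.from_numpy(np.load(f"samples/{idx:08d}.npy"))

# shuffle=True is the crux: every epoch re-reads the entire dataset in a
# different random order, in batch-sized pieces.
loader = DataLoader(SampleDataset(1_000_000), batch_size=64,
                    shuffle=True, num_workers=8)
```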
If you’re already very familiar with I/O patterns of such workloads, Nelson’s article may not teach you much. But if you’re just starting to wrestle with them, Nelson’s explorations here as he tries to tune and reproduce some benchmarks with both TensorFlow and PyTorch will likely be of use.
Maybe relatedly, Meiland’s blog post discusses moving another high-compute, data-intensive workload, Reverse Time Migration, from high-performance cloud POSIX file systems to a Blob (object) store, with drastically reduced storage costs for the runs (although in this case, the cost of the compute vastly outweighs even the filesystem costs).
AWS Tunes Up Compute and Network for HPC - Timothy Prickett Morgan, The Next Platform
This is a good roundup of the HPC hardware side of re:Invent. Here the product management is also relevant, but the main interest is in how AWS picked a particular and at the time quite idiosyncratic approach for large scale technical computing (HPC, AI, and more) and continues to push it forward, with genuinely intriguing results.
My main takeaways are:
- With the tweaked “Graviton3E”, Arm is increasingly attractive for research computing workloads (disclaimer: I have an interest here, as I work at NVIDIA, but between Fugaku and Graviton3/3E, or Oracle Cloud’s Ampere instances, I don’t think this is a particularly controversial take, especially as power use becomes increasingly important).
- AWS greatly advanced the state of the art of hardware offload of networking and the control plane with Nitro, and continues to push that state of the art forward. Now that AWS has demonstrated that even untrusted multi-tenant systems can securely provide bare-metal performance, we can expect to see more and more of this even in on-prem data centres, which (again, I have an interest here) to my mind will make a lot of interesting things possible. Also, have we ever before seen even a cartoon diagram of AWS cluster network topologies?
- AWS continues to double down on its take on HPC networking, different from Azure’s, focussing on driving down tail latencies to a variety of endpoints (storage as well as compute) rather than minimum latencies in (say) ping-pong tests. Whatever one might think about that, it’s fantastic to see a completely different tack being taken than the usual HPC approach, and it’s one which manifestly already works extremely well for, say, single-rack jobs.
Random
There was a lot of AI model news in the past week or two - Stable Diffusion v2 is out, and Midjourney v4 and the latest version of GPT-3 are available. If you haven’t tried them I’d urge you to give them a go: ChatGPT or the Midjourney Discord beta make it extremely easy. This article has an argument I find compelling for why these tools are going to be useful in creative work (including science) in a way they’ve struggled to be in, say, robotics or self-driving cars. Failure-is-ok attempts in workflows where there’s a human in the loop are going to go much better for creative work than in heavy machinery.
Maybe related - “Hey, GitHub” makes GitHub Copilot available via voice for improved accessibility.
The “Sovereign Tech Fund”, funded by the German Ministry for Economic Affairs and Climate Action, is funding development of the Fortran package manager and LFortran. It’s great that these non-traditional bodies (STF and the Chan Zuckerberg Initiative) are funding foundational research computing software tools. But why haven’t research funding agencies caught up?
There’s an Emacs Conference, I guess?
That’s it…
And that’s it for another week. Let me know what you thought, or if you have anything you’d like to share about the newsletter or management. Just email me or reply to this newsletter if you get it in your inbox.
Have a great weekend, and good luck in the coming week with your research computing team,
Jonathan
About This Newsletter
Research computing - the intertwined streams of software development, systems, data management and analysis - is much more than technology. It’s teams, it’s communities, it’s product management - it’s people. It’s also one of the most important ways we can be supporting science, scholarship, and R&D today.
So research computing teams are too important to research to be managed poorly. But no one teaches us how to be effective managers and leaders in academia. We have an advantage, though - working in research collaborations has taught us the advanced management skills, but not the basics.
This newsletter focusses on providing new and experienced research computing and data managers the tools they need to be good managers without the stress, and to help their teams achieve great results and grow their careers.
Jobs Leading Research Computing Teams
This week’s new-listing highlights are below in the email edition; the full listing of 188 jobs is, as ever, available on the job board.
Director of Computational Biology - Isomorphic Labs, London UK or Lausanne CH
This is an extraordinary opportunity to join a new Alphabet company that will reimagine drug discovery through a computational- and AI-first approach. We are assembling a world-class, multi-disciplinary team who want to drive forward groundbreaking innovations. As one of the first members of this pioneering organisation, you will play a meaningful role in building this team, embodying an inspiring, collaborative and entrepreneurial culture. This early-stage venture is on a mission to accelerate the speed, increase the efficacy and lower the cost of drug discovery. You’ll be working at the cutting edge of the new era of ‘digital biology’ and advancing a new type of biotech that will deliver transformative social impact for the benefit of millions of people.
Head of Research Informatics, Kennedy Institute of Rheumatology - University of Oxford, Oxford UK
The Kennedy Institute sits within the major biomedical campus of the University with unrivalled access to patient samples and clinical data as well as cutting-edge genomic and imaging capabilities. Research Informatics is therefore a strategic growth area in the Institute, currently supporting a wide range of research projects. These include clinical studies on knee injuries (KICK), hand pain (HOPE-c) and large collaborations with industry, such as Cartography (Janssen) and STEpUP-OA (Novartis, UCB). To develop this work, we are looking for a leader in the implementation and management of research informatics capabilities across the Institute with a team of informaticians, web developers and programmers. This role therefore provides a unique opportunity to develop novel informatics solutions in partnership with clinical and discovery scientists from the Institute, Oxford departments, academia and also with industry partners.
Senior Research Engineering Manager, Urban Big Data Center - University of Glasgow, Glasgow UK
In this role you will provide strategic leadership in research software engineering, data science, and information governance for the Urban Big Data Centre (UBDC) at the University of Glasgow. The postholder will be responsible for managing a team of research software engineers, data scientists and information professionals in support of UBDC’s national data service and research activities (currently funded by UKRI-ESRC and University of Glasgow) as well as other research activities across the College of Social Sciences (CoSS). In addition, the postholder will be responsible for developing, managing, and implementing the research computing, data, and information services strategies and policies within UBDC. Leading on engagement with key internal and external stakeholders including researchers, data and technology service providers, UKRI and other funders, University services, and College and University senior management.
Principal Research Product Manager - Privacy Preserving AI - Microsoft, Redmond WA USA
At Microsoft, we operate the largest collaboration services in the world with 100s of millions of consumer/enterprise mailboxes, documents, and conversations. It represents the world’s largest platform of human collaboration for personal, business, and educational use. Within our Microsoft wide Privacy and Confidentiality Preserving Machine Learning initiative, we strive to enable the next generation productivity experience while maintaining confidentiality and preserving trust. We are an Applied Research team driving mid- and long-term product innovations. We closely collaborate with multiple researcher teams and product groups across the globe who bring a multitude of technical expertise in privacy-preserving AI technologies; from encryption protocols and hardware to differentially private training and adversarial attacks. Beyond this, our goal is to encourage new innovations which are not limited to a combination of these approaches and strive for making real world impact.
Geospatial Data Science and Applications Group Lead - Idaho National Laboratory, Idaho Falls ID USA
Does a career focused on changing the world’s energy future intrigue you? If so, we might have just the opportunity you’re looking for! The Radiological Security and Mechanical Engineering Department at Idaho National Laboratory is seeking an eager, self-motivated, mid-career geospatial data scientist to be the managerial and technical lead for the Geospatial Data Science and Applications Group. This team works a remote 9x80 schedule located at our Idaho Falls facility with every other Friday off.
Manager, Data Analytics - Brown University, Providence RI USA
The Graduate School seeks a Manager of Data Analytics to oversee the design, production, and analysis of data and reports on graduate students and graduate programs at Brown. The Manager designs an overall strategy and approach to data management and analysis for graduate education at Brown that furthers the mission of the Graduate School. The Manager effectively communicates plans, initiatives, and process changes to support this approach, and coordinates the work of several offices and colleagues to accomplish goals.
Chief of Staff - Center for AI Safety, San Francisco CA USA
The Center for AI Safety is a research and field-building nonprofit. Our mission is to reduce catastrophic and existential risks from artificial intelligence through technical research and promotion of safety in the broader machine learning community. We’re looking for a Chief of Staff to bring organizational and managerial expertise to the executive team. As Chief of Staff, you will work closely with the CEO to set organizational goals and strategy, to implement key internal processes, and to manage and scale our team. Depending on fit, this role may extend to become our Chief Operating Officer (COO).
Principal Software Engineer - Bioinformatics - Roche, Various CA
As a Principal Backend Software Engineer, you are joining a passionate software engineering team to build sequencing products to change patients’ lives. You will design, implement, and test software features & product infrastructure, primarily from a backend perspective while working with cloud technology - AWS, Serverless computing, Java, distributed platform, Spring Boot, and more. Ultimately, the software you produce will impact patient care globally. You have experience building scalable server side applications, have a passion for reliability and security and are curious about the trends in web development. Work with management to set priorities. Excellent communication skills and teamwork is a must!
Lead Software Engineer, Scientific Software Engineering Center - Georgia Tech, Atlanta GA USA
The Scientific Software Engineering (SSE) Center at Georgia Tech—one of the four inaugural Virtual Institute for Scientific Software Centers sponsored by Schmidt Futures—aims to support the development of better quality, more sustainable scientific software. To achieve this goal, the Center will (1) give scientific researchers access to full-time professional engineers and state-of-the-art technology and techniques, (2) support long-term scientific platforms and systems, and (3) encourage best practices in open science. The SSE Center is therefore building a team and is looking for professional software engineers at different levels of seniority who can help it achieve its goal. In this context, the College of Computing is requesting to hire a Lead Software Engineer for SSE. This will be a new position for CoC that will be fully funded by the gift received from Schmidt Futures and that will report to the SSE Center Director. The Lead Software Engineer will work directly with the Director to further the mission of SSE, manage the projects pursued by SSE, lead and mentor the team of engineers working in the Center, and interact with other internal and external stakeholders (e.g., GT scientists and scientists at national labs).
Director of Data Science (Digital Solutions) - Stantec, Winnipeg MB CA
Stantec Digital Solutions is a unique product and services consulting team. We pride ourselves on being customer obsessed and highly focused on Digital Science & Engineering transformation. As a company of 26,000 Scientists & Engineers we have a large internal and growing external customer base consisting of highly technical staff. Your primary focus will be leading a team of data scientists to create value through the development & delivery of data products, built by combining industry-specific feature engineering with Machine Learning (ML) processes. The position calls for using disparate data to develop models that represent physical & natural systems. If you’re eager to apply data science and ML to a diverse array of enterprise Architectural, Environmental and Physical Engineering use cases this is likely the position for you.
Lead Trainer and Curriculum Developer – Open Source Specialism - Center for Scientific Collaboration and Community Engagement, Oakland CA USA
You’ll bring significant, proven experience in delivering training to support POSE grantees in scaling from projects to Ecosystems – which will include topics such as governance, community engagement and organizational roles and team coordination, as well as curriculum development expertise. You’ll be attentive to learners’ needs, responsive in your communications, and take joy in providing challenging, supportive and impactful learning experiences. CSCCE’s teaching philosophy combines practical resources and contextualized examples with space for reflection, experimentation, and collaboration. You’ll be equally comfortable creating structured activities and holding spaces for learners to explore in community with one another.
Assistant Director, Center for Statistics and Machine Learning - Princeton University, Princeton NJ USA
The Center for Statistics and Machine Learning (CSML) was established in 2014 as Princeton University’s hub for education and research activities in statistics, machine learning, and the data sciences. CSML seeks a professional experienced in the research environment to manage the operations and strategic growth of the Center. The assistant director represents the Center and the Director, serving as a liaison both internally and externally. The assistant director reports to the Director, with a secondary reporting relationship to the Senior Manager for Academic Administration in the Office of Human Resources.
Technical Project Lead, HPC Initiatives - Oak Ridge National Laboratory, Oak Ridge TN USA
The HPC Custers Group in the National Center of Computational Sciences Division (NCCS) at Oak Ridge National Laboratory (ORNL) is seeking a highly motivated, customer-centric, and operationally minded individual to serve as Technical Project Lead to provide support to HPC initiatives supporting collaborative research projects with external partners. You will work closely with the line management of NCCS to manage the milestones of the research, development, and production projects within the group to support division efforts. You will manage the scope, schedule, and cost of the various projects within the constraints of the available resources and the operational environment. You will have a direct interface with the project/program sponsor and department oversight personnel, responding to requests for information, reporting on progress, leading review meetings, and generating regular sponsor reports. You will champion new initiatives and lead strategic planning activities to enhance and grow the HPC portfolio.
Lead, Software Engineering, Decision Management Platform - Mastercard, Vancouver BC CA
Do you want to be part of a team which helps prevent fraud on every mastercard transaction in this world? The Decision Management program enables intelligent decision based products through streaming analytics with the ability to govern these decisions and manage their outcomes with business agility. This program leverages business rules & AI engines, a streaming big data cluster, an in memory data grids, APIs, & UIs to deliver real time decisions at global scale