Research Computing Teams #108, 5 Feb 2022
Hi, everyone:
Several readers responded about their version of the challenges I described last week - most teams are still working remote; some had reduced-occupancy office space available for those who wanted to come in, although it sounds like during the Omicron wave that was sparsely taken up where allowed at all. And those who mentioned it agreed uniformly on how hard it was to hire these days.
I think in some places we need to come up with our own solutions to these problems that meet the challenges our teams face (small, relatively slowly-growing teams with wide scope and demands that can change month-to-month and weird funding), and in other places there are opportunities to learn from other kinds of groups. This newsletter is a great medium for sharing resources, but it’s a bit of a slow way to have a back-and-forth on topics of timely interest! I’ve mentioned it before and included it on lists of resources, but the Rands Leadership Slack is a great community for technical managers and leaders, largely from tech and startups; if that’s something that interests you, dear reader, let me know you’ve joined or are considering joining and we can create a research computing and data channel for discussion of our particular needs.
In the meantime, news on my front. I’ve been thinking about people leaving jobs a lot lately, because we just found out one of my team members is leaving, which leaves a big hole in the team - but I’ll also be changing jobs. (More about this in a bit). So we’ve been going through two off-boarding processes at the same time, one (mine) much lengthier than the other, involving two somewhat different groups of people, and requiring knowledge transfer at different levels of granularity, which is an interesting perspective on the situation.
The one thing that hits home most is that I really thought I had absorbed the lessons of an article I’ve referenced a couple of times in the roundup, Always Be Quitting. That article described the advantages - for stakeholder clarity, for your own perspective, and for growth opportunities for your team members - of continually documenting what you do and the state of things, and of teaching team members who want to grow in particular directions different parts of your job. But when I started this offboarding process at the end of last year, I discovered how superficial my efforts in that direction really had been.
Writing a many-page document describing my activities and the state of tasks so that responsibilities could be handed off to various people was an incredibly clarifying experience. It was also one in which I realized how much had gone undocumented previously. Had I written that document a year earlier, and seen everything in one place, I would have focused my energies in the past year in subtly but noticeably different ways. Writing bullet points of priorities and needs and todos scattered in various places didn’t have nearly the same clarifying impact on me, much less on others. And I would have done a much better job of delegating the right tasks to the right people.
I’m also surprised, as I work on offboarding myself, by how much information I thought I had conveyed many, many times in many, many channels and yet hadn’t really been taken in. I’ve seen this come up in many handoff meetings. Luckily most of these things were already documented somewhere, but just because it was written down doesn’t mean people committed it to memory. After all, it’s Jonathan’s job to know that stuff.
So in my next jobs, I will take the “making myself obsolete” part of the job much more seriously; this has been very eye-opening. But I’m really pleased with how the process is going, and how the team members are stepping up. Honestly, after five years the team and the effort are ready for and will benefit from a change; while there’ll be some disruption in the short term, the emerging new leadership team will be stronger than just having me at the helm.
I’ll talk more about my new job in the next issue - it’s a bit of a return to roots, and to being an IC again. But for now, the roundup!
Managing Teams
The Manager’s Stack - Jamie Lawrence
What are your favourite life-of-a-manager or -lead tools? Not team tools - Slack, Zoom, GitHub, etc - but the things you personally use to make yourself productive as a lead or manager?
I can’t imagine work life anymore without me and others using some kind of book-a-meeting-with-me tool like Calendly (even though that particular tool is now getting some pushback). I’ll never do the “actually, Tuesday afternoon’s not good for me” dance five times a week again. For group meetings, setting up a proper Doodle account with my calendar linked means that whenever I respond to a Doodle poll, it automatically fills in my available times.
I’ve got my mutant bullet journal approach just the way I like it for daily note taking (in an Artist’s Loft journal, like a Leuchtturm but much cheaper, with 120gsm paper being perfectly suitable for my preferred pens). Now that we’re purely remote, I no longer rely on online note taking for things like 1-on-1s, and have a paper template I like. Shared documents like quarterly goal setting with each team member are still done as Google Docs running documents, which would doubtless horrify both HR and IT, so don’t tell them. For writing my own stuff, I like Dropbox Paper for live documents, mainly because it exports to Markdown, and for taking linked notes while learning stuff I’ve been using Foam. Sadly, I still haven’t found a personal task manager I like. Most recently I had been playing with Trello for that purpose, but, well, the search continues.
In this article Lawrence talks about his favourite tools as a manager. Not surprisingly, calendar tools play a big role. He prefers SavvyCal to Calendly, uses Reclaim to keep his work and personal calendars in sync, and Fantastical on the Mac for viewing and editing his calendar. One tool he calls out which is really interesting is Krisp, intended for removing the user’s background noise from video calls. However, it also keeps track of what fraction of the meeting he spends talking, which is something we have to watch out for as managers and leads.
What are your favourite tool discoveries of the pandemic? What could you just not live without as a manager? Anything you made yourself, whether a document template or something else? Hit reply or email me at jonathan@researchcomputingteams.org and next issue I’ll include a summary.
Not everyone can become a great leader… - Matt Schellhas
This is a good first-person account of how pernicious the whole “great leaders are born, not made” nonsense is. Schellhas talks about being held back from becoming a manager for years because he didn’t look or seem the part. He did, in the end, become a (successful) manager and leader, and recognizes that at least he got the chance eventually; many never do.
On top of discouraging people who don’t look and act like they came out of central casting for managers from gaining more responsibility and power, this really toxic “good managers have certain personality traits” belief also discourages people from learning how to be better managers, better leaders. How can you learn to be better at something that is innate, right?
Anyway, Schellhas’ article is worth keeping in your back pocket to send to people. We have to stomp out the nonsense “manager == personality type” idea. Being a good manager and leader is no different from any other profession - you have to develop certain skills and behaviours, and continue your professional development over the years to refine your own abilities and grow them in others.
14 strategies to shorten lengthy meetings - Hanna Ross, Fellow.app
It’s been a little while since we’ve had a good meeting article in the roundup. This one covers some ground we’ve seen before, but meeting management is so fundamental that it’s always worth reviewing.
Ross’s points, slightly trimmed for length:
- Have a clear purpose and agenda for the meeting in advance, and send background materials - can’t emphasize this enough. Agendas and materials are important, but without an underlying purpose, you have no way to assess whether the meeting was good or not. The agenda is in service of the purpose, and the materials in service of the agenda.
- Limit number of attendees - including yourself, say no unless there’s a compelling reason for you to be there
- Assign meeting roles - facilitator, note taker, chair
- Assign lengths of time to agenda slots, with a parking lot for topics that come up.
- Say no to going off topic.
- In recurring meetings, shuffle the order of topics. (I haven’t seen this before - that’s a good idea!)
- Start at an unusual time - we’ve been regularly having meetings start at 5 minutes past the hour and half-hour, which is great, but it’s especially handy if you…
- Use an unusual duration - are you sure that 55 minute meeting can’t be done in 45 minutes? Or 40?
- Have meeting ground rules for behaviour
- Do Q&A’s asynchronously.
Managing Your Own Career
Know how your org works (or how to become a more effective engineer) - Cindy Sridharan
Managing Up: The Do’s And Don’ts And Why It’s Important For Success - Sam Milbrath
As we grow more senior, we want our work to have more impact. To do that, we need to better understand the context of our work - within the institution, across our specific research community, and with our funders. Doing the best possible work on one technology or technique while the community is moving to another is a recipe for wasted effort. Great ambitions to expand a project without understanding what funders are looking for or what your institution’s priorities are is a recipe for frustration.
Sridharan gives some specific examples for understanding what is going on in your institution. The context of her article is the private sector, but much of it carries over. In particular, to be successful in the long term, it’s important to learn
- how to build lasting relationships with other people on your team or organization that will ultimately dictate the success of a project
- how to effectively pitch projects or improvements and see these through to completion
- how to manage conflicting priorities or expectations
- how to best deal with setbacks
- how to identify and drive quick wins
- how to use this knowledge to judiciously pick battles
A specific case of knowing how your organization works, and how your work fits inside that context, is “managing up”. Like “networking”, “managing up” has acquired a gross, smarmy reputation. But “networking” is just developing professional relationships with people in your community. And “managing up” is just growing a productive professional relationship with your boss.
The usual challenges to growing that productive relationship are that:
- your boss may have a very different work and communication style than you do
- they may have a lot more on their plate than just your work, and
- that other work may give context to yours that you don’t know about.
These aren’t that different from the challenges of working with your team members. They see things from their hands-on point of view that you don’t. They have communication styles different from your own. They face challenges you’re unaware of.
In both cases, the way you overcome these challenges is the same: regular conversations, learning about opportunities and needs, and chipping in where possible to help both the people and the overall mission succeed.
Milbrath’s article gives some do’s and don’ts for managing your single most consequential professional relationship.
Career Advice Nobody Gave Me: Never Ignore a Recruiter - Alex Chesser
If you’re interested in working in the private sector at all, this is a worthwhile read. No, you shouldn’t hop on a call with everyone who sends you spam on LinkedIn. But as Chesser says, there are real jobs out there with good recruiters sourcing for them. And you can’t always tell who is who from the messages.
Chesser provides some scripts for interacting with recruiters. The scripts aim to weed out low-value “just reaching out” messages while building a conversation with the recruiters that have career-changing possibilities. The scripts are well worth stealing.
In research, and especially academia, we really limit ourselves by not actively recruiting for new team members. Yes, recruiters are expensive - two months’ salary for a successful hire isn’t uncommon. But not being willing to pay that is more evidence that we aren’t willing to invest in excellent staff. (See also: not wanting to pay for good tooling). Almost everywhere in research you’ll see flowery paragraphs saying “people are our most important resource.” But too many teams are willing to spend months of people’s time creating and evaluating RFPs for equipment, while treating hiring as an afterthought worthy of little effort and less money.
Research Software Development
Fixing Performance Regressions Before they Happen - Angus Croll
Performance matters for research software, although how much it matters depends on the application. I’ve seen groups with performance tests as part of their CI/CD pipeline (the Chapel language nightly performance tests come to mind), but performance is tricky. There’s always some noise in runtime numbers. Other system processes might be taking up CPU time or memory bandwidth. The state of various caches will vary.
Here’s how Netflix keeps an eye on performance in their development pipeline. They run performance (time and memory) tests of some sort on every commit. They run each test three times and take the median. Then they flag a significant change if the new median differs by more than four standard deviations from the recent mean-of-medians.
We probably don’t have the same development velocity as Netflix, so some of the level of automation they have isn’t necessary. Having regular time and memory testing (integration and some unit tests) of our code with a clear metric for when things change enough to merit investigation is something that’s attainable for most of our groups, though, if those measures are important to us.
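To make that concrete, below is a minimal sketch of that kind of check in Python; the function name, the threshold, and the numbers are illustrative assumptions of mine, not Netflix’s actual tooling.

```python
import statistics

def flag_regression(new_runs, recent_medians, threshold_sigma=4.0):
    """Return (flagged, new_median) for one benchmark on a new commit.

    new_runs: runtimes (e.g. seconds) from repeated runs on the new commit
    recent_medians: per-commit median runtimes from recently accepted commits
    """
    new_median = statistics.median(new_runs)
    baseline_mean = statistics.mean(recent_medians)
    baseline_std = statistics.stdev(recent_medians)
    if baseline_std == 0:
        # no noise at all in the history; any change is notable
        return new_median != baseline_mean, new_median
    deviation = abs(new_median - baseline_mean) / baseline_std
    return deviation > threshold_sigma, new_median

# e.g. three runs on the new commit, compared against the last ten commits
flagged, median = flag_regression(
    [1.31, 1.29, 1.34],
    [1.20, 1.22, 1.19, 1.21, 1.20, 1.23, 1.18, 1.22, 1.21, 1.20],
)
```

The design choice worth copying is comparing a median of a few noisy runs against the spread of recent history, rather than against a single fixed threshold.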
How to fix your scientific coding errors - Jeffrey M. Perkel, Nature
There won’t be much new here for you, my colleague, but it’s heartening to me to see how mainstream once-niche discussions of the challenges of scientific software development are becoming. Here, in a feature in Nature(!), there’s discussion of code review, version control, testing, and automation and repeatability. There’s even a couple of favourably described stories of researchers issuing errata for publications after finding a bug, in a pretty transparent attempt to normalize such things (which is great! In Nature!!). Roundup veterans like Lorena Barba (#39) and friend of the newsletter C. Titus Brown (#98, #90) make an appearance.
One of the interviewed PIs, Dr Julia Strand in Psychology at Carleton College, was so affected by a potentially catastrophic bug in one piece of her software that she developed an approach to handling errors in research more generally. Her resource, “Error Tight”, aims to reduce the frequency of errors, and the severity of their outcomes, across the research process. This is starting to look more like the safety-by-design approaches used in safety-critical areas like aviation. That’s a fascinating development, and the resource is well worth a look. Also, it introduced me to PsyArXiv, preprints for the psychological sciences, which I’m ashamed to say I didn’t know existed.
How Software in the Life Sciences Actually Works (And Doesn’t Work) - Elliot Hershberg, New Science
This too covers much familiar ground for readers of the newsletter. The inadequacy of existing research funding mechanisms - which are aimed at research outputs like experiments and papers - for supporting research inputs like production research software is pretty well known.
But this is a well-written article by a grad student, and it has a bit of a different perspective than most articles on the topic. For one, it mentions with a fair amount of hopefulness new institutions and funding organizations like Arcadia, Arc, and others. Second, it’s positive about a well-established but often ignored funding mechanism for research inputs - selling a product for money. Having simulation and analysis tools be open-source is important for transparency of science. But still, there are mechanisms like hosted/managed SaaS offerings or charging for feature development which are under-explored in our ecosystem.
Research Data Management and Analysis
Intel oneAPI’s Unified Programming Model for Python Machine Learning - Janakiram MSV, The New Stack
It’s interesting, and maybe surprising to those of us who have been here a while, how big the Python data analysis ecosystem has become, and how robust it is to big changes. Various accelerated computing frameworks, from NVIDIA and now Intel with oneAPI, are meeting data scientists where they are, accelerating Python frameworks like pandas or scikit-learn or even NumPy with their lower-level tools rather than trying to move them to something else.
And, in something of a tribute to those Python frameworks, this approach seems to be largely successful. The APIs defined by those libraries seem to be perfectly amenable to drastic re-implementation with quite different technologies.
This article has a description of and links to Intel’s oneAPI implementation of acceleration for scikit-learn and pandas. oneAPI is an industry-wide (but, it must be said, Intel-led) effort to have a common and vendor-independent programming framework for a wide range of CPUs and accelerators. The author promises updates with tutorials for installing the toolkits and training models.
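To give a flavour of what “meeting data scientists where they are” looks like in practice, the pattern is a drop-in patch rather than a new API. The sketch below assumes the scikit-learn-intelex package and its patch_sklearn() entry point; check the article and Intel’s documentation for current install and usage details.

```python
# Drop-in acceleration pattern; assumes the scikit-learn-intelex package
# (imported as sklearnex) is installed - see the linked article for details.
from sklearnex import patch_sklearn
patch_sklearn()  # swap in accelerated implementations where available

# Existing scikit-learn code runs unchanged after the patch.
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(100_000, 10)
labels = KMeans(n_clusters=8, random_state=0).fit_predict(X)
```

The nice property here is that existing analysis scripts stay untouched; any speedup happens behind the same estimator classes.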
Research Computing Systems
NHR @ Göttingen Security Workshop Talks - From 16 Dec 2021
Last December, GWDG held an HPC Security workshop; most of the slides are here. There are three interesting-looking talks:
- Security in HLRN - Concept and Experiences, Tim Ehlers, which covers the basic approaches used when designing and spinning up the HLRN system, and how they played out
- ”Supercomputers offline across Europe”: Forensic investigation of the Taurus HPC cluster, Pascal Brückner, which gives a pretty detailed overview of what happened on one system during the HPC cluster security catastrophe of spring/summer 2020, and the analysis they performed
- Security in HPC with Containers, Holger Gantikow, where Gantikow from Atos goes over the pros and cons of containers for HPC, particularly around a software bill of materials
Sarus achieves container portability with near-native performance - Raluca Hodoroaba, CSCS
Sarus - the Sarus team
If you’re not tired of playing with Charliecloud and Shifter and Podman and Singularity/Apptainer and others, there’s another OCI-compliant container image and execution system, this one aiming specifically at HPC, by which it means:
- Plays well with POSIX-type security approaches (e.g. based on local userids and permissions as opposed to cloud-native key-based approaches)
- Provides hooks for MPI and SSH and Slurm
- Has higher-level image definitions which makes it easier for an image to take advantage of different architectures (I don’t understand this part, but it seems to be the focus of the CSCS article)
Has anyone looked at this? I’m pretty sure I don’t understand the distinguishing features here. There’s a paper after which I remain confused.
Cray’s Slingshot Interconnect is at the Heart of HPE’s HPC and AI Ambitions - Timothy Prickett Morgan
A good overview of the high-performance networking competitive environment in which Cray/HPE’s Slingshot interconnect finds itself, and how central it is to HPE’s current plans. There are some hints as to the different forces in play which will shape upcoming generations.
Emerging Technologies and Practices
Real-time machine learning: challenges and solutions - Chip Huyen
An increasing use-case for computing, data, and software is online or real-time machine learning - both training and inference. Data and results come in and a model is updated, or a measurement comes in and a prediction or classification is made. This is a challenge for the kinds of systems or analysis tools or data stores that we’re used to dealing with. We’re pretty capable of putting together complete solutions that handle batch modes of operation, but anything “online” or “near-real-time” puts very different requirements on all aspects of software, systems, and stores.
This long read by Huyen talks about the challenges of real-time learning - first with online prediction, then with continually-updated training. She then walks through a staged approach to get from our more usual batch approaches to something like real-time prediction or training.
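As a toy illustration of the gap between the batch world most of us are used to and continually-updated training (my own sketch, not anything from Huyen’s article), scikit-learn’s incremental partial_fit interface shows the basic shape: the model is nudged by each new batch of labelled data as it arrives, rather than being refit from scratch.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Continually-updated training via incremental fitting; a real online system
# adds streaming feature pipelines, monitoring, and rollback around this loop.
model = SGDClassifier()
classes = np.array([0, 1])
rng = np.random.default_rng(0)

for _ in range(100):                      # each pass stands in for newly-arrived data
    X_batch = rng.normal(size=(32, 5))
    y_batch = (X_batch.sum(axis=1) > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)  # update, don't refit from scratch

prediction = model.predict(rng.normal(size=(1, 5)))       # online prediction on one new observation
```

Most of the difficulty Huyen describes lives in everything around that loop - fresh features, monitoring for drift, and safely rolling back a model that has learned something wrong.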
A lot of us are going to have to support use-cases like this in the near future, so even if this isn’t a challenge you have right now, it’s worth reading to understand the needs and what’s involved.
Random
Implementations of Wordle in Google Sheets, Word ’97, over ssh, and for the best “personal digital assistant” of the 1990s, the Palm Pilot.
Connect to a database using CSS.
Emacs org-mode as a SQL playground.
The case against ZFS on Linux.
Live incremental backups of Qemu VMs.
Performance of large JSON queries in Postgres - tl;dr HSTORE is good if you just need string key-value pairs, otherwise JSONB is the winner, and either way you should use compression because once rows get bigger than 2kB everything (not just JSON) gets way slower. PS I like the experimental design baseline choice - prefixes of a BYTEA column.
Open-source is good, but not because it magically conveys improved security.
I’ve been hearing about Gitpod as a slightly different approach to some of the same use cases as GitHub Codespaces, and with a more predictable (not necessarily better, just more certainty upfront) billing model - anyone have any experience?
That’s it…
And that’s it for another week. Let me know what you thought, or if you have anything you’d like to share about the newsletter or management. Just email me or reply to this newsletter if you get it in your inbox.
Have a great weekend, and good luck in the coming week with your research computing team,
Jonathan
About This Newsletter
Research computing - the intertwined streams of software development, systems, data management and analysis - is much more than technology. It’s teams, it’s communities, it’s product management - it’s people. It’s also one of the most important ways we can be supporting science, scholarship, and R&D today.
So research computing teams are too important to research to be managed poorly. But no one teaches us how to be effective managers and leaders in academia. We have an advantage, though - working in research collaborations has taught us the advanced management skills, but not the basics.
This newsletter focusses on providing new and experienced research computing and data managers the tools they need to be good managers without the stress, and to help their teams achieve great results and grow their careers.
Jobs Leading Research Computing Teams
This week’s new-listing highlights are below; the full listing of 148 jobs is, as ever, available on the job board.
Software Engineering Manager, Extreme-Scale Computing - Intel, Hillsboro OR USA
We are looking for a manager for a team of 5+ engineers who develop software for advanced computer architecture pathfinding and applied research. The team develops SW stacks from the ground up for novel architectures, testing those SW stacks on FPGAs and prototype hardware and porting SW applications frameworks to the new architectures. As a manager, you will set priorities for the team, get results across boundaries, ensure an inclusive work environment, develop employees, and manage performance. You will also provide architectural and technical guidance, and ensure that sound engineering practices are followed.
Software Engineering Manager, Data Centre Group - Intel, Shannon IE
The Data Center Group (DCG) is at the heart of Intel’s transformation from a PC company to a company that runs the cloud and billions of smart, connected computing devices. The data center is the underpinning for every data-driven service, from artificial intelligence to 5G to high-performance computing, and DCG delivers the products and technologies—spanning software, processors, storage, I/O, and networking solutions—that fuel cloud, communications, enterprise, and government data centers around the world.
Assistant Director, Scientific Applications, Anvil - Purdue University, West Lafayette IN USA
As the Assistant Director of Scientific Applications, you will provide leadership in the support of computational science within Purdue’s campus cyberinfrastructure, and in support of the NSF-funded “Anvil” system. In collaboration with campus and national stakeholders, you will implement and oversee the strategic direction for research computing at Purdue. You will work extensively with the user communities to maintain the highest level of service and satisfaction through effective service delivery, collecting feedback on user needs, potential use case scenarios, and user priorities to be integrated in the future direction for computational resources, as well as working closely with faculty members, IT leaders, and peers at other institutions. You will also assist the Executive Director of Research Computing with the operational oversight of Research Computing services including budgeting, resource planning, and creation of policy, procedures, and standards.
Manager, Research Software Engineering - Princeton University, Princeton NJ USA
The Research Software Engineering (RSE) Group, within Princeton Research Computing, is hiring a Manager of Research Software Engineering. You will report to the Director of Research Software Engineering for Computational and Data Science. In this position you will build and lead a growing team of Research Software Engineers who provide dedicated expertise to researchers by creating the most efficient, scalable, and sustainable research code possible in order to enable new scientific advances. You’ll oversee and encourage the professional development of the research software engineers by engaging with the broader research community, and find ways for them to give back by providing training workshops and/or participating in conferences. Finally, as part of the RSE management team, you will contribute to the strategic vision and mission for Research Software Engineering at Princeton.
Director of Research Computing - OVPR - Carnegie Mellon University, Pittsburgh PA USA
This role will serve in a senior leadership capacity for both the Office of the Vice-President for Research (OVPR) and the Office of the Chief Information Officer (OCIO). This leadership role requires proven management and leadership skills with exemplary communication skills to act as a research computing evangelist across the university’s colleges and programs. In the first twelve months in the role, the Director will be expected to work with a committee of academic and administrative designates to understand computing and data needs, current practices, and opportunities for all of the university’s colleges and programs. From there, the Director will be tasked to work with the committee and leadership to charter, build and maintain the needed service offerings, funding model(s), resourcing plan, and technology roadmap for the program to help all constituents meet their research computing, data, and security requirements. The successful candidate will have proven abilities to manage several operations simultaneously while effectively setting priorities to achieve defined goals.
Manager, Scientific Collaborations - Optina Diagnostics, Montreal QC CA
At Optina, we are shaping the future of diagnostics. With its simple eye test, Optina is about to transform neurodegenerative and other systemic diseases and improve the lives of millions of worried patients and their families around the world. In this role, you will be leading scientific oversight and guidance of scientific research proposals for Optina Deep retinal imaging platform in both Machine and Deep Learning. Working with Key Opinion Leaders and Subject Matter Experts at collaborating institutes and universities, you will support the progression of early phase portfolio, including managing assets, supporting clinical-in-use assessments, and ensuring the regulatory and quality objectives are met.
Scientific Project Manager - Mila - Quebec Artificial Intelligence Institute, Montreal QC CA
Set up, support and manage a portfolio of Prof. Bengio’s research projects and activities in AI and molecule discovery. Create and maintain a collaborative and consultative relationship with the numerous project stakeholders, including the main researchers, students and external collaborators. Provide day-to-day scientific management, support for students and research professionals, and coordination of different project components.
Monitor the progress of scientific deliverables. Mentor and assist in the supervision of Prof. Bengio’s students who are involved in AI projects for molecule discovery. Contribute to the implementation and support of the research data management plan. Assist in setting up requests for computing resources (Compute Canada). Prepare scientific papers, presentations and other documents related to the managerial mandate.
Head of Data Management and Engineering - CSL Behring, Berne CH
CSL Behring is a global leader in developing and delivering high-quality medicines that treat people with rare and serious diseases. The Head of Data Management and Engineering is responsible for leading a team of hands-on data managers and engineers and providing data oversight across the Research project portfolio. The incumbent will lead the creation of Research project data governance, architecture, quality controls and processes for study data (human and non-human) and big data (sequencing and imaging) management, including ingestion, validation, and curation. S/he manages strategic partnerships and external talent to facilitate and administer Research Data Science flows and data management platforms (these may include cloud-based High Performance Computing platform, QIAGEN OmicSoft, Proscia Concentriq, and Electronic Lab Notebook).
Sr. Product Manager, Terra - Verily Life Sciences (Google), Cambridge MA or San Francisco CA USA or Waterloo ON CA
As a Senior Product Manager on our team, you’ll lead cross-functional initiatives across multiple organizations in the development and commercialization of Terra, which aims to transform biomedical research by making the world’s most scientifically valuable data discoverable, accessible, and analyzable. We are building a data ecosystem that brings together the most valuable data with the most powerful tools and you’ll play a critical role in building that ecosystem.
Executive Director, Academic Technologies, Innovation & Research Computing - Georgia Tech, Atlanta GA USA
The Executive Director position for Academic Tech, Innovation and Research Computing serves to support a strategic focus on the research and academic technologies of the Institute by partnering with research and academic entities, and leading and executing the aspects of IT strategy that enable the Institute to achieve its research, teaching and learning, and innovation goals. Responsible for establishing group/departmental/division goals, determining the resources needed to meet those goals, assessing group/departmental/division performance feedback, and making pay decisions. This position will interact on a regular basis with staff, researchers, faculty, and students; it will typically advise and counsel staff, and will supervise staff.
Senior/Consulting Manager, HEM (Health Economics Modeling) - Certara, Berlin DE
In the past 6 years, over 90% of companies that received new drug approvals by the FDA used our software or services. Our integrated and proprietary end-to-end platform with biosimulation, regulatory science, and market access solutions combined with our strong team of scientists and subject matter experts enable our clients to have: informed decision-making & reduced clinical trial burden, higher R&D productivity, improved patient outcomes & increased patient access to medicines, accelerated regulatory approval.
Certara is a global team that fosters and nurtures a diverse and inclusive culture. We are proud to be an Equal Opportunity Employer. The collective sum of our individual differences, life experiences, knowledge, innovation, self-expression, and unique capabilities and talent represents a significant part of our culture, our reputation and our achievement.
Engineering Manager, Modelling & Simulation, Defence - Improbable, London UK
Your mission is to lead and grow our new modelling components team within our decision science product development division. This role could be considered similar to a data science manager. You’d be leading a team of applied scientists who use diverse techniques across data science, machine learning, agent-based modelling, engineering and research. They are responsible for the development of scientific models that describe and animate the synthetic environments for our “Synthetic Environment Starter Pack”. These models will focus on creating representative behavioural models of civilians in large scale simulations, covering areas such as geographic movement, communication (social networks) and demographic attributes.
Senior Manager, Bioinformatics - Exact Sciences, Oxford UK
Based in Oxford, our Exact Sciences center of innovation aims to have a major impact on personalised medicine with TAPS, a breakthrough technology that makes epigenetic analysis of DNA methylation more sensitive, accurate and cheaper than ever before. We have an opportunity for a Senior Manager, Bioinformatics to drive the creation of new methods for the analysis & interpretation of Next Generation Sequencing (NGS) data generated using TAPS. Working with a highly skilled interdisciplinary team at the interface of chemistry, molecular biology, bioinformatics and software you will establish tools and methods to enable the analysis of DNA sequence and methylation data in support of our R&D projects and clinical studies.
Associate CIO - Advanced Research Computing - New Jersey Institute of Technology, Newark NJ
The Associate CIO - Advanced Research Computing will provide leadership in advancing NJIT’s research by architecting platforms that fuel strategic research growth at this premier technology institute. A researcher at heart, with deep expertise in networking, advanced research computing, and data sciences, this individual will envision the next-generation of research computing at NJIT by fostering collaboration across NJIT’s diverse technology research faculty. This position will lead engagement and outreach with the university’s research faculty, expand collaboration with national and international research organizations, and ensure outstanding service to the university’s research faculty. The position will build a team to manage the infrastructure, provide training, and related services to create an enabling experience for the university’s research faculty. The Associate Vice Provost of Research Computing will lead the successful journey of modernizing legacy platforms leveraging cloud, enhance governance and decision making, and establish predictable and consistent services to the university’s research community. All duties will need to be performed in accordance with university policies, procedures and core values.