Principal Systems Engineer - Big Data (DevOps)
We are a growing SaaS company based in Redwood Shores, CA that offers its employees best-in-class problems to solve, a bit of travel, and the opportunity to work remotely.
The world’s leading energy companies turn to AutoGrid to integrate all distributed energy resources, turn on new revenue streams and drive deeper engagement with their customers. Our suite of Energy Internet applications allows utilities, electricity retailers, renewable energy project developers and energy service providers to deliver clean, affordable and reliable energy in a distributed energy world.
How do we do this? By pioneering the science of flexibility management. This innovative approach enables energy providers to mine the Energy Internet’s rich data lode to extract flexible capacity from distributed energy resources. In turn, flexible capacity can be used to balance energy supply and demand in real time, increase the productivity and value of energy assets, and deliver new energy services to customers.
Established at Stanford University in 2011, we have assembled a team of world-class software architects, electrical and computer engineers, data scientists and energy experts who apply cutting-edge analytics and in-depth energy data science to solve the world’s most critical energy problems.
We’re looking for a Principal Systems Engineer - Big Data (DevOps). We offer:
remote work with a globally distributed team;
an opportunity to join an AWS Advanced Technology Partner;
and an unrivalled global Kubernetes footprint providing critical services to energy providers.
The ideal candidate will have at least five years' experience with Hadoop and Kafka, in particular Strimzi-managed Kafka. You are currently working with AWS EMR and Kubernetes. You understand Ansible and are prepared to triage, diagnose, and remediate production failures at scale; as such, you have first-hand experience with CloudWatch, Lambda, and the like.
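For context on the Strimzi experience we look for: Strimzi declares Kafka clusters as Kubernetes custom resources. The following is a minimal illustrative sketch only, assuming the v1beta2 API; the cluster name, sizes, and listener settings are placeholders, not our actual configuration:

```yaml
# Hypothetical Strimzi Kafka custom resource (v1beta2 API).
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: example-cluster        # placeholder name
spec:
  kafka:
    replicas: 3                # broker count
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim
      size: 100Gi              # placeholder volume size
      deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
```

Day-to-day operations then become declarative: scaling or reconfiguring the cluster means editing this resource and letting the Strimzi operator reconcile the change.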
The successful candidate will design, implement, and operate AWS EMR and Kubernetes-based Kafka solutions. If you love emerging technology and want to work with cloud- and Kubernetes-native components, we have that too.
You have been using Linux for a long time. Systemd is in your DNA, and you solve problems the right way, with Linux capabilities and SELinux in enforcing mode.
We manage our infrastructure with Git as the source of truth; it is home to the Ansible code that does all the heavy lifting. We run proper immutable infrastructure using declarative CI/CD pipelines in Jenkins.
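To illustrate the style of Ansible work involved, here is a minimal sketch of an idempotent playbook; the host group, file paths, and service name are hypothetical examples, not our actual code:

```yaml
# playbooks/kafka.yml -- hypothetical example playbook
- name: Configure Kafka broker hosts
  hosts: kafka_brokers          # placeholder inventory group
  become: true
  tasks:
    - name: Ensure Kafka service account exists
      ansible.builtin.user:
        name: kafka
        system: true
        shell: /usr/sbin/nologin

    - name: Template broker configuration
      ansible.builtin.template:
        src: server.properties.j2
        dest: /etc/kafka/server.properties
      notify: Restart Kafka

  handlers:
    - name: Restart Kafka
      ansible.builtin.systemd:
        name: kafka
        state: restarted
```

Because every task is declarative and idempotent, the same playbook can be run repeatedly from CI/CD against the whole estate without side effects.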
Why Should You Apply?
Do you live to quench your technical curiosity? Do you have a work ethic that rivals your peers? Are you the one who answers the hard questions? Do you tire of waiting on others and replace manual work with code? If you have AWS EMR, Kubernetes, and immutable infrastructure in your blood, please read on.
We manage a portfolio of solutions for large energy providers around the world and, as such, have a global server and data footprint. We need someone with strong proficiency in Hadoop and Kafka to assume the role of Big Data subject-matter expert.
We are an infrastructure-as-code shop, maintaining all configuration in Ansible. The estate is cloud native in AWS. Identity is managed with Kerberos, LDAP, and OIDC. Our estate is complex, and we need a rock star to realise the next milestones while planning the next revolution.
You will work independently and lead from the front. The source of truth is GitHub, with CI/CD via CloudBees Jenkins pipelines. You love the pressure that comes with delivering complex, high-stakes systems as code. We run a complex cloud-native distributed system that requires experience and discipline. You are collaborative, reliable, and clever. If you have what it takes, please submit your CV.