Google Cloud Platform (regular)
Get to know us better
CodiLime is a software and network engineering industry expert and the first-choice service partner for top global networking hardware providers, software providers and telecoms.
We create proofs-of-concept, help our clients build new products, nurture existing ones and provide services in production environments.
Our clients include both tech startups and big players in various industries and geographic locations (US, Japan, Israel, Europe).
While no longer a startup (we have 250+ people on board and have been operating since 2011), we’ve kept our people-oriented culture. Our values are simple:
Act to deliver.
Disrupt to grow.
Team up to win.
The project and the team
Our client is a stealth startup building a SaaS platform for creating and managing MLOps pipelines, used to validate hypotheses and steer base application flows in a feedback loop.
The product aims to enrich the base application's functionality for customers across different industries. Therefore, apart from the core functionality, the tool also includes integrations with third-party tools and APIs.
Our project is expected to handle large amounts of information and act on it instantaneously, so handling big data is our bread and butter.
What else you should know:
The team consists of fewer than 15 people, including an architect, a project manager, ML developers, and multi-tech, multi-language engineers familiar with numerous APIs, data structuring and processing techniques, and ways of presenting output depending on the business needs.
We use the Scrum/Agile methodology.
We plan to port our existing cloud solution to AWS as a second cloud vendor.
The client is based in the US.
As part of the project team, you will be responsible for:
Leading the deployment, setup, and initial configuration of software products
Deploying code to GCP and/or AWS environments using scripts and automation frameworks
Working with internal and external engineering teams on automating deployments, CI/CD pipelines, release management, monitoring, and continuous improvement initiatives
Creating cloud services using Terraform scripts and GitHub Actions
Setting up authentication/SSO using pre-configured workflows
Working with the broader DevOps team to ensure best practices and standards are followed, covering infrastructure/hosting, data security, user access management, permissions, monitoring, patching, and logging/auditing
Keeping internal operations playbooks, knowledge base information, and automation scripts up-to-date
Do we have a match?
As a Senior DevOps Engineer, you must meet the following knowledge/experience criteria:
Cloud concepts and technologies on GCP and/or AWS
Automation scripting using Terraform, Ansible, etc.
Cloud-native services from the mentioned providers (e.g., GCP Cloud SQL/AWS RDS, S3, Google Cloud Functions/AWS Lambda)
A basic grasp of Big Data services on GCP: BigQuery, Dataproc, and Dataflow
Extensive experience with Linux and/or Windows/Mac administration and scripting
Experience working with CI/CD pipelines and continuous testing tools
Solid understanding of DevOps principles and infrastructure
Version control (GitHub)
Good communication skills and English (B2 level); the ability to discuss technical solutions with the team and the customer’s technical representatives in order to validate them with the client
More reasons to join us
Flexible working hours and work mode: fully remote, in the office, or hybrid
Professional growth supported by internal training sessions and a training budget
Solid onboarding with a hands-on approach to give you an easy start
A great atmosphere among professionals who are passionate about their work
The ability to change the project you work on