Infrastructure Engineer
Dataiku is The Universal AI Platform™, giving organizations control over their AI talent, processes, and technologies to unleash the creation of analytics, models, and agents. Providing no-, low-, and full-code capabilities, Dataiku meets teams where they are today, allowing them to begin building with AI using their existing skills and knowledge.
The Platforms Infrastructure team is seeking a driven individual to join us in addressing our evolving infrastructure challenges. At Dataiku, our mission is to architect, deploy, and maintain robust data platforms centered around our core product. We also provide industrialization and production configurations for mission-critical web services.
As domain experts, we proactively develop and disseminate reference infrastructure, best practices, knowledge, and tooling across all technical teams at Dataiku. This role encompasses a broad spectrum of responsibilities, from leveraging high-level managed services at hyperscalers to performing in-depth, low-level Linux debugging.
As a key contributor to the deployment of our core product, the team actively participates in development initiatives related to infrastructure and Linux system interactions.
We foster a collaborative environment and seek a like-minded individual to engage in highly collaborative projects. Effective communication skills are essential for interacting with diverse teams throughout the company.
This position is based in Paris and may be considered for remote work.
Your Responsibilities
- Design, deploy, and manage internal data platforms built on our core product.
- Design, deploy, and configure production environments for multiple web services.
- Collaborate with teams to architect technical solutions, then deploy and manage them.
- Develop core product features around cloud services and Linux integration.
- Guarantee optimal service capacity and robust security for all running workloads and services.
The role might be a good fit if you have
- Proficiency in infrastructure automation using tools such as Terraform.
- Expertise in container technologies and managed Kubernetes clusters.
- Advanced scripting skills in Python.
- Practical experience with networking and compute on a major cloud provider (AWS, Azure, or GCP).
- Demonstrated resilience in resolving complex technical challenges, with a relentless pursuit of root cause analysis.
Bonus points for any of these
- Experience in implementing authentication and authorization systems, including LDAP, SAML, and OAuth2.
- Experience with configuration management tools such as Ansible, Puppet or Chef.
- Knowledge of additional programming languages, particularly Go.
- Experience with networking and compute across multiple cloud providers (AWS, Azure, GCP).