Senior Software Engineer, Platform - Data + AI (Back-End)

C3.ai, Inc. (NYSE:AI) is a leading Enterprise AI software provider for accelerating digital transformation. The proven C3 AI Platform provides comprehensive services to build enterprise-scale AI applications more efficiently and cost-effectively than alternative approaches. The C3 AI Platform supports the value chain in any industry with prebuilt, configurable, high-value AI applications for reliability, fraud detection, sensor network health, supply network optimization, energy management, anti-money laundering, and customer engagement. Learn more at: C3 AI

C3 AI is looking for Senior Software Engineers to join the rapidly growing Data org within the Platform Engineering department. Successful candidates will get the opportunity to work on high-value technologies at the intersection of large-scale distributed systems, data infrastructure, and machine learning. You will design, develop, and maintain various features in a highly scalable and extensible AI/ML platform for large-scale applications, involving data science, distributed systems, and multi-cloud strategy.

You will be given opportunities to take ownership of components, collaborate to drive technical direction, and work on interesting, impactful projects. Join us in building a next-generation, petabyte-scale AI/ML platform that powers some of the world’s largest companies in Energy, Financial Services, Utilities, Health Care, Aerospace, Defense, and more. Accelerate your career at the leading enterprise AI company on a hyper-growth trajectory.

Responsibilities:

  • Design and develop infrastructure and services that enable data pipelines at petabyte scale and beyond.
  • Design and develop abstractions over datastores such as Cassandra, PostgreSQL, Snowflake, etc.
  • Design and develop file system abstractions over AWS S3, Azure Blob Storage, HDFS, etc.
  • Design and develop connectors to various external data stores.
  • Design and develop distributed system components for stream processing, queueing, batch processing, analytics engines, etc.
  • Develop and maintain industry-leading, high-performance APIs for AI/ML applications.
  • Develop and maintain features for distributed computations over large-scale data for ML workflows.
  • Design and develop ML-specific data systems, such as feature stores, and behavioral frameworks, such as recommendation engines.
  • Design and develop integrations with distributed computing technologies such as Apache Spark, Ray, etc. for data exploration and ML workload orchestration.
  • Design and develop integrations with data analysis libraries such as Pandas, Koalas, etc.
  • Develop and productionize AI/ML models for failure prediction, data schema inference, etc.
  • Work on frameworks for performance, scalability, and reliability tracking over different components of a highly extensible AI/ML platform.
  • Work with architects, product managers, and software engineers across teams in a highly collaborative environment.
  • Participate in technical discussions and provide insights.
  • Write clean code following a test-driven methodology.
  • Deliver on commitments promptly, following an agile software development methodology.

Qualifications:

  • Bachelor of Science in Computer Science, Computer Engineering, or a related field.
  • Strong understanding of Computer Science fundamentals.
  • High proficiency in coding with Java, C++, C#, or another compiled language; Python is also acceptable.
  • Strong competency in object-oriented programming, data structures, algorithms, and software design patterns.
  • Experience with version control systems such as Git.
  • Experience with large-scale distributed systems.
  • Experience with any public cloud platform (AWS, Azure, GCP).
  • Some familiarity with distributed computing technologies (e.g., Hadoop, Spark, Kafka). Familiarity with managed versions of these technologies on public cloud platforms is also acceptable.
  • Familiarity with technologies in the modern data science/analysis and engineering ecosystem (e.g., Pandas, Koalas).
  • Strong verbal and written technical communication skills to facilitate collaboration.
  • Thrive in a fast-paced, dynamic environment and value end-to-end ownership of components.
  • Intellectually curious and open to challenges.

Preferred Qualifications:

  • Advanced degree in engineering, sciences, or a related field.
  • Experience with Agile development methodology.
  • Experience developing and working with REST and/or GraphQL APIs.
  • Experience building scalable and reliable data pipelines.
  • Experience with integration of data from multiple sources.
  • Experience working with analytics and/or data processing engines.
  • Experience developing distributed computation over large-scale data.
  • Experience working with distributed computing frameworks (e.g., Hadoop, Spark, Kafka).
  • Experience with data science/analysis libraries (e.g., Pandas, Koalas).
  • Experience with task schedulers in distributed computing (e.g., Spark, Ray, Dask).
  • Familiarity with machine learning workload orchestration in a distributed computing environment.
  • Familiarity with workflow execution and/or optimization using DAGs, ideally for machine learning use-cases.
  • Conceptual understanding of orchestration and resource provisioning systems (e.g., Kubernetes).

C3 AI provides excellent benefits, a competitive compensation package, and a generous equity plan.

California Pay Range
$145,000 – $187,000 USD

C3 AI is proud to be an Equal Opportunity and Affirmative Action Employer. We do not discriminate on the basis of any legally protected characteristics, including disabled and veteran status. 

Location
Redwood City, CA (Onsite) | Full-time