Databricks Engineer

Job Description

We are seeking a skilled Databricks Engineer to join our team. In this role, you will engineer reliable data pipelines for sourcing, processing, distributing, and storing data on cloud data platform infrastructure. Your primary focus will be on developing data engineering techniques that automate manual processes and solve complex business problems. You will also implement data quality and validation checks to maintain a high standard of data quality within enterprise data lakes.

Roles & Responsibilities

  • Design and engineer reliable data pipelines for sourcing, processing, distributing, and storing data using cloud data platform infrastructure.
  • Develop data engineering techniques to automate manual processes and address challenging business problems.
  • Implement and maintain data quality and validation checks within enterprise data lakes.
  • Collaborate with cross-functional teams to understand data requirements and deliver scalable solutions.

Skills

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 4+ years of experience working with big data technologies such as Apache Spark, Kafka, and data lakes.
  • 2+ years of experience with Databricks data engineering capabilities (legacy migration, Delta Lake, pipeline modernization).
  • Proficiency in Python or Scala, with strong SQL skills (stored procedures and triggers, dynamic SQL, query optimization).
  • Hands-on experience with at least one of the leading public cloud data platforms (Azure / AWS / GCP).
  • Familiarity with data validation and quality frameworks, as well as data lineage frameworks.
  • Preferred: Databricks Certified Data Engineer Associate or Professional certification.

Experience

4-7 Years

Location: Bhilai, Indore, Bengaluru
