Data Engineer - Zurich Asuransi Indonesia
Job Summary
Responsible for the identification, assessment, and design of data engineering solutions, infrastructure, and systems that support data-driven decision-making and analysis. Data engineers enable the organization to manage, process, and analyze data effectively and efficiently, helping to unlock valuable insights from data.
Key Requirements
- 3+ years’ experience with Spark SQL, Python, and PySpark for data engineering workflows
- Strong proficiency in dimensional modeling and star schema design for analytical workloads
- Experience implementing automated testing and CI/CD pipelines for data workflows
- Familiarity with GitHub operations and collaborative development practices
- Demonstrated ability to optimize engineering workflow jobs for performance and cost efficiency
- Experience with cloud data services and infrastructure (AWS, Azure, or GCP)
- Proficiency with IDE tools such as Visual Studio Code for efficient development
- Experience with the Databricks platform is a plus
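Several of the requirements above (automated testing, CI/CD for data workflows) come down to validating data before it lands in analytical tables. The following is a minimal pure-Python sketch of that kind of check; in a Databricks pipeline this logic would typically run over Spark DataFrames rather than lists of dicts, and all field names here are illustrative:

```python
def validate_rows(rows, required_fields):
    """Return (valid_rows, errors) after basic data quality checks.

    rows: list of dicts representing source records.
    required_fields: fields that must be present and non-null.
    """
    valid, errors = [], []
    for i, row in enumerate(rows):
        # Collect every required field that is absent or null.
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            errors.append((i, f"missing fields: {missing}"))
        else:
            valid.append(row)
    return valid, errors


# Reconciliation check: input count must equal valid + rejected.
rows = [
    {"policy_id": "P1", "premium": 100.0},
    {"policy_id": None, "premium": 50.0},
]
valid, errors = validate_rows(rows, ["policy_id", "premium"])
assert len(valid) + len(errors) == len(rows)
```

Checks like this are what the automated unit and integration tests in a data CI/CD pipeline would exercise on every commit.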
Key Accountabilities
Designs, develops, and validates data processes. Data engineers develop data pipelines and support their implementation, ensuring data solutions align with business objectives while looking ahead to understand future technology options for the business. They serve as technical experts in a specific process or product area, conducting process reviews and initiating change to continuously improve the efficiency and quality of services to internal customers. They research external primary data sources, select relevant information, continually evaluate key themes in technology, and make recommendations to inform policy and/or product development in their own area of IT.
Key Responsibilities
- Design and implement ETL/ELT pipelines using Spark SQL and Python within Databricks Medallion architecture
- Develop dimensional data models following star schema methodology with proper fact and dimension table design, SCD implementation, and optimization for analytical workloads
- Optimize Spark SQL and DataFrame operations through appropriate partitioning strategies, clustering, and join optimizations to maximize performance and minimize cost
- Build comprehensive data quality frameworks with automated validation checks, statistical profiling, exception handling, and data reconciliation processes
- Establish CI/CD pipelines incorporating version control and automated testing (unit, integration, and smoke tests, among others)
- Implement data governance standards including row-level and column-level security policies for access controls and compliance requirements
- Create and maintain technical documentation including ERDs, schema specifications, data lineage diagrams, and metadata repositories
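The SCD implementation mentioned in the responsibilities above usually means Type 2 slowly changing dimensions: when a tracked attribute changes, the current dimension row is closed out and a new versioned row is inserted. On Databricks this is normally expressed as a Delta Lake MERGE; the sketch below shows the same core logic in plain Python, with an assumed customer dimension schema chosen purely for illustration:

```python
from datetime import date


def scd2_upsert(dimension, incoming, key, tracked, today=None):
    """Apply a Type 2 slowly-changing-dimension update.

    dimension: list of dicts carrying 'is_current', 'valid_from', 'valid_to'.
    incoming: new source record (dict) keyed by the natural key.
    key: natural key column name.
    tracked: attributes whose change triggers a new version.
    """
    today = today or date.today().isoformat()
    # Find the current version of this entity, if any.
    current = next(
        (r for r in dimension if r[key] == incoming[key] and r["is_current"]),
        None,
    )
    # No tracked attribute changed: nothing to do.
    if current and all(current[a] == incoming[a] for a in tracked):
        return dimension
    # Close out the old version before inserting the new one.
    if current:
        current["is_current"] = False
        current["valid_to"] = today
    new_row = dict(incoming)
    new_row.update({"is_current": True, "valid_from": today, "valid_to": None})
    dimension.append(new_row)
    return dimension


dim = [{"customer_id": "C1", "city": "Jakarta",
        "is_current": True, "valid_from": "2024-01-01", "valid_to": None}]
scd2_upsert(dim, {"customer_id": "C1", "city": "Bandung"},
            key="customer_id", tracked=["city"], today="2025-06-01")
```

After the call, the dimension holds two rows: the Jakarta row closed with `valid_to="2025-06-01"`, and a current Bandung row, preserving full history for analytical queries against the star schema.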
Why Zurich
At Zurich, we like to think outside the box and challenge the status quo. We take an optimistic approach by focusing on the positives and constantly asking, "What can go right?"
We are an equal opportunity employer who knows that each employee is unique - that’s what makes our team so great!
Join us as we constantly explore new ways to protect our customers and the planet.
- Location(s): ID - Head Office - MT Haryono
- Remote working: Hybrid
- Schedule: Full Time
- Recruiter name: Ayu Candra Sekar Rurisa
- Closing date: