I am currently working on an exciting Data Engineer position for a banking client in Europe who needs someone experienced in Python/PySpark and Azure.

Key Responsibilities
- Design, develop, and optimize scalable data pipelines and ETL processes using Azure Data Services and Databricks.
- Implement data transformations and analytics using PySpark and Python.
- Configure and manage Databricks Unity Catalog for secure data governance and access control.
- Orchestrate workflows using Apache Airflow for scheduling and automation.
- Collaborate with cross-functional teams to deliver high-quality data solutions in an Agile environment.
- Set up and maintain CI/CD pipelines for data workflows and deployments.
- Manage version control and branching strategies using Git.
- Write and execute unit tests to ensure code quality and reliability.
- Monitor, troubleshoot, and optimize data processes for performance and cost efficiency.
Apply now