Senior Data Engineer | Azure | Snowflake | Fivetran | DBT | ETL / ELT | APIs - ALL REQUIRED!

Rate: £225 per day (Outside IR35)
Contract: Initially 9 months - likely to extend
Start: ASAP
Working model: Remote (depending on location) or hybrid (1 day per week)
Working hours: UK (GMT)
Required: Azure, Fivetran or Airbyte, Snowflake, Python & APIs (non-negotiable)

ℹ️ Only candidates with relevant FIVETRAN or AIRBYTE experience will be considered for this role.

🚨 Non-Negotiables - Please only apply if you have ALL of the following 🚨

Core technologies:
- Snowflake (data warehouse)
- Fivetran or Airbyte (data connectors)
- DBT (data transformation)
- Python (including building custom API connectors)

Environment:
- Strong experience working in Azure-based data environments
- Proven, hands-on experience as a Data Engineer
- Strong experience building and maintaining ETL/ELT pipelines
- Solid understanding of data warehousing and data modelling concepts

Key skills:
- Experience with Fivetran or Airbyte, Snowflake, Python, and APIs
- Strong knowledge of connecting to APIs for data ingestion
- Familiarity with similar tools (Airbyte, Airflow, etc.) is acceptable if you understand the core principles of API-based ingestion
- Strong SQL skills and experience with structured and semi-structured data
- Experience ensuring data quality, consistency, and governance
- Ability to work UK (GMT) hours
- High-level written and spoken English, with the ability to communicate clearly with a global team

Role Overview

We’re looking for an experienced Senior Data Engineer to join a fast-moving team responsible for building and scaling a modern, Azure-native data platform. You’ll play a key role in designing, developing, and maintaining robust data infrastructure that supports analytics, reporting, and business decision-making across the organisation.

This is a hands-on role focused on data pipelines, ETL/ELT processes, data modelling, and data quality - working closely with analysts, data scientists, and software engineers in a collaborative environment.

Key Responsibilities
- Design, develop, and maintain data infrastructure, including data warehouses, data lakes, databases, and ETL pipelines
- Build efficient, scalable ETL/ELT pipelines to ingest data from multiple sources, including API-based ingestion
- Develop custom API connectors in Python where required
- Design and implement data models to support business and analytical requirements
- Define data schemas, structures, and relationships between data entities
- Optimise data models and pipelines for performance, scalability, and reliability
- Collaborate with cross-functional teams to understand data needs and ensure seamless data access and integration
- Support system integration efforts to ensure accurate and consistent data flow between platforms
- Implement and maintain data quality controls, validation processes, and governance frameworks
- Monitor data trends and analyse datasets to identify patterns, risks, and opportunities for improvement
- Document data engineering processes, data flows, and system configurations
- Clearly articulate complex technical concepts to both technical and non-technical stakeholders

Nice to Have
- Experience with modern orchestration tools (e.g., Airflow, Dagster)
- Experience with big data technologies or streaming platforms
- Exposure to analytics, BI, or data science workflows
- Experience working in fast-paced or scale-up environments

If you tick all of the above boxes and can start ASAP, we’d love to hear from you.