London · Full-time · Mid-Senior level
Salary: Not specified
Sponsorship
15% more than your current base salary

Job Description

About Ripjar

Ripjar specialises in the development of software and data products that help governments and organisations combat serious financial crime. Our technology is used to identify criminal activity such as money laundering and terrorist financing, enabling organisations to enforce sanctions at scale and to help combat rogue entities and state actors.

Data infuses everything Ripjar does. We work with a wide variety of datasets at all scales, including an ever-growing archive of billions of news articles covering most languages and going back over 30 years, sanctions and watchlist data provided by governments, and vast organisation and ownership datasets.

About The Role

We see a Data Engineer as a software engineer who specialises in distributed data systems. You'll join the Data Engineering team, whose prime responsibility is the development and operation of the Data Collection Hub: a platform that ingests data from many sources, processes and enriches it, and distributes it to multiple downstream systems.

We're looking for someone with 2+ years of industry experience building and operating production software who enjoys working across data pipelines, distributed systems, and operational reliability.

What You'll Do

- Engineer distributed ingestion services that reliably pull data from diverse sources, handle messy real-world edge cases, and deliver clean, well-structured outputs to multiple downstream products
- Build high-throughput processing components (batch and/or near-real-time) with a focus on performance, scalability, and predictable cost, using strong profiling and measurement practices
- Design and evolve data contracts (schemas, validation rules, versioning, backward compatibility) so downstream teams can build with confidence
- Own production quality: write maintainable code and strong unit/integration tests, and add the observability you need (metrics/logs/tracing) to diagnose issues quickly
- Improve platform reliability by hardening pipelines against partial failures, retries, rate limits, data drift, and infrastructure issues, then codify those learnings into better tooling and guardrails
- Contribute to CI/CD and developer experience: faster builds, better test signal, safer releases, and automated operational checks
- Participate in design reviews, code reviews, incident retrospectives, and iterative delivery, making pragmatic trade-offs and documenting them clearly

Technology Stack

- Languages: predominantly Python and Node.js
- Distributed/data platforms: HDFS, HBase, Spark, plus increasing use of Kubernetes and cloud services
- Storage/search: MongoDB, OpenSearch
- Orchestration: Airflow, Dagster, NiFi
- Tooling: GitHub, GitHub Actions, Rundeck, Jira, Confluence
- Deployment/config: Ansible (physical), Terraform / Argo CD / Helm (Kubernetes)
- Development environment: MacBook (typical)

Requirements

Essential:
- 2+ years building and operating production software systems
- Fluency in at least one programming language (Python/Node.js a plus)
- Experience debugging moderately complex systems and improving reliability/performance
- Strong fundamentals: data structures, testing, version control, Linux basics

Nice to have:
- Spark/PySpark experience
- Hadoop ecosystem exposure (HDFS/HBase)
- Workflow orchestration (Airflow/Dagster/NiFi)
- Search/indexing (OpenSearch, MongoDB)
- Kubernetes and infrastructure-as-code
- Degree in Computer Science or another numerate subject

Benefits

- Competitive salary, dependent on experience
- 25 days annual leave plus your birthday off, in addition to bank holidays, rising to 30 days after 5 years of service
- Remote working
- Private family healthcare
- 35-hour working week
- Employee Assistance Programme
- Company contributions to your pension
- Pension salary sacrifice
- Enhanced maternity/paternity pay
- The latest tech, including a top-of-the-range MacBook Pro
