- Take ownership of data strategy and architecture from the ground up.
- Flexible, remote-first working with visits to a high-impact biotech lab.
- Cutting-edge work combining AI, genomics, and diagnostics to save lives.
- A Master’s or PhD in Computer Science, Data Science, Bioinformatics, or a related field, or equivalent experience
- 4+ years in a data engineering or similar role, building and maintaining data pipelines
- Experience across multiple cloud platforms (rather than specialising in one), such as AWS, GCP, and Azure
- Strong experience with Python and SQL
- Experience working with data lakes, warehouses, and large-scale datasets
- Understanding of data privacy and security principles
- Exposure to life sciences, genomics, or regulated medical environments (e.g. those governed by ISO 13485)
- Comfortable working across both on-premises and cloud-based data systems
- Clear communicator, able to translate complex data concepts to cross-functional teams
- DevOps tooling such as Docker, Kubernetes, and CI/CD pipelines
- Big data tools (Spark, Hadoop), ETL workflows, or high-throughput data streams
- Genomic data formats and tools
- Cold and hot storage management, ZFS/RAID systems, or tape storage
- AI/LLM tools to accelerate data workflows