About the Role
The Data Acquisition Team is the entry point to WEX’s Data-as-a-Service (DaaS) platform—responsible for ingesting, validating, and orchestrating raw data from dozens of internal systems and third-party providers.
As a Software Engineer 2 - Data Acquisition (Data Engineer), you’ll play a key role in designing and building robust, scalable, and extensible pipelines that feed the entire data ecosystem at WEX. You’ll work across multiple data domains and ingestion patterns—batch, streaming, and event-driven—while ensuring quality, performance, and governance are embedded in every step.
WEX is undergoing a data platform transformation—and this team builds the foundation. Every pipeline you create contributes directly to powering analytics, automation, and product intelligence across all business domains.
If you’re passionate about scaling data platforms from the ground up, this is your chance to help shape how WEX ingests and leverages its most valuable asset: data.
What You’ll Do
Design and implement moderately complex ingestion pipelines that integrate with internal and external systems.
Develop reusable components for data transformation, validation, and logging.
Contribute to both batch and streaming ingestion flows, ensuring scalability and maintainability.
Support platform observability by enhancing monitoring, alerting, and error-handling features.
Participate in design discussions, code reviews, and incident investigations.
Partner with data consumers to understand requirements and translate them into ingestion solutions.
Improve automation and testing coverage to reduce manual effort and increase pipeline reliability.
What You Bring
B.Sc. in Computer Science, Engineering, or related technical field (M.Sc. preferred). Equivalent experience considered.
2–4 years of experience as a data or software engineer, ideally working with data pipelines or distributed systems.
Solid programming skills in Python, Java, or Scala, with the ability to write maintainable, production-ready code.
Hands-on experience with ETL/ELT pipelines, schema management, and data modeling concepts.
Familiarity with streaming (e.g., Kafka, Kinesis, Spark Streaming) or batch frameworks.
Understanding of CI/CD, version control, and testing practices.
Exposure to observability practices such as logging, metrics, and tracing.
Strong sense of accountability and eagerness to take ownership of assigned deliverables.
