
Data Integration Engineer

Bridgehead IT
Posted 5 months ago, valid for 16 days
Location

San Antonio, TX 78288 (Bexar County)

Salary

Competitive

Contract type

Full Time


Sonic Summary

  • The Data Integration Engineer will be responsible for designing, building, and operating scalable data pipelines and warehouse models for analytics and reporting.
  • Candidates should have a minimum of 4 years of experience in Data Warehousing and Data Engineering, with strong SQL skills and familiarity with ETL tools like Azure Data Factory and Fivetran.
  • The role involves integrating diverse data sources and optimizing workflows for performance and reliability, as well as managing data warehouse platforms such as Snowflake and Azure Synapse.
  • The position offers a competitive salary; a specific amount is not stated in the job description.
  • The ideal candidate should be able to work collaboratively with stakeholders and provide technical guidance to team members.

Position Summary:

The Data Integration Engineer will own the design, build, and operation of scalable data pipelines and warehouse models that power analytics and operational reporting. You’ll integrate diverse sources (databases, APIs, SaaS apps, flat files) and engineer performant ELT/ETL in the cloud. You’ll collaborate closely with analytics, app dev, and business stakeholders to turn requirements into trusted, production-grade datasets.



Duties/Responsibilities:

Data Pipeline Development

  • Design and implement scalable ETL/ELT pipelines (batch and near-real-time) to ingest from databases, APIs, SaaS, and flat files into Snowflake, Azure Synapse, or similar.
  • Build integrations using tools such as Azure Data Factory (ADF), Fivetran, CData Sync, and Boomi; extend with custom code where needed.
  • Write clean, maintainable code (primarily SQL, plus Python or PHP when required for custom connectors, transformations, or microservices).
  • Optimize workflows for performance, reliability, and scalability (partitioning, parallelism, incremental loads, CDC, idempotency, retry/rollback).
  • Manage data warehouse platforms such as Azure Synapse and Snowflake.
  • Troubleshoot data pipeline failures and errors.

Data Warehouse Management

  • Develop and maintain data models, schemas, views, and stored procedures; manage staging/core/mart layers and source-to-target mappings.
  • Implement data quality validation and monitoring (null/duplicate/range checks, schema drift detection, reconciliation).
  • Respond to and troubleshoot identified errors.
  • Apply warehouse best practices (clustering/partitioning, cost governance, role-based access, tagging/lineage).

SQL & Python Support

  • Write and refactor complex SQL queries (window functions, CTE chains) and tune performance (explain plans, pruning, join strategies).
  • Create and/or troubleshoot Python notebooks (packaging, scheduling, secrets management) and their integration into pipelines.

Collaboration & Documentation

  • Partner with data analysts and stakeholders to clarify requirements and acceptance criteria, and translate them into source-to-target mappings and technical designs.
  • Maintain technical specs, data flow diagrams, and operational procedures; contribute to standards and reusable patterns.

 

Qualifications:

The ideal candidate will possess the following abilities, attributes, experience, and skills:

  • 4+ years’ experience in Data Warehousing and Data Engineering.
  • Strong experience with Data Warehouse as a Service (DWaaS) platforms (Snowflake, BigQuery, etc.).
  • Strong SQL skills and ability to write queries and data extracts.
  • Experience working with different database types.
  • Experience working with and troubleshooting different ETL tools such as Azure Data Factory, Boomi, Fivetran, and CData Sync.
  • Strong understanding of DWaaS database architecture and ability to design and build optimal data processing pipelines.
  • Demonstrated skill in designing highly scalable ETL processes with complex data transformations, data formats including data cleansing, data quality assessment, error handling and monitoring.
  • Design, develop, manage, and monitor complex ETL data pipelines, and support them through all environments.
  • Experience with Python/JavaScript or other scripting languages is a plus.
  • Provide support and troubleshooting for data platforms.
  • Manage and prioritize multiple assignments.
  • Ability to work independently and as part of a team.
  • Provide technical guidance and mentoring for other team members.
  • Good communication and cross-functional collaboration skills.

 

Bridgehead IT is proud to be an equal opportunity workplace and is an affirmative action employer.


Location

San Antonio, Texas

Department

Systems Development

Employment Type

Full-Time

Minimum Experience

Experienced



