
Principal Data Engineer

Prodege
Posted 3 months ago, valid for 17 days
Location

El Segundo, CA 90245, US

Salary

$160,000 - $195,000 per year

Contract type

Full Time

Paid Time Off
Life Insurance

By applying, a SonicJobs account will be created for you. SonicJobs' Terms & Conditions and Privacy Policy will apply.

Sonic Summary

  • Prodege is seeking a Principal Data Engineer to lead the design and modernization of their data platform, focusing on technologies like Snowflake, dbt, and Iceberg.
  • The role requires a minimum of six years of hands-on experience in data engineering, particularly in high-volume environments, and expert-level proficiency in SQL and Python.
  • Responsibilities include architecting Lakehouse solutions, optimizing data pipelines, ensuring data quality and governance, and supporting AI/ML initiatives.
  • The anticipated base salary for this position ranges from $160,000 to $195,000, with final offers dependent on experience and qualifications.
  • Prodege offers a comprehensive benefits package, including flexible PTO, paid holidays, and stock purchase options, while promoting a diverse and inclusive workplace.

Job Description:

Strategic Imperative:

The Principal Data Engineer role is integral to the success of Prodege’s core business by serving as a highly technical, hands-on leader responsible for designing, building, and modernizing the next-generation data platform. The engineer will be at the center of transforming our data architecture across Snowflake, dbt, Iceberg, and real-time processing, ensuring the data foundation is scalable, reliable, and "AI-ready" to power our flagship products (Swagbucks, MyPoints, Insights) and complex business domains (CX, Rewards, Performance Marketing).

 

Prodege:

A cutting-edge marketing and consumer insights platform, Prodege has charted a course of innovation in the evolving technology landscape by helping leading brands, marketers, and agencies uncover the answers to their business questions, acquire new customers, increase revenue, and drive brand loyalty & product adoption. Bolstered by a major investment from Great Hill Partners in Q4 2021 and strategic acquisitions of Pollfish, BitBurst & AdGate Media in 2022, Prodege looks forward to more growth and innovation to empower our partners to gather meaningful, rich insights and better market to their target audiences.

As an organization, we go the extra mile to “Create Rewarding Moments” every day for our partners, consumers, and team. Come join us today!

Primary Objectives: 

  • Architecture & Modernization: Lead the design and implementation of the Lakehouse architecture (Iceberg/Trino) and refactor complex legacy data systems into modern patterns.

  • High-Performance Pipeline Delivery: Design, build, and optimize high-scale, reliable ELT/ETL data pipelines using expert-level SQL, Python, Snowflake, and dbt.

  • Data Quality & Governance: Own the observability, lineage, quality, and governance frameworks for mission-critical datasets across the multi-product ecosystem.

  • AI/ML Enablement: Directly support Data Science and ML Engineering teams by delivering production-grade data sets and optimizing feature engineering pipelines.

  • Engineering Excellence & Mentorship: Elevate the engineering bar across the team, championing best practices and utilizing AI-assisted development tools to accelerate workflow.

 

Qualifications - To perform this job successfully, an individual must be able to perform each job duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

Detailed Job Duties: (typical monthly, weekly, daily tasks which support the primary objectives)

  • Architecture & Modernization

    • Architect, design, and implement components of the next-generation Lakehouse platform, leveraging Iceberg, Trino, and Snowflake.

    • Lead the simplification and refactoring efforts for complex, high-volume legacy pipelines, migrating them to modern, declarative ELT patterns (primarily via dbt).

    • Define and implement best practices for data storage, partitioning, clustering, and schema evolution to optimize performance and reduce cloud compute costs.

  • High-Performance Pipeline Delivery

    • Design, build, and maintain scalable, reliable data pipelines (batch and near real-time) using Python, expert-level SQL, and orchestration tools (e.g., Airflow).

    • Develop and enhance Snowflake data models, dbt models, and high-performance analytical data marts for consumption by BI, reporting, and product applications.

    • Own the entire pipeline lifecycle: requirements gathering → design → build → unit/integration testing → deployment → monitoring → iteration.
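The "unit/integration testing" step in that lifecycle is easiest when transformation logic is kept as pure functions that can be exercised without a warehouse connection. A minimal sketch of the idea (field names like `user_id` and `updated_at` are hypothetical, not Prodege's actual schema):

```python
def dedupe_latest(events):
    """Keep only the most recent record per user_id — a typical ELT staging step."""
    latest = {}
    for e in events:
        uid = e["user_id"]
        if uid not in latest or e["updated_at"] > latest[uid]["updated_at"]:
            latest[uid] = e
    return list(latest.values())

# Unit test of the transform, runnable independently of any pipeline runtime
raw = [
    {"user_id": 1, "updated_at": "2024-01-01", "points": 10},
    {"user_id": 1, "updated_at": "2024-01-02", "points": 25},
    {"user_id": 2, "updated_at": "2024-01-01", "points": 5},
]
clean = dedupe_latest(raw)
assert len(clean) == 2
assert [r["points"] for r in clean if r["user_id"] == 1] == [25]
```

In a dbt-centric stack the same logic would typically live in SQL models, with Python reserved for orchestration and harder-to-express transforms.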

  • Data Quality & Governance

    • Implement and enhance data lineage, quality checks (via dbt tests/Great Expectations), observability, and alerting across core data pipelines.

    • Collaborate with Data Governance and Security teams to enforce data access controls, PII handling, and retention policies.

    • Continuously monitor and tune pipeline performance to meet strict data SLAs (Service Level Agreements) and SLOs (Service Level Objectives).
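The quality checks named above (dbt tests, Great Expectations) boil down to assertions over datasets. A plain-Python sketch of the two most common checks, mirroring dbt's built-in `not_null` and `unique` tests (column names are illustrative only):

```python
from collections import Counter

def check_not_null(rows, column):
    """Return indices of rows where `column` is missing or None (dbt's not_null test, in spirit)."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_unique(rows, column):
    """Return values of `column` appearing more than once (dbt's unique test, in spirit)."""
    counts = Counter(r.get(column) for r in rows)
    return [v for v, n in counts.items() if n > 1]

rows = [
    {"order_id": "a1", "amount": 9.99},
    {"order_id": "a1", "amount": 4.50},   # duplicate key -> fails uniqueness
    {"order_id": "b2", "amount": None},   # null amount -> fails not_null
]
assert check_unique(rows, "order_id") == ["a1"]
assert check_not_null(rows, "amount") == [2]
```

In practice these run inside the pipeline and feed the alerting/observability layer rather than being invoked by hand.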

  • AI/ML Enablement

    • Work closely with Data Science and ML Engineering teams to understand and enable their training and serving data needs.

    • Design and optimize data feeds for high-volume Machine Learning workloads, including the development of feature stores and model-serving pipelines.

    • Ensure data consistency and integrity for critical AI-driven applications across consumer and business products.
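A core correctness concern for the training-data feeds mentioned above is point-in-time lookup: a feature served for training must reflect only what was known at the label's timestamp, or the model leaks future information. A minimal sketch (the `feature_as_of` helper and its data shape are hypothetical, not a specific feature-store API):

```python
import bisect

def feature_as_of(history, ts):
    """Return the latest feature value recorded at or before `ts`.

    `history` is a list of (timestamp, value) pairs sorted by timestamp.
    Looking up the as-of value prevents label leakage when assembling training sets.
    """
    times = [t for t, _ in history]
    i = bisect.bisect_right(times, ts)
    return history[i - 1][1] if i else None

hist = [("2024-01-01", 3), ("2024-02-01", 7), ("2024-03-01", 12)]
assert feature_as_of(hist, "2024-02-15") == 7     # mid-Feb sees the Feb 1 value
assert feature_as_of(hist, "2023-12-31") is None  # before any record exists
```

Production feature stores implement the same semantics at scale, with the serving path returning only the current value.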

  • Engineering Excellence & Mentorship

    • Actively use AI-assisted development tools (like GitHub Copilot, Gemini, etc.) to accelerate coding, generate documentation, draft tests, and simplify complex spec generation.

    • Set high technical standards for code quality, testing, and documentation within the Data Engineering team.

    • Provide technical leadership and mentorship to junior and mid-level engineers, running design reviews and driving consensus on architectural trade-offs.

What does SUCCESS look like?

  • Platform Stability: Core data pipelines are highly reliable, meet defined SLAs/SLOs, and data quality incidents are rare, quickly detected, and resolved.

  • Modernization Achieved: Significant progress has been made in migrating key legacy pipelines to the modern Lakehouse architecture (Iceberg/Trino/Snowflake/dbt).

  • AI Velocity: The team effectively leverages AI-assisted development tools, resulting in a demonstrable increase in delivery velocity and code quality (e.g., faster time-to-production, reduced defect rate).

  • Data Democratization: High-quality, governed datasets are easily discoverable and accessible to Data Scientists, Analysts, and downstream product applications.

  • Technical Influence: The engineer is recognized as a technical leader, driving architectural decisions and raising the collective skill level of the data organization.

 

The MUST Haves: (ex: job cannot be done without these skills/competencies, education, experience, certifications, licenses)

  • Bachelor's degree in Computer Science or an equivalent area of study, or equivalent years of relevant experience.

  • Six or more (6+) years of hands-on experience in data engineering, ideally in multi-product, high-volume, or consumer-scale environments.

  • Expert-level proficiency in SQL, strong Python, and extensive experience building robust ETL/ELT workflows.

  • Strong experience with Snowflake and dbt (Data Build Tool) for data transformation and analytics engineering.

  • Proven experience with modern data modeling techniques (e.g., Kimball, Data Vault, semantic layers) and performance tuning of large queries.

  • Experience with Iceberg, Trino, or similar open table format/query engine ecosystems in a Lakehouse architecture.

  • Ability to navigate and refactor complex, interconnected data systems with an Ownership Mindset (you build it, you run it).

 

The Nice to Haves: (preferred additional skills/competencies, education, experience, certifications, licenses)

  • Experience with Kafka, Kinesis, or Apache Flink for streaming ingestion and event-driven data architectures.

  • Familiarity with feature stores, model-serving pipelines, and MLOps practices.

  • Professional experience using AI-driven development tools (e.g., GitHub Copilot, etc.) for coding, testing, or documentation generation.

  • Prior experience in a consumer rewards, survey, or performance marketing ecosystem.

Pay Transparency:

The anticipated base salary range for this position is $160,000 to $195,000. The final salary offered to a successful candidate will depend on several factors, which may include, but are not limited to: the type and length of experience within the job and within the industry, the knowledge and skills required for the position, education, and training. Prodege is a multi-state employer, and final compensation within this range could be impacted by work location. Please note that the compensation details listed in US role postings reflect the base salary only and do not include bonus, equity, or benefits.

Prodege Benefits:

Prodege offers a comprehensive benefits package to US full-time employees, including medical, dental, vision, short-term and long-term disability (STD/LTD), and basic life insurance. Employees receive flexible PTO, as well as paid sick leave prorated based on hire date. US employees have eight paid holidays throughout the calendar year. Employees also receive an option to purchase shares of Company stock commensurate with their position, which vests over four years.

Equal Employment Opportunity Statement

At Prodege, we are committed to creating a diverse and inclusive environment. We are proud to be an Equal Opportunity Employer and do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, disability, veteran status, or any other characteristic protected by law. We encourage individuals of all backgrounds to apply.

Employers will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of the Fair Chance Initiative for Hiring Ordinance (FCIHO).



