Job Details
Job Location: Medellín, Antioquia 050021
Position Type: Full Time
Salary Range: $2,700.00 - $3,700.00 Salary/month
Travel Percentage: None
Job Category: Information Technology

Who We Are
Cadex Solutions Corporation is an international holding company formed by Trivest Partners LP to build the premier provider of commercial order-to-cash management solutions. With a history spanning nearly 100 years, Cadex is uniquely positioned with in-depth experience that builds relationships alongside results. Our team of industry experts brings innovation and data insight, improves your processes with hands-on help, and provides custom solutions based on specific needs. Cadex has approximately 800 employees serving over 1,000 clients across all industries from locations including the United States, Colombia, Brazil, Romania, Italy, India, Singapore, and South Africa.

Since 2019, Cadex has been building a strong portfolio of ARM companies, including:
- A.G. Adjustments, formed in 1974 and headquartered in Melville, NY
- D&S Global Solutions, formed in 1997 and fully remote
- ABC-Amega, formed in 1929 and headquartered in Buffalo, NY
- TranSubro, formed in 2012 and headquartered in Oceanside, NY
- DAL, formed in 1974 and headquartered in Clifton Heights, PA
- Insurance Recovery Group, founded in 1994 and headquartered in Marlborough, MA
- Receivables Control Corporation, founded in 1970 and headquartered in Maple Grove, MN

Job Title: Team Leader ETL Python Developer
Location: Remote - Colombia (only)
Department: Enterprise Data & Analytics / Technology
Reports to: Chief Technology Officer

About the Role
We are seeking a Team Leader ETL Python Developer to architect and build high-performance data pipelines that power critical business insights. Moving beyond traditional ETL, you will design scalable Python-based frameworks to process and transform complex datasets, optimize query performance, and implement rigorous data quality controls.
This is a hands-on architectural role requiring deep expertise in Python, SQL, and modern data engineering practices.

Key Responsibilities
- Design, build, and optimize complex ETL/ELT pipelines capable of handling massive datasets.
- Define the technical roadmap for migrating legacy processes to modern, scalable Python frameworks, and refactor legacy ETL processes for scalability and improved performance.
- Write and review clean, modular, production-ready Python code to support data ingestion, transformation, and integration.
- Leverage libraries such as Pandas for high-performance data validation, cleansing, and transformation.
- Implement rigorous frameworks for data validation and quality control, ensuring accuracy and consistency across petabyte-scale environments.
- Lead code reviews, enforce software engineering best practices (testing, CI/CD, version control), and provide direct technical mentorship to junior and senior developers.
- Partner with Data Architects, Analysts, and Product stakeholders to define data requirements and optimize query performance against relational/NoSQL databases (MySQL, BigQuery, etc.).
- Oversee monitoring, logging, and alerting strategies to ensure 99.9% pipeline reliability.
- Lead root-cause analysis and troubleshooting for critical production incidents, and contribute to continuous improvement efforts.
- Take full ownership of project delivery timelines, guiding the team in breaking down complex epics into manageable sprints while maintaining velocity and quality.

Required Qualifications:
- 5+ years of professional experience in Data Engineering, with at least 2 years in a lead or senior capacity (Team Lead, Tech Lead, or Manager).
- Advanced proficiency in Python: demonstrated ability to write highly optimized, maintainable code. Deep experience with Pandas for data manipulation is mandatory.
- Extensive experience designing and maintaining ETL/ELT workflows handling high-volume, large-scale datasets.
- Strong expertise in SQL and database systems (MySQL, PostgreSQL, or NoSQL variants).
- Proven experience building or managing REST APIs / microservices to support data integration.
- Deep understanding of the software development lifecycle, including CI/CD pipelines, unit/integration testing, and Git workflows.

Preferred Qualifications:
- Working knowledge of JavaScript for building internal tools or custom API integrations.
- Experience with cloud platforms (GCP or Azure).
- Familiarity with containerization and orchestration (Docker, Kubernetes).
- Knowledge of modern data warehousing solutions (BigQuery, Snowflake, or
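For candidates wondering what "Pandas-based data validation, cleansing, and transformation" looks like in practice, here is a minimal sketch of one such pipeline step. All column names, rules, and data are illustrative assumptions, not Cadex's actual schema or codebase.

```python
import pandas as pd

def validate_and_clean(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative ETL step: validate and cleanse an invoice-like dataset.

    Columns and rules here are hypothetical examples for discussion only.
    """
    # Drop exact duplicate rows (e.g., from upstream re-deliveries)
    df = df.drop_duplicates()

    # Enforce types: coerce bad values to NaN/NaT instead of failing the batch
    df = df.assign(
        amount=pd.to_numeric(df["amount"], errors="coerce"),
        invoice_date=pd.to_datetime(df["invoice_date"], errors="coerce"),
    )

    # Quality control: keep only rows with all required fields present
    # and a non-negative amount
    valid = df.dropna(subset=["amount", "invoice_date"])
    valid = valid[valid["amount"] >= 0]

    return valid.reset_index(drop=True)

# Tiny demonstration with deliberately dirty input
raw = pd.DataFrame({
    "amount": ["100.50", "oops", "-3", "42"],
    "invoice_date": ["2024-01-05", "2024-01-06", "2024-01-07", "not-a-date"],
})
clean = validate_and_clean(raw)
print(len(clean))  # only the fully valid row survives
```

In a production pipeline the rejected rows would typically be routed to a quarantine table with logging and alerting rather than silently dropped, in line with the monitoring responsibilities described above.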
