The work:
- Design and develop data-centric microservices using Python, Java, C#, Spring Boot, Hibernate, REST, and gRPC
- Build scalable AWS based data services leveraging SQS, SNS, Lambda, SWF/Step Functions, and other cloud-native components
- Participate in data architecture and solution design reviews, multi-module planning, and long-term data strategy
- Conduct research, build prototypes, and perform feasibility analyses for new data features, modernization, and cloud transformations
- Develop, optimize, and maintain PostgreSQL-backed data services, schemas, and performance-tuned queries
- Build and maintain CI/CD pipelines using Jenkins, Maven, Git, and GitLab
- Containerize and deploy data services to EKS using Docker and Kubernetes
- Write unit, integration, and system tests using JUnit 5; support data validation, release testing, and beta cycles
- Troubleshoot performance, reliability, and integration issues across distributed data systems
- Collaborate with architects, PMs, and customers to refine data requirements, perform demos, coordinate installations, and ensure alignment
- Document data APIs, architecture diagrams, ETL/ELT processes, deployment procedures, and runbooks
- Provide technical leadership, mentoring, and knowledge sharing within the data engineering team
- Participate in SAFe Agile ceremonies and contribute to continuous improvement of data practices
Here's what you need:
- 8+ years of professional experience as a Data Engineer or in a similar role building data-centric systems
- At least 4 years of hands-on experience with AWS cloud solutions, or with Java, C#, or Python alongside Spring Boot and Hibernate, for data-driven services
- Hands-on experience with AWS data and messaging services (SQS, SNS)
- Strong understanding of microservices, event-driven architectures, and relational databases (PostgreSQL preferred)
- Proficiency with JUnit (JUnit 5 preferred) and automated testing frameworks
- Experience in Agile or SAFe development environments
- Comfortable working directly with customers for data requirements, demos, data onboarding, and deployment coordination
Preferred Qualifications:
- Experience with CI/CD tools: Jenkins, Maven, Git, GitLab, Jira
- Experience with containerization technologies: EKS with Docker and Kubernetes
- Experience building and consuming RESTful APIs; familiarity with gRPC
- Experience with AWS workflow and compute services such as SWF/Step Functions and Lambda
- Background in PostgreSQL schema design, optimization, and performance tuning for large datasets
- Knowledge of Python or ReactJS for internal data tools, pipelines, or integrations
- Experience supporting data-intensive production systems or mission-critical environments
- Prior technical lead experience or participation in data architecture reviews
- Exposure to large-scale data processing, analytics pipelines, ETL/ELT operations, or high-volume distributed systems
- Bachelor's degree in Computer Science, Software Engineering, or a related field
As required by local law, Accenture Federal Services provides reasonable ranges of compensation for hired roles based on labor costs in the states of California, Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, New Jersey, New York, Washington, Vermont, the District of Columbia, and the city of Cleveland. The base pay range for this position in these locations is shown below. Compensation for roles at Accenture Federal Services varies depending on a wide array of factors, including but not limited to office location, role, skill set, and level of experience. Accenture Federal Services offers a wide variety of benefits. You can find more information on benefits here. We accept applications on an ongoing basis and there is no fixed deadline to apply.
