CVS Health is building a world-class health solutions company. The Senior Data Engineer - GCP role focuses on creating scalable data pipelines, developing microservices, and collaborating across teams for impactful digital projects in healthcare technology.
Job Responsibilities:
Design, develop, and optimize data pipelines on GCP using BigQuery, Dataflow, and Pub/Sub
Build and maintain microservices to support data processing and analytics
Develop real-time and batch data ingestion frameworks using Kafka
Work with NoSQL databases (e.g., Firestore, Cassandra, MongoDB) to support high-throughput data applications
Ensure data integrity, governance, and security best practices in the data pipeline
Optimize performance, scalability, and cost efficiency of data solutions
Collaborate with data scientists, analysts, and software engineers to deliver end-to-end data products
Implement monitoring, logging, and alerting for data pipelines and services
Participate in agile development processes, attend team ceremonies, and contribute to project planning
Explore new tools, frameworks, and techniques to improve digital solutions and drive innovation
Create and maintain technical documentation while sharing knowledge with team members
Requirements:
5+ years of overall experience in large-scale software development, including 2+ years in a technical lead role
5+ years of experience in data engineering with a focus on microservices-based data solutions
5+ years of designing and developing highly scalable microservices, web services, and REST APIs
3+ years of hands-on experience with GCP services like BigQuery, Dataflow, Pub/Sub, and Cloud Storage
3+ years of experience with containerization (Docker, Kubernetes) and CI/CD for data pipelines
3+ years of experience developing within an agile development environment
Nice to have:
Experience designing and developing in a Test-Driven Development environment
Proficiency in Python or Java for data processing
Experience with monitoring and logging tools like Prometheus, Stackdriver, or ELK
Understanding of distributed computing and data partitioning strategies
Strong problem-solving skills and ability to work in an agile environment
Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes, etc.)
Experience with Terraform or Infrastructure as Code (IaC)
Familiarity with Apache Beam or Flink for large-scale data processing
Knowledge of machine learning pipelines and integration with data engineering workflows
Experience in performance tuning and cost optimization in cloud environments
Experience with NewSQL databases
What we offer:
Affordable medical plan options, 401(k) plan (including matching company contributions), and employee stock purchase plan
No-cost programs for all colleagues including wellness screenings, tobacco cessation and weight management programs, confidential counseling, and financial coaching
Paid time off, flexible work schedules, family leave, dependent care resources, colleague assistance programs, tuition assistance, retiree medical access