
Cloud Technical Architect / Data DevOps Engineer

Hewlett Packard Enterprise (https://www.hpe.com/)

Location:
United Kingdom, Bristol

Category:
IT - Software Development

Contract Type:
Employment contract

Salary:

Not provided

Job Description:

The role involves designing, implementing, and optimizing scalable Big Data and cloud solutions while collaborating with internal and external teams. It requires expertise in a range of technologies including AWS, Kubernetes, containerization, and Infrastructure as Code. The position focuses on delivering client outcomes and technical excellence, aligned with HPE's culture of innovation and inclusion.

Job Responsibility:

  • Detailed development and implementation of scalable clustered Big Data solutions, with a specific focus on automated dynamic scaling, self-healing systems
  • Participating in the full lifecycle of data solution development, from requirements engineering through to continuous optimisation engineering and all the typical activities in between
  • Providing technical thought-leadership and advisory on technologies and processes at the core of the data domain, as well as data domain adjacent technologies
  • Engaging and collaborating with both internal and external teams, acting as a confident participant as well as a leader
  • Assisting with solution improvement activities driven either by the project or service
  • Supporting the design and development of new capabilities: preparing solution options, investigating technologies, designing and running proofs of concept, providing assessments and advice, and producing high-level and low-level design documentation
  • Applying cloud engineering capability to leverage public cloud platforms through automated build processes deployed with Infrastructure as Code
  • Providing technical challenge and assurance throughout the development and delivery of work
  • Developing re-usable common solutions and patterns to reduce development lead times, improve commonality, and lower Total Cost of Ownership
  • Working independently and/or within a team using a DevOps way of working

Requirements:

  • An organised and methodical approach
  • Excellent time keeping and task prioritisation skills
  • An ability to provide clear and concise updates
  • An ability to convey technical concepts to all levels of audience
  • Data engineering skills – ETL/ELT
  • Technical implementation skills – application of industry best practices & design patterns
  • Technical advisory skills – experience in researching technological products / services with the intent to provide advice on system improvements
  • Experience of working in hybrid environments with both classical and DevOps delivery approaches
  • Excellent written & spoken English skills
  • Excellent knowledge of Linux operating system administration and implementation
  • Broad understanding of the containerisation domain and adjacent technologies/services, such as Docker, OpenShift, Kubernetes, etc.
  • Infrastructure as Code and CI/CD paradigms and systems such as: Ansible, Terraform, Jenkins, Bamboo, Concourse etc.
  • Monitoring, utilising products such as Prometheus, Grafana, the ELK stack, Filebeat, etc.
  • Observability and SRE practices
  • Big Data solutions (ecosystems) and technologies such as: Apache Spark and the Hadoop Ecosystem
  • Edge technologies e.g. NGINX, HAProxy etc.
  • Excellent knowledge of YAML or similar languages
  • Experienced in Cloud native technologies in AWS
  • Experienced in deploying IaaS/PaaS in Multi Cloud Environments
  • Experienced in Cloud and Infrastructure Engineering building and testing new capabilities, and supporting the development of new solutions and common templates
  • Experienced in acting as a bridge from the infrastructure through to user-facing systems

Nice to have:

  • Awareness of JupyterHub
  • MinIO or similar S3-compatible storage technology
  • Trino / Presto
  • RabbitMQ or other common queue technology e.g. ActiveMQ
  • NiFi
  • Rego
  • Familiarity with code development and shell scripting, e.g. in Python, Bash, etc.
  • Experienced with Kubernetes and containers
  • Experienced in the use of Automation tools e.g. Terraform, Ansible, Foreman, Puppet and Python
  • Experienced in different flavours of Linux platform and services
What we offer:
  • Extensive social benefits
  • Flexible working hours
  • Competitive salary
  • Shared values
  • Equal opportunities
  • Work-life balance
  • Evolving career opportunities
  • Comprehensive suite of benefits that supports physical, financial and emotional wellbeing

Additional Information:

Job Posted:
March 20, 2025

Employment Type:
Full-time
Work Type:
On-site work