#310 Mid-Level DevOps Engineer with Databricks Experience
We are seeking a talented Mid-Level DevOps Engineer with specialized expertise in Databricks to join our dynamic team. The ideal candidate will have a strong foundation in DevOps principles and practices, coupled with extensive experience working with Databricks platforms. As a key team member, you will play a pivotal role in designing, implementing, and maintaining robust infrastructure solutions to support our organization's evolving needs.
Responsibilities:
- Collaborate with cross-functional teams to understand infrastructure requirements and design scalable, reliable, and secure solutions leveraging Databricks technology.
- Implement and maintain continuous integration/continuous deployment (CI/CD) pipelines for deploying Databricks applications and services.
- Manage and optimize Databricks clusters to ensure high performance, reliability, and cost-effectiveness.
- Automate infrastructure provisioning, configuration management, and monitoring using tools such as Terraform, Ansible, and Prometheus.
- Develop and maintain scripts and tools for monitoring, logging, and troubleshooting Databricks environments.
- Implement security, compliance, and data governance best practices within Databricks environments.
- Collaborate with data engineers and data scientists to streamline data workflows and optimize data processing pipelines.
- Stay abreast of emerging technologies and industry trends related to Databricks and DevOps practices.
- Provide technical guidance and support to junior team members as needed.
Requirements:
- MUST HAVE hands-on experience working with AWS services such as EC2, S3, VPC, EKS, and MWAA.
- Experience designing, developing, and implementing data engineering solutions on the Databricks platform.
- Demonstrated expertise in deploying and managing Databricks clusters, notebooks, and jobs.
- Experience with infrastructure as code (IaC) tools such as Terraform and configuration management tools like Ansible.
- Proficiency in scripting languages such as Python, Bash, or PowerShell.
- Familiarity with containerization technologies such as Docker and container orchestration platforms like Kubernetes.
- Solid understanding of networking, security, and IAM concepts in cloud environments.
- Excellent problem-solving skills and the ability to troubleshoot complex issues in distributed systems.
Education/Experience:
- Bachelor’s degree in Computer Science, Software Engineering, Computer Engineering, or a related engineering discipline
- Five (5) years of total DevOps experience, including three (3) years working with AWS services such as EC2, S3, RDS, CloudFormation, Lambda, and IAM, and at least one (1) to two (2) years of hands-on experience with Databricks.
- Strong communication skills and the ability to effectively collaborate with cross-functional teams.
- Experience in developing and deploying scalable cloud solutions and services for the federal sector (required)
- HHS/CMS AWS Cloud experience (preferred)
- Possessing one (1) professional and one (1) specialty AWS certification is desirable.
Preferred Qualifications:
- Databricks certification(s) are a plus.
- Experience with big data technologies like Apache Spark, Hadoop, or Kafka.
- Familiarity with monitoring and logging tools such as Prometheus, Grafana, ELK Stack, or Splunk.
- Knowledge of software development methodologies and version control systems (e.g., Git).
General:
- Strong organizational and communication skills
- Ability to manage multiple tasks and prioritize workload based on the needs of the client
- Ability to deal with ambiguity and frequent changes in priorities
- Ability to work with minimal supervision
- Excellent technical writing skills and proven experience in systems with complex requirements
- Excellent teamwork and interpersonal skills with the ability to team with others to meet project objectives
- Understanding of the system development lifecycle