Senior Hadoop Administrator

  • Full-time
  • Department: IT

Company Description

PubMatic is the automation solutions company for an open digital media industry. Featuring the leading omni-channel revenue automation platform for publishers and enterprise-grade programmatic tools for media buyers, PubMatic’s publisher-first approach enables advertisers to access premium inventory at scale. Processing nearly one trillion ad impressions per month, PubMatic has created a global infrastructure to activate meaningful connections between consumers, content and brands. Since 2006, PubMatic’s focus on data and technology innovation has fueled the growth of the programmatic industry as a whole. Headquartered in Redwood City, California, PubMatic operates 11 offices and six data centers worldwide.

Job Description

PubMatic's Data Center team is looking for a Senior Hadoop Administrator who will assist with the design, implementation, and ongoing support of the company's Big Data platforms.

As a Senior Hadoop Administrator, you will install, configure, and maintain multiple Hadoop clusters. You will be responsible for the design and architecture of the Big Data platform, work with development teams to optimize Hadoop services, and deploy code to multiple environments.

Major Duties and Responsibilities:

  • Manage large-scale Hadoop cluster environments, handling all Hadoop environment builds, including design, capacity planning, cluster setup, performance tuning, and ongoing monitoring.
  • Evaluate and recommend systems software and hardware for the enterprise, including capacity modeling.
  • Architect our Hadoop infrastructure to meet changing requirements for scaling, reliability, performance, and manageability.
  • Work with core production support personnel in IT and Engineering to automate deployment and operation of the infrastructure. Manage, deploy, and configure infrastructure with Ansible or other automation toolsets.
  • Create metrics and measures of utilization and performance.
  • Perform capacity planning and implement new and upgraded hardware, software releases, and storage infrastructure.
  • Monitor the Linux community and report important changes and enhancements to the team.
  • Work well with a global team of highly motivated and skilled personnel; interaction and dialogue are essential in this dynamic environment.
  • Research and recommend innovative and, where possible, automated approaches to system administration tasks. Identify approaches that leverage our resources, provide economies of scale, and simplify remote/global support.

Requirements:

  • 9 years of professional experience supporting medium- to large-scale production Linux environments.
  • 5 years of professional experience working with Hadoop (HDFS & MapReduce) and the related technology stack.
  • A deep understanding of Hadoop design principles, cluster connectivity, security, and the factors that affect distributed system performance.
  • Experience with Kafka, HBase, and Hortonworks is mandatory.
  • MapR and MySQL experience a plus.
  • Solid understanding of automation tools (Puppet, Chef, Ansible).
  • Expert experience with at least one, if not most, of the following languages: Python, Perl, Ruby, or Bash.
  • Prior experience with remote monitoring and event handling using Nagios and the ELK stack.
  • Solid ability to create automation with Chef, Puppet, Ansible, or shell scripts.
  • Good collaboration and communication skills, and the ability to participate in an interdisciplinary team.
  • Strong written communications and documentation experience.
  • Knowledge of best practices related to security, performance, and disaster recovery.

Qualifications

  • BE/BTech/BS/BCS/MCS/MCA in Computer Science or equivalent
  • Excellent interpersonal, written, and verbal communication skills

Additional Information

All your information will be kept confidential according to EEO guidelines.