Hadoop Admin, Data Platform

  • Full-time
  • Job Family Group: Technology and Operations

Company Description

Common Purpose, Uncommon Opportunity. Everyone at Visa works with one goal in mind – making sure that Visa is the best way to pay and be paid, for everyone everywhere. This is our global vision and the common purpose that unites the entire Visa team. As a global payments technology company, tech is at the heart of what we do: Our VisaNet network processes over 13,000 transactions per second for people and businesses around the world, enabling them to use digital currency instead of cash and checks. We are also global advocates for financial inclusion, working with partners around the world to help those who lack access to financial services join the global economy. Visa’s sponsorships, including the Olympics and FIFA™ World Cup, celebrate teamwork, diversity, and excellence throughout the world. If you have a passion to make a difference in the lives of people around the world, Visa offers an uncommon opportunity to build a strong, thriving career. Visa is fueled by our team of talented employees who continuously raise the bar on delivering the convenience and security of digital currency to people all over the world. Join our team and find out how Visa is everywhere you want to be.

Job Description

  • Set up and maintain Hadoop clusters (Hortonworks and open source)
  • Estimate, design, and size cluster capacity needs
  • Implement and verify security with Apache Knox, Apache Ranger & Kerberos
  • Administer the NoSQL Data stores
  • Design & optimize analytical jobs and queries against data in the HDFS/Hive environments
  • Develop Bash or Python scripts using Linux utilities and commands
  • Develop data models that help with platform analytics & hardening
  • Develop self-healing systems at scale
  • Help and guide L1/L2 support engineers in resolving day-to-day operational issues
  • Perform tuning and improve operational efficiency on a continuous basis
  • Build a framework to centrally gather deep system-level metrics across platforms as part of the control center
  • Develop central dashboards for all system, data, utilization, and availability metrics
  • Build high-volume data integrations

Qualifications

  • Excellent troubleshooting skills
  • Knowledge of Hadoop (preferably Hortonworks distribution), HDFS, Hive, Kafka, Spark, Oozie, HBase
  • Strong Linux knowledge and scripting skills, including Python
  • Good knowledge of Kerberos, TLS, Apache Knox, Apache Ranger, data encryption
  • Minimum of 3 years of experience developing, maintaining, optimizing, and resolving issues in Hadoop clusters, and supporting business users
  • Four-year technical degree in computer science or a related IT field required
  • Experience in Linux/Unix OS services, administration, and shell/awk scripting
  • Experience building scalable Hadoop applications
  • Experience in Core Java, Hadoop (Spark, Kafka, HBase, HDFS, Yarn, Zookeeper)
  • Experience in Hadoop security (Kerberos, Knox, TLS)
  • Hands-on experience with SQL and NoSQL databases (HBase, Cassandra, MongoDB)
  • Experience in building large scale real-world backend and middle-tier systems in Java
  • Basic knowledge of application servers such as Tomcat (or an equivalent app server), RESTful services, and JSON

Additional Information

REF16420H

Privacy Policy