Hadoop Platform Engineer - Staff SW Engineer
- Palo Alto, CA
Common Purpose, Uncommon Opportunity. Everyone at Visa works with one goal in mind – making sure that Visa is the best way to pay and be paid, for everyone everywhere. This is our global vision and the common purpose that unites the entire Visa team. As a global payments technology company, tech is at the heart of what we do: Our VisaNet network processes over 13,000 transactions per second for people and businesses around the world, enabling them to use digital currency instead of cash and checks.

We are also global advocates for financial inclusion, working with partners around the world to help those who lack access to financial services join the global economy. Visa’s sponsorships, including the Olympics and FIFA™ World Cup, celebrate teamwork, diversity, and excellence throughout the world.

If you have a passion to make a difference in the lives of people around the world, Visa offers an uncommon opportunity to build a strong, thriving career. Visa is fueled by our team of talented employees who continuously raise the bar on delivering the convenience and security of digital currency to people all over the world. Join our team and find out how Visa is everywhere you want to be.
This is a hands-on role on a large multi-tenant Analytics Hadoop Cluster. You will collaborate with partners from business and technology organizations to develop key deliverables for the Data Platform Strategy: scalability, optimization, operations, availability, and roadmap.
- Deliver projects within budget and deadlines
- Ensure that the Hadoop data platform is capable of supporting a growing list of downstream platforms such as SAS, TensorFlow, Salford, MicroStrategy, Tableau, and many more
- Perform hands-on performance tuning and drive operational efficiency
- Formulate methods to enable consistent data loading and optimize data operations
- Evaluate and introduce technology tools and processes that streamline operation of the data platform and use case execution for a diverse user community across multiple LOBs
- Provide sizing for new projects and subject-matter expert (SME) support
- Ensure the Hadoop platform can effectively meet performance & SLA requirements
- Minimum of a four-year technical degree required; a Master's degree is highly regarded
- 5+ years of experience with Hive, YARN, Spark, Impala, Kafka, Solr, Oozie, Sentry, HBase, encryption, etc.
- Experience in optimization, capacity planning & architecture of a large multi-tenant cluster.
- Experience with open source and Petabyte range data implementations.
- Expertise in at least one commercial Hadoop distribution, preferably Cloudera
- Experience with multiple open source tool sets in the Big Data space
- Appreciation of diverse LOBs and their data needs, combined with deep technical expertise, is required for the role
- Experience with tool integration, automation, and configuration management in Git
- Excellent verbal and written communication, presentation, analytical, and problem-solving skills