- Bengaluru, Karnataka, India
At Visa, your individuality fits right in. Working here gives you an opportunity to impact the world, invest in your career growth, and be part of an inclusive and diverse workplace. We are a global team of disruptors, trailblazers, innovators and risk-takers who are helping drive economic growth in even the most remote parts of the world, creatively moving the industry forward, and doing meaningful work that brings financial literacy and digital commerce to millions of unbanked and underserved consumers.
You're an Individual. We're the team for you. Together, let's transform the way the world pays.
Ensuring that Visa’s payment technology is truly available to everyone, everywhere requires the success of our key bank and merchant partners and internal business units. The Global Data Science group supports these partners by using our extraordinarily rich data set, which spans more than 3 billion cards globally and captures more than 100 billion transactions in a single year. Our focus is on building creative solutions that have an immediate impact on the business of our highly analytical partners. We work in complementary teams comprising members from Data Science and various groups at Visa. To support our rapidly growing group, we are looking for data engineers who are equally passionate about the opportunity to use Visa’s rich data to tackle meaningful business problems. This position is part of the Data Engineering and Technology function, building and maintaining global data assets and issuer solutions. We are looking for expertise in data warehousing and in building large-scale data processing systems using the latest database technologies. The Data Engineer is responsible for building and running data pipelines, designing our local data warehouse and data frameworks, and supporting different data presentation techniques. The position is based at Visa's offices in Bangalore, India.
Responsibilities:
Implement applications utilizing Big Data infrastructure and services, under the supervision of and in collaboration with senior data engineers.
Perform quality control of data asset pipelines and data service development.
Execute and manage large-scale ETL processes to support the development and publishing of reports, data marts, and predictive models.
Build ETL pipelines in Spark, Python, Hive, or SAS that process transaction- and account-level data and standardize data fields across various data sources.
Collaborate with cross-functional teams across geographies to understand data flows and processes to enable design and creation of the best possible solutions to each engineering challenge.
Provide quality data solutions in a timely manner and be responsible for data governance and integrity while meeting objectives and maintaining SLAs.
Build self-service tools and utilities.
Take a flexible approach to analyzing technical issues, and clearly communicate recommendations and solutions to stakeholders in an ever-changing environment.
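To illustrate the pipeline work described above, here is a minimal pure-Python sketch of standardizing field names across heterogeneous data sources. In practice this logic would run in Spark or Hive over transaction and account data; the source names and field mappings below are hypothetical examples, not Visa's actual schema.

```python
# Per-source mapping from raw field names to a common schema.
# Source names and fields are hypothetical illustrations.
FIELD_MAP = {
    "issuer_feed": {"txn_amt": "amount", "acct_id": "account_id", "txn_dt": "date"},
    "merchant_feed": {"amount_usd": "amount", "account": "account_id", "ts": "date"},
}

def standardize(record: dict, source: str) -> dict:
    """Rename a raw record's fields to the common schema, dropping unmapped fields."""
    mapping = FIELD_MAP[source]
    return {std: record[raw] for raw, std in mapping.items() if raw in record}

raw = {"txn_amt": 12.5, "acct_id": "A1", "txn_dt": "2024-01-01", "extra": 1}
print(standardize(raw, "issuer_feed"))
# {'amount': 12.5, 'account_id': 'A1', 'date': '2024-01-01'}
```

In a Spark job the same mapping would typically drive a chain of column renames; the dictionary-driven design keeps per-source differences in data rather than in code.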
Qualifications:
4+ years of work experience with a Bachelor’s degree, or 2+ years of work experience with a Master’s or advanced degree, in an analytical field such as computer science, statistics, finance, economics, or a related area.
Business skills and general qualities:
Ability to understand the business requirements of the data science organization.
Ability to translate data and technical concepts into requirements documents, business cases and user stories.
Good understanding of agile working practices and related program management skills.
Good communication and presentation skills, with the ability to interact with cross-functional team members at varying levels.
Ability to learn new tools and paradigms as data science continues to evolve at Visa and elsewhere.
Demonstrated intellectual and analytical rigor; a team-oriented, energetic, collaborative, diplomatic, and flexible style.
Intellectually curious and continuously striving to learn.
Technical skills:
3+ years of professional experience building robust data pipelines and writing ETL/ELT code (Python, Hive, Spark, shell scripts).
3+ years of engineering experience building APIs, UIs, and backend services.
2+ years of experience working with scheduling tools (Airflow, Oozie)
2+ years of scripting experience
3+ years of hands-on experience with large-scale data ingestion, ETL processing, and storage in the Hadoop ecosystem, including Spark.
Experience writing and optimizing SQL queries in a Big Data environment.
Experience working in Linux/Unix environments and exposure to command-line utilities.
Experience creating and supporting production software and systems, with a proven track record of identifying and resolving performance bottlenecks in production systems.
Preferred: experience with visualization tools such as Tableau, Power BI, or D3, and exposure to version control systems (Git).
Exposure to machine learning models based on unstructured, structured, and streaming datasets.
Strong communication skills.
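The scheduling tools listed above (Airflow, Oozie) execute ETL workflows as directed acyclic graphs of tasks. A minimal sketch of the underlying idea, resolving task dependencies into a valid run order, using only the Python standard library; the task names form a hypothetical pipeline, not an actual Airflow DAG definition:

```python
from graphlib import TopologicalSorter

# Hypothetical ETL task graph: each task maps to the set of tasks it depends on.
# Schedulers like Airflow or Oozie resolve such a DAG into a valid execution order.
dag = {
    "extract": set(),
    "clean": {"extract"},
    "load_warehouse": {"clean"},
    "publish_report": {"load_warehouse"},
    "refresh_model": {"load_warehouse"},
}

# static_order yields tasks so that every task appears after its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)  # a valid order, e.g. starting with 'extract', then 'clean', ...
```

In a real scheduler, independent tasks such as publish_report and refresh_model could also run in parallel once load_warehouse completes, which is what Airflow's executor handles beyond this ordering step.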