Software Engineer, Big Data

  • Redwood City, CA
  • Full-time

Company Description

PubMatic is the automation solutions company for an open digital media industry. 

Featuring the leading omni-channel revenue automation platform for publishers and enterprise-grade programmatic tools for media buyers, our publisher-first approach enables advertisers to access premium inventory at scale. 

Processing nearly one trillion ad impressions per month, PubMatic has created a global infrastructure to activate meaningful connections between consumers, content and brands. 

Since 2006, our focus on data and technology innovation has fueled the growth of the programmatic industry as a whole. Headquartered in Redwood City, California, PubMatic operates 11 offices and six data centers worldwide. 

See how we work at https://vimeo.com/103893936

Job Description

PubMatic's Big Data Engineering group is responsible for building the scalable, fault-tolerant, and highly available big data platform that handles petabytes of data and powers PubMatic Analytics. We work with large data volumes flowing through the PubMatic platform from across the globe. The platform ingests and processes data to provide real-time, slice-and-dice analytics for our internal and external customers. We are looking for a Senior Software Engineer responsible for delivering industry-leading solutions, optimizing the platform, challenging norms, and bringing solutions to industry-critical problems.

Responsibilities:

  • Work in a cross-functional environment to architect, design and develop new functions in our product line.
  • Conduct feasibility analysis and produce functional and design specifications for proposed new features.
  • Troubleshoot complex issues discovered in-house as well as in customer environments.

Qualifications

  • 3-5 years of software development experience
  • 1-2 years of Big Data experience
  • Solid CS fundamentals, including data structure and algorithm design and the creation of architectural specifications.
  • R&D contributions and production deployments of large backend systems.
  • Designing and implementing data processing pipelines with a combination of the following technologies: Hadoop, MapReduce, YARN, Spark, Hive, Kafka, Avro, Parquet, and SQL and NoSQL data warehouses.
  • Implementation of professional software engineering best practices for the full software development life cycle, including coding standards, code reviews, source control management, documentation, build processes, automated testing, and operations.
  • Passion for developing and maintaining high-quality code and test bases, and for enabling contributions from engineers across the team.
  • Ability to achieve stretch goals in a highly innovative and fast-paced environment.
  • Ability to learn new technologies quickly and independently.
  • Excellent verbal and written communication skills, especially in technical communications.
  • Strong interpersonal skills and a desire to work collaboratively.

Additional Information

All your information will be kept confidential according to EEO guidelines.