Senior Principal Big Data Engineer
- Full-time
- Department: Development: Data Analytics
Company Description
PubMatic is a publisher-focused sell-side platform for an open digital media future.
Featuring leading omni-channel revenue automation technology for publishers and enterprise-grade programmatic tools for media buyers, PubMatic's publisher-first approach enables advertisers to access premium inventory at scale.
Processing over one trillion ad impressions per month, PubMatic has created a global infrastructure to drive publisher monetization and control over their ad inventory.
Since 2006, PubMatic's focus on data and technology innovation has fueled the rise of the programmatic industry as a whole. Headquartered in Redwood City, California, PubMatic operates 13 offices and six data centers worldwide.
Job Description
We are immediately hiring a Senior Principal Big Data Engineer to join our analytics team in Redwood City.
PubMatic's Big Data Engineering group is responsible for building the scalable, fault-tolerant, and highly available big data platform that handles petabytes of data within PubMatic Analytics.
We work with large data volumes flowing through the PubMatic platform from across the globe. The platform ingests and processes data to provide real-time 'slice and dice' analytics for our customers.
This key hire will build thoughtful solutions, optimize our platform, and challenge assumptions for how we approach industry-critical problems.
Responsibilities:
Work in a cross-functional environment to architect, design and develop new functions in our product line
Conduct feasibility analysis and produce functional and design specifications for proposed new features
Troubleshoot complex issues discovered in-house as well as in customer environments
Improve codebase, bring in latest technologies, re-architect modules to increase the throughput and performance
Mentor junior engineers in software development, technology and processes
Qualifications
8+ years of experience supporting big data use cases
2+ years of team management or mentoring
R&D contributions and production deployments of large backend systems
Deep experience defining big data solution architectures and component designs, exploring technical feasibility trade-offs, creating POCs using new technologies, and productizing the best solutions in line with business requirements
Designing and implementing data processing pipelines with a combination of the following technologies: Hadoop, MapReduce, YARN, Spark, Hive, Kafka, Avro, Parquet, SQL and NoSQL data warehouses
Implementation of professional software engineering best practices for the full software development life cycle, including coding standards, code reviews, source control management, documentation, build processes, automated testing, and operations
Solid CS fundamentals including data structure and algorithm design, and creation of architectural specifications
Proven track record of working with internal customers to understand their use cases and developing technology to enable analytic insight at scale
Passion for developing and maintaining a high-quality code and test base that enables contributions from engineers across the team
Ability to handle multiple competing priorities with good time management and a dedication to doing what it takes to get the work done right
Ability to achieve stretch goals in a highly innovative and fast-paced environment
Ability to learn new technologies quickly and independently
Excellent verbal and written communication skills, especially in technical communications
Strong interpersonal skills and a desire to work collaboratively
#LI-NP1
Additional Information
PubMatic is proud to be an equal opportunity employer; we don’t just value diversity, we promote and celebrate it.
We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
All your information will be kept confidential according to EEO guidelines.