Sr. Spark/Hadoop Developer

  • Beaverton, OR, US
  • Contract

Company Description

The Aroghia Group is a nationwide information technology firm that provides cutting-edge IT services, solutions, and staff placements for clients ranging from startups to Fortune 500 companies. We are committed to helping our clients achieve their goals through innovation, collaboration, and deep expertise.

Job Description

Responsibilities:

  • Design and implement end-to-end distributed data processing pipelines using Spark, Hive, Sqoop, Python, and other tools and languages prevalent in the Hadoop ecosystem (a minimal sketch follows this list).
  • Build utilities, user-defined functions, and frameworks that support common data flow patterns.
  • Research, evaluate, and adopt new technologies, tools, and frameworks in the Hadoop and broader Big Data ecosystem.
  • Define and build data acquisition and consumption strategies.
  • Build and incorporate automated unit tests, and participate in integration testing efforts.
  • Work with teams to resolve operational and performance issues.
  • Work with architecture/engineering leads and other teams to ensure quality solutions are implemented and engineering best practices are defined and adhered to.
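
For context on the day-to-day work, here is a minimal PySpark sketch of the kind of pipeline described above, assuming a cluster with Hive configured. The table, column, and path names (raw_events, event_ts, user_id, /warehouse/curated/daily_event_counts) are hypothetical.

```python
# Minimal sketch of a batch pipeline: ingest a raw Hive table, clean it,
# aggregate it, and publish partitioned Parquet for downstream consumers.
# Table, column, and path names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-event-counts")
    .enableHiveSupport()  # assumes Hive is configured on the cluster
    .getOrCreate()
)

# Read the (hypothetical) raw Hive table and drop rows without a timestamp.
raw = spark.table("raw_events")
daily_counts = (
    raw.filter(F.col("event_ts").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "user_id")
    .agg(F.count("*").alias("event_count"))
)

# Persist as date-partitioned Parquet for downstream consumers.
(
    daily_counts.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("/warehouse/curated/daily_event_counts")
)
```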

Qualifications

Primary skills: Spark, Python, Data Engineering, Data Science

  • MS/BS degree in computer science or a related discipline
  • 6+ years’ experience in large-scale software development
  • 1+ years’ experience with Hadoop
  • Strong programming skills in Java, Python, shell scripting, and SQL
  • Strong development skills around Hadoop, Spark, MapReduce, and Hive
  • Strong understanding of Hadoop internals
  • Good understanding of common file formats such as JSON, Parquet, and Avro (see the conversion sketch after this list)
  • Experience with relational databases such as Oracle
  • Experience with performance/scalability tuning, algorithms, and computational complexity
  • Experience with, or at least familiarity with, data warehousing, dimensional modeling, and ETL development
  • Ability to understand ERDs and relational database schemas
  • Proven ability to work with cross-functional teams to drive issues to resolution
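
As a reference point for the file-format qualification above, here is a hedged PySpark sketch of converting between the formats listed. The paths are hypothetical, and Avro support assumes the external org.apache.spark:spark-avro package is on the classpath.

```python
# Hedged sketch: reading and writing the file formats named above.
# Paths are hypothetical; Avro support assumes the external
# org.apache.spark:spark-avro package is on the classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-conversion").getOrCreate()

# JSON: row-oriented, schema inferred on read; common for raw landing data.
events = spark.read.json("/landing/events/*.json")

# Parquet: columnar with an embedded schema; the usual analytics format.
events.write.mode("overwrite").parquet("/curated/events_parquet")

# Avro: row-oriented binary with a compact schema; common for interchange.
events.write.mode("overwrite").format("avro").save("/curated/events_avro")
```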

Nice to have:

  • Experience with AWS components and services, particularly EMR, S3, and Lambda
  • Experience with NoSQL technologies such as HBase, DynamoDB, and Cassandra
  • Experience with messaging and complex event-processing systems such as Kafka and Storm (a minimal consumer sketch follows this list)
  • Experience provisioning RESTful APIs to enable real-time data consumption
  • Automated testing and Continuous Integration/Continuous Delivery (CI/CD)
  • Scala
  • Machine learning frameworks
  • Statistical analysis with Python, R, or similar
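
To illustrate the Kafka item above, here is a minimal Spark Structured Streaming sketch that consumes a topic. The broker address and topic name are hypothetical, and it assumes the spark-sql-kafka-0-10 package is on the classpath.

```python
# Hedged sketch: consuming a Kafka topic with Spark Structured Streaming.
# Broker address and topic name are hypothetical; assumes the
# spark-sql-kafka-0-10 package is on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-consumer").getOrCreate()

stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Kafka delivers key and value as binary; cast them before processing.
messages = stream.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("value"),
)

# Write to the console for demonstration; a real job would target a
# durable sink (Parquet, Kafka, a database) with checkpointing enabled.
query = messages.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```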

Additional Information

All your information will be kept confidential according to EEO guidelines.

Only candidates who provide both first and last names will be considered.

Resumes should include a link to the candidate's LinkedIn profile.

Green Card holders, US citizens, or H-1B visa holders only.

Only W-2 candidates will be considered; no corp-to-corp (C2C)!