Data Scientist - Big Data / Machine Learning
- West Jakarta, Kebon Jeruk, West Jakarta City, Jakarta, Indonesia
Cermati is a financial technology (fintech) startup based in Indonesia. Cermati simplifies the process of finding and applying for financial products by bringing everything online, so people can shop around for financial products and apply without having to physically visit a bank.
Our team hails from Silicon Valley tech companies such as Google, Microsoft, LinkedIn, and SoFi, as well as Indonesian startups such as Doku and Touchten. We have graduates from well-known universities such as Universitas Indonesia, ITB, Stanford, University of Washington, Cornell, and many others. We are building a company with the same culture of openness, transparency, drive, and meritocracy as Silicon Valley companies. Join us in our mission to build a world-class fintech company in Indonesia.
- Analyze and mine large amounts of data to obtain useful business insights
- Design and develop core business metrics; create insightful automated dashboards and data visualizations to track them
- Work with different business and technical teams across the company to establish unified definitions, systems, and data governance for key metrics
- Build scalable backend solutions to automate data processing
- Develop predictive and segmentation models to understand our customers' behavior, and translate the results into actions that drive key product, marketing, and sales metrics
- Initiate and drive projects to completion with minimal guidance
- Communicate findings to internal teams and evangelize data-driven business decisions
- Master's or PhD degree in a quantitative or hard science discipline, for example: statistics, operations research, computer science, informatics, mathematics, economics, physics, or bioinformatics
- 3+ years (Master's) or 1+ years (PhD) of industry experience in a technical role, or in relevant industries, for example: e-commerce, social networks, ad networks, fintech, or large-scale logistics operations
- Programming skills in Python, R, Java, or an equivalent language
- Programming skills in SQL (PostgreSQL, MySQL, Presto, Hive, etc.)
- Experience building and optimizing ETL pipelines that move data from one system to another
- Experience building machine learning models using open-source libraries, preferably TensorFlow
- Experience with a data visualization tool, preferably Redash
- Experience tuning queries for columnar databases such as AWS Redshift
- Experience with Hadoop or other MapReduce paradigms and associated languages such as Pig and Hive
- Experience with a code repository/version control system, preferably Git
- Experience with Unix/Linux environments and automating processes with shell scripting
- Experience manipulating massive-scale structured and unstructured data
- Understanding of different data stores, plus query and performance tuning for different databases (preferably PostgreSQL and Redshift)
- Experience in marketing and sales is a plus, as is being an innovator and disruptor
- Self-motivated, creative, and collaborative
- Excellent communication skills, with the ability to synthesize, simplify, and explain complex problems to different audiences, including executives