What You'll Do:
- Design, build, optimize, launch, and support new and existing data products and data pipelines
- Implement real-time data processing applications
- Participate in performance tuning to continually improve the speed and stability of the data platform
- Design and implement dimensional data models and cubes
- Work closely with cross-functional teams, including data scientists, analysts, product managers, and software engineers, to drive value for our customers and business partners
- Develop and implement tools for data governance, data quality, and data lineage
What You'll Need:
- Bachelor's degree in Computer Science or a related field, or equivalent professional experience
- 3+ years of professional experience with real-world data applications at scale
- Proficiency in open-source distributed computing technologies (e.g., Hive, Kafka, Spark, Presto, Airflow)
- Strong programming skills (Python preferred, Java a plus) and SQL
- A growth mindset and can-do attitude; willingness to learn and share knowledge
- Understanding of databases (e.g., MySQL, PostgreSQL, MongoDB)
- Fluent spoken and written English (Thai a plus)
- Bonus: Knowledge of Apache Druid, Kafka, Flink, Docker, and Kubernetes