What you’ll Do:
- Maintain and improve solutions and tools in the current data platform, including Airflow, Presto, and related services.
- Research, prototype (PoC), and develop solutions or tools that improve the scalability and reliability of the data platform.
- Help data engineers improve ETL processes so data is delivered faster.
- Develop monitoring and tracking systems that enable teams to measure performance and troubleshoot easily.
What you’ll Need:
- Bachelor’s degree or equivalent experience in Computer Science or a related field.
- 1+ year (junior) or 3+ years (senior) of experience in a data engineer or data architect role.
- Understanding of the Hadoop ecosystem, including HDFS, YARN, Kafka, Spark, and Hive.
- Knowledge of big data architecture concepts and design.
- Ability to design and implement scalable and reliable data infrastructure.
- Knowledge of monitoring and logging practices.
- Knowledge of Docker and Kubernetes.
- A growth, can-do mindset and a willingness to learn new things and share knowledge with others.