We are currently searching for an experienced Data Engineer. At the forefront of the organisation's digital transformation journey, you will be part of the team implementing the Hadoop ecosystem and infrastructure.
Bachelor's degree in IT, Computer Science, or Engineering;
At least 3 years of hands-on experience with Big Data technologies such as a Hadoop distribution (Hortonworks), Hive, HBase, Spark, Pig, Sqoop, Kafka, and Spark Streaming;
At least 3 years of Big Data programming experience in Java;
Strong knowledge of various database technologies (Synapse/SQL);
At least 3 years of experience setting up Hadoop clusters, including managing at least one cluster;
Ability to communicate and present technical information in a clear and unambiguous manner;
Strong ability to work independently and cooperate with diverse teams in a multi-stakeholder environment;
Strong sense of ownership, a strong affinity for all things data, and a desire for continuous improvement.
AWS EMR experience.
Design the data warehouse, build data ingestion processes, and help build efficient data pipelines;
Implement and manage the Hadoop ecosystem and infrastructure on a cloud platform;
Set up and performance-tune Hadoop clusters;
Collaborate with DevOps team to enhance Big Data environment and build impactful analytics solutions;
Collaborate with the Systems Engineering team to deploy software environments required for Hadoop.