Hadoop Big Data Engineer Job Description Template
Our company is looking for a Hadoop Big Data Engineer to join our team.
Responsibilities:
- Work independently toward agreed targets and goals with self-direction and ambition;
- Be flexible with working hours to accommodate multinational client time zones;
- Contribute to team efforts and deliver related results as needed, demonstrating proven interpersonal skills;
- Learn new technologies willingly and take on research-oriented projects;
- Scope and deliver solutions, designing them independently from high-level architecture.
Requirements:
- Exposure to cloud development is a plus;
- Familiarity with distributed data systems and asynchronous messaging, e.g. Kafka;
- Hadoop development, debugging, and implementation of workflows and common algorithms;
- 1 to 2 years of hands-on experience with the Hadoop ecosystem (MapReduce, HDFS, HBase, Hive, Sqoop, YARN); freshers with certifications may also apply;
- Hadoop development and implementation, including loading disparate data sets and pre-processing with Hive and Pig;
- Exposure to software engineering and solution development is a plus;
- Familiarity with data loading tools such as Flume and Sqoop;
- Experience with the Cloudera distribution and expertise in Spark and Scala;
- Strong grasp of SQL, NoSQL, RDBMS, and data warehousing concepts;
- Some experience writing MapReduce jobs;
- Strong back-end programming knowledge, e.g. Core Java, multithreading, OOP, and writing parsers;
- Experience with Linux preferred.
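For candidates assessing whether their experience matches the MapReduce requirement above, the core pattern is simple: a map phase emits key/value pairs, a shuffle groups values by key, and a reduce phase aggregates each group. Below is a minimal, framework-free Python sketch of that pattern using the classic word-count task; it is illustrative only, and a production job would use the Hadoop Java MapReduce or Streaming APIs instead.

```python
from collections import defaultdict

def map_phase(line):
    # Emit (word, 1) pairs, as a word-count mapper would.
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Group values by key, mimicking Hadoop's shuffle/sort step.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Sum the grouped counts per word, as a reducer would.
    return {word: sum(counts) for word, counts in grouped.items()}

def word_count(lines):
    # Run the three phases end to end over a list of input lines.
    pairs = [pair for line in lines for pair in map_phase(line)]
    return reduce_phase(shuffle(pairs))

if __name__ == "__main__":
    print(word_count(["big data big plans", "data pipelines"]))
```

The same three-phase structure carries over directly to a Hadoop job, where the framework handles the shuffle and distributes the map and reduce work across the cluster.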