Average salary: Rs 1,290,000/year
- ...Key Responsibilities: Develop, test, and deploy Hadoop-based data processing workflows using tools like MapReduce, Hive, Pig, and Spark. Design and implement ETL/ELT pipelines to ingest and process large volumes of structured and unstructured data. Write efficient...
- ...Requirements: No remote work: all 5 days from the office. Working experience with Hadoop, Hive SQL, Spark, and big-data ecosystem tools. Should be able to tune queries and work on performance enhancement. The candidate will be responsible for delivering code and setting up the environment...
- ...Key Responsibilities: Design, develop, and optimize large-scale data processing workflows using Hadoop components such as HDFS, MapReduce, Hive, Pig, and HBase. Build and maintain ETL pipelines to ingest and transform data from various sources into Hadoop clusters...
- ...warehouses using modern cloud technologies (Azure, AWS, or GCP). Develop and manage ETL/ELT workflows using tools such as Databricks, ... ...Scala. Proven experience with SQL and big-data frameworks (Spark, Hadoop, Kafka). Hands-on experience with cloud-based data platforms - Azure...
