Average salary: Rs 1,290,000 per year

  •  ...Key Responsibilities: Develop, test, and deploy Hadoop-based data processing workflows using tools like MapReduce, Hive, Pig, and Spark. Design and implement ETL/ELT pipelines to ingest and process large volumes of structured and unstructured data. Write efficient...
    Chennai
    a month ago
  •  ...Requirement: No remote; all 5 days work from office. Working experience with Hadoop, Hive SQL, Spark, and Big Data ecosystem tools. Should be able to tune queries and work on performance enhancement. The candidate will be responsible for delivering code, setting up the environment...
    Work at office
    Chennai
    4 days ago
  •  ...Key Responsibilities: Design, develop, and optimize large-scale data processing workflows using Hadoop components such as HDFS, MapReduce, Hive, Pig, and HBase. Build and maintain ETL pipelines to ingest and transform data from various sources into Hadoop clusters....
    Chennai
    a month ago
  •  ...warehouses using modern cloud technologies (Azure, AWS, or GCP). Develop and manage ETL/ELT workflows using tools such as Databricks, ... ...Scala. Proven experience with SQL and big-data frameworks (Spark, Hadoop, Kafka). Hands-on experience with cloud-based data platforms: Azure...
    TalenTree
    Chennai
    17 days ago