Average salary: Rs 1,154,998 per year

  •  ...Must be strong in Hadoop and Spark architecture; hands-on knowledge of how HDFS/Hive/Impala/Spark works; strong logical reasoning capabilities; strong hands-on experience with Hive/Impala/Spark query performance tuning concepts; good UNIX Shell, Python/...
    Bangalore
    a month ago
  •  ...Description We are looking for a highly skilled Hadoop Developer with 8-13 years of experience in the Indian job market. The ideal candidate should have a deep understanding of Hadoop, its ecosystem, and related technologies. The role involves designing, developing, and maintaining... 
    Bangalore
    a month ago
  •  ...approach to software development, where possible, in which the test is developed before the code; participate in design and code reviews with...  ...using Spark/Scala or the HDFS file system; very good knowledge of Hadoop architecture end to end; very good hands-on experience in writing Spark...
    Bangalore
    29 days ago
  •  ...Description We are seeking a highly skilled Hadoop Developer with 2-5 years of experience to join our team. The ideal candidate will have a strong understanding of the Hadoop ecosystem and its components, and possess experience with data ingestion, processing, and analysis....
    Secunderabad
    a month ago
  •  ...Description We are seeking a highly skilled Hadoop Developer with 8-13 years of experience to join our dynamic team. The ideal candidate should have a strong background in software development and experience in Big Data technologies, specifically Hadoop. The successful candidate... 
    Bangalore
    a month ago
  •  ...Job Designation: Java with Hadoop Developer Location: Gurgaon, India Required Experience (in Years): 3 to 7 Yrs This position reports into the VP – Engineering at Airlinq. The candidate will work with the Engineering and Development teams to build and maintain a testing and...
    Full time
    Gurgaon
    2 days ago
  •  ...Title: Hadoop with Scala Developer Experience: 8 to 12 Yrs Notice Period: 15 Days or Less Location: Chennai, Bangalore Job Description Hadoop Scala Please DO NOT apply if your profile does not meet the job description or required qualifications.... 
    Bangalore
    2 days ago
  •  ...applications with data pipeline open source products, and experience in Hadoop data platform; strong critical thinking, communication, and...  ...with line of business users and technology teams to design, develop, and test full stack cloud data solutions. Lead and ensure the... 
    Work at office
    Secunderabad
    3 days ago
  •  ...hiring a Senior Technology Engineer with strong expertise in Hadoop-based Big Data ecosystems, automation, and DevOps tools. The role...  ...Work closely with cross-functional teams including architects, developers, and business analysts to translate requirements into reliable...
    Bangalore
    3 days ago
  •  ...decisions. Mandatory Skill Set: - Extensive hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN). - Strong expertise in Apache...  ...HiveQL queries and managing Hive Metastore. - Experience in developing and maintaining Apache Kafka-based streaming solutions. - In-...
    Bangalore
    2 days ago
  •  ...understanding of big data tools and Responsibilities: Design, develop, and maintain scalable and high-performance data processing...  ...similar role. Strong expertise in big data technologies such as Hadoop, Spark, Hive, HBase, Kafka, Flume. Proficiency in SQL and at least...
    Pune
    2 days ago
  •  ...Engineer and be part of the Risk Solution Services team to design and develop solutions for all data requirements. The ideal candidate will...  ...developing, maintaining and supporting big data ETL pipelines (Hadoop, Hive, Spark). Responsible for the implementation and test of scalable...
    Hybrid work
    Work at office
    Local area
    Bangalore
    3 days ago
  •  ...compliance with data governance and security policies. Interpreting data trends and patterns to establish operational alerts. Developing analytical tools, programs, and reporting mechanisms. Conducting complex data analysis and presenting results effectively. Preparing... 
    Coimbatore
    a month ago
  •  ...As a Data Engineer, you will leverage Databricks and Hadoop ecosystems to build robust and efficient data pipelines, enable real-time analytics...  ...high-performance, reliability, and cost-effectiveness. Develop and maintain data models and ETL (Extract, Transform, Load) processes... 
    Bangalore
    a month ago
  •  ...Position Summary We are seeking an Apache Hadoop - Subject Matter Expert (SME) who will be responsible for designing, optimizing, and scaling Impala, Spark-based data processing systems. This role involves hands-on experience in Impala and Spark architecture and core functionalities... 
    Shift work
    Bangalore
    2 days ago
  •  ...Education: ~ Bachelor's in Computer Science, Engineering, or related field Experience: ~4–7 years in data integration, Hadoop/Spark deployments, or platform configuration Must-Have Technical Skills: Hadoop, Spark, Hive, Trino, Kafka, Airflow Python...
    Bangalore
    2 days ago
  •  ...Position Overview Job Title: Data Engineer (ETL, Big Data, Hadoop, Spark, GCP), Assistant Vice President Location: Pune, India Role Description The Senior Engineer is responsible for developing and delivering elements of engineering solutions to accomplish business...
    Flexible hours
    Pune
    3 days ago
  •  ...Key deliverables: # Manage and administer Hadoop ecosystem components and performance tuning # Automate operational workflows using...  ...across AWS/Azure/GCP and optimize load balancing with Nginx # Develop tools and monitoring solutions using Prometheus, Grafana, ELK #... 
    Hybrid work
    Bangalore
    a month ago
  •  ...a member of the Post Purchase Solutions, your role will involve developing and implementing practices that will allow deployment of machine...  ...~ Solid experience in Big Data engineering, with experience in Hadoop, Apache Spark, Python, and SQL. ~ Proficiency in creating and...
    Hybrid work
    Work at office
    Local area
    Immediate start
    Bangalore
    3 days ago
  •  ...Role: We are looking for a skilled Big Data Engineer to design, develop, and maintain large-scale data processing systems. The ideal candidate...  ...Design and implement big data pipelines and ETL processes using Hadoop, Spark, Hive, and Kafka. - Develop and maintain data ingestion,...
    Full time

    True Talents

    Bangalore
    23 days ago
  • Responsibilities: - Design, develop, and implement robust Big Data solutions using technologies such as Hadoop, Spark, and NoSQL databases. - Build and maintain scalable data pipelines for effective data ingestion, transformation, and analysis. - Collaborate with data scientists...

    Unison consulting pte ltd

    Chennai
    15 days ago
  •  ...of data across the organization. Key Responsibilities: - Design, develop, and maintain scalable ETL/ELT pipelines for structured and unstructured...  ...- Build and optimize big data solutions using frameworks such as Hadoop and Spark. - Implement and manage cloud-based data platforms (AWS,...
    Full time

    Techmora

    Bangalore
    23 days ago
  •  ...client's Big Data team. The ideal candidate will have deep expertise in Hadoop, Spark, and modern data ecosystems, and will be responsible for...  ...within a fast-paced environment. Key Responsibilities: - Design, develop, and maintain robust big data pipelines and ETL workflows. - Work...
    Contract work
    Remote job

    Lancesoft India Pvt Ltd

    India
    8 days ago
  • Responsibilities: - Proven experience in SQL, Spark, and the Hadoop ecosystem. - Have worked on multiple TBs of data volume from ingestion to consumption. - Work with business stakeholders to identify and document high-impact business problems and potential solutions. - Good understanding...

    Novo Tree Minds Consulting

    Mumbai
    16 days ago
  •  ...Bangalore Experience: 6+ years Role Overview: Looking for a Senior Big Data Developer to design, build, and optimize data pipelines and analytics...  ...Develop, test, and deploy high-performance data pipelines using Hadoop, Spark, Kudu, and HBase. - Implement ETL/ELT workflows and ensure...

    SK HR Consultants

    Bangalore
    22 days ago
  •  ...looking for a skilled and detail-oriented Big Data Engineer to design, develop, and maintain scalable data pipelines and architectures. The role...  ...& Competencies: - Strong knowledge of big data frameworks (Hadoop, Spark, Flink, Kafka). - Hands-on experience with cloud platforms...

    Impacteers

    Bangalore
    28 days ago
  •  ...behavior and anchor our decisions. Key Responsibilities: - Design and Develop Data Pipelines: Architect, build, and deploy scalable and...  ...data processing solutions using technologies like Apache Spark and Hadoop. - Programming and Scripting: Exhibit expert-level programming skills...
    Work at office

    DUN BRADSTREET TECHNOLOGY AND CORPORATE SERVICES

    Hyderabad
    27 days ago
  •  ...highly skilled Big Data Engineer with 5-9 years of experience to develop, maintain, and optimize robust, scalable big data solutions. This...  ...requires expertise in designing and implementing solutions using Hadoop-based technologies and AWS. The engineer will be crucial in building...
    Full time

    FxConsulting

    Gurgaon
    15 days ago
  •  ...Job Title: Senior Data Engineer/Developer Number of Positions: 2 Job Description: The Senior Data Engineer will be responsible for...  ...Data Engineer or similar role. Experience with big data tools: Hadoop, Spark, Kafka, Ansible, chef, Terraform, Airflow, and Protobuf... 

    CAPCO

    Pune
    more than 2 months ago
  •  ...to selecting, integrating, and maintaining Big Data frameworks. - Develop distributed data processing systems using Spark, Akka, or similar...  ...pipelines. - Experience with frameworks like Spark, Akka, Storm, Hadoop. - Exposure to cloud platforms (AWS/Azure) for data engineering workloads...
    Full time
    Work at office
    Shift work

    Funic Tech

    Pune
    7 days ago