Average salary: Rs 1,154,998 per year
- ...Must be strong in Hadoop and Spark architecture. Hands-on knowledge of how HDFS/Hive/Impala/Spark works. Strong logical reasoning capabilities. Should have strong hands-on experience with Hive/Impala/Spark query performance tuning concepts. Good UNIX shell, Python/...
- ...Description: We are looking for a highly skilled Hadoop Developer with 8-13 years of experience in the Indian job market. The ideal candidate should have a deep understanding of Hadoop, its ecosystem, and related technologies. The role involves designing, developing, and maintaining...
- ...approach to software development, where possible, in which the test is developed before the code. Participate in design and code reviews with... ...using Spark/Scala or the HDFS file system. Very good knowledge of Hadoop architecture end to end. Very good hands-on experience in writing Spark...
- ...Description: We are seeking a highly skilled Hadoop Developer with 2-5 years of experience to join our team. The ideal candidate will have a strong understanding of the Hadoop ecosystem and its components, and possess experience with data ingestion, processing, and analysis....
- ...Description: We are seeking a highly skilled Hadoop Developer with 8-13 years of experience to join our dynamic team. The ideal candidate should have a strong background in software development and experience in Big Data technologies, specifically Hadoop. The successful candidate...
- ...Job Designation: Java with Hadoop Developer. Location: Gurgaon, India. Required Experience (in years): 3 to 7. This position reports to the VP – Engineering at Airlinq and will work with the Engineering and Development teams to build and maintain a testing and... (Full time)
- ...Title: Hadoop with Scala Developer. Experience: 8 to 12 years. Notice Period: 15 days or less. Location: Chennai, Bangalore. Job Description: Hadoop, Scala. Please DO NOT apply if your profile does not meet the job description or required qualifications....
- ...applications with data pipeline open-source products, and experience with the Hadoop data platform; strong critical thinking, communication, and... ...with line-of-business users and technology teams to design, develop, and test full-stack cloud data solutions. Lead and ensure the... (Work at office)
- ...hiring a Senior Technology Engineer with strong expertise in Hadoop-based Big Data ecosystems, automation, and DevOps tools. The role... ...Work closely with cross-functional teams including architects, developers, and business analysts to translate requirements into reliable...
- ...decisions. Mandatory Skill Set: - Extensive hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN). - Strong expertise in Apache... ...HiveQL queries and managing the Hive Metastore. - Experience in developing and maintaining Apache Kafka-based streaming solutions. - In-...
- ...understanding of big data tools and... Responsibilities: Design, develop, and maintain scalable and high-performance data processing... ...similar role. Strong expertise in big data technologies such as Hadoop, Spark, Hive, HBase, Kafka, Flume. Proficiency in SQL and at least...
- ...Engineer and be part of the Risk Solution Services team to design and develop solutions for all data requirements. The ideal candidate will... ...developing, maintaining and supporting big data ETL pipelines (Hadoop, Hive, Spark). Responsible for the implementation and testing of scalable... (Hybrid work, Work at office, Local area)
- ...compliance with data governance and security policies. Interpreting data trends and patterns to establish operational alerts. Developing analytical tools, programs, and reporting mechanisms. Conducting complex data analysis and presenting results effectively. Preparing...
- ...As a Data Engineer, you will leverage Databricks and Hadoop ecosystems to build robust and efficient data pipelines, enable real-time analytics... ...high performance, reliability, and cost-effectiveness. Develop and maintain data models and ETL (Extract, Transform, Load) processes...
- ...Position Summary: We are seeking an Apache Hadoop Subject Matter Expert (SME) who will be responsible for designing, optimizing, and scaling Impala- and Spark-based data processing systems. This role requires hands-on experience with Impala and Spark architecture and core functionalities... (Shift work)
- ...Education: Bachelor's in Computer Science, Engineering, or related field. Experience: 4–7 years in data integration, Hadoop/Spark deployments, or platform configuration. Must-Have Technical Skills: Hadoop, Spark, Hive, Trino, Kafka, Airflow, Python...
- ...Position Overview. Job Title: Data Engineer (ETL, Big Data, Hadoop, Spark, GCP), Assistant Vice President. Location: Pune, India. Role Description: The senior engineer is responsible for developing and delivering elements of engineering solutions to accomplish business... (Flexible hours)
- ...Key deliverables: # Manage and administer Hadoop ecosystem components and performance tuning # Automate operational workflows using... ...across AWS/Azure/GCP and optimize load balancing with Nginx # Develop tools and monitoring solutions using Prometheus, Grafana, ELK #... (Hybrid work)
- ...a member of the Post Purchase Solutions team, your role will involve developing and implementing practices that will allow deployment of machine... ...Solid experience in Big Data engineering, with experience in Hadoop, Apache Spark, Python, and SQL. Proficiency in creating and... (Hybrid work, Work at office, Local area, Immediate start)
- ...Role: We are looking for a skilled Big Data Engineer to design, develop, and maintain large-scale data processing systems. The ideal candidate... ...Design and implement big data pipelines and ETL processes using Hadoop, Spark, Hive, and Kafka. - Develop and maintain data ingestion,... (Full time)
- Responsibilities: - Design, develop, and implement robust Big Data solutions using technologies such as Hadoop, Spark, and NoSQL databases. - Build and maintain scalable data pipelines for effective data ingestion, transformation, and analysis. - Collaborate with data scientists...
- ...of data across the organization. Key Responsibilities: - Design, develop, and maintain scalable ETL/ELT pipelines for structured and unstructured... ... - Build and optimize big data solutions using frameworks such as Hadoop and Spark. - Implement and manage cloud-based data platforms (AWS,... (Full time)
- ...client's Big Data team. The ideal candidate will have deep expertise in Hadoop, Spark, and modern data ecosystems, and will be responsible for... ...within a fast-paced environment. Key Responsibilities: - Design, develop, and maintain robust big data pipelines and ETL workflows. - Work... (Contract work, Remote job)
- Responsibilities: - Proven experience in SQL, Spark, and the Hadoop ecosystem. - Have worked on multiple TBs of data volume from ingestion to consumption. - Work with business stakeholders to identify and document high-impact business problems and potential solutions. - Good understanding...
- ...Bangalore. Experience: 6+ years. Role Overview: Looking for a Senior Big Data Developer to design, build, and optimize data pipelines and analytics... ...Develop, test, and deploy high-performance data pipelines using Hadoop, Spark, Kudu, and HBase. - Implement ETL/ELT workflows and ensure...
- ...looking for a skilled and detail-oriented Big Data Engineer to design, develop, and maintain scalable data pipelines and architectures. The role... ...& Competencies: - Strong knowledge of big data frameworks (Hadoop, Spark, Flink, Kafka). - Hands-on experience with cloud platforms...
- ...behavior and anchor our decisions. Key Responsibilities: - Design and Develop Data Pipelines: Architect, build, and deploy scalable and... ...data processing solutions using technologies like Apache Spark and Hadoop. - Programming and Scripting: Exhibit expert-level programming skills... (Work at office)
- ...highly skilled Big Data Engineer with 5-9 years of experience to develop, maintain, and optimize robust, scalable big data solutions. This... ...requires expertise in designing and implementing solutions using Hadoop-based technologies and AWS. The engineer will be crucial in building... (Full time)
- ...Job Title: Senior Data Engineer/Developer. Number of Positions: 2. Job Description: The Senior Data Engineer will be responsible for... ...Data Engineer or similar role. Experience with big data tools: Hadoop, Spark, Kafka, Ansible, Chef, Terraform, Airflow, and Protobuf...
- ...to selecting, integrating, and maintaining Big Data frameworks. - Develop distributed data processing systems using Spark, Akka, or similar... ...pipelines. - Experience with frameworks like Spark, Akka, Storm, Hadoop. - Exposure to cloud platforms (AWS/Azure) for data engineering workloads... (Full time, Work at office, Shift work)