Average salary: Rs 198,000 / year
Search Results: 6,669 vacancies
- Experience: 4 to 7 years. Location: Pune (work from office). Job Description: 4+ years of hands-on experience with Hadoop and system administration, with sound knowledge of Unix-based operating system internals. Working experience with Cloudera CDP and CDH and Hortonworks HDP distributions... (Work at office; rotating shift)
- Job Position Title: Hadoop Data Tester. Company: CLPS Global or RiDiK Pvt Ltd. Job Summary: We're looking for a Hadoop Data Tester to support our team in Hyderabad. This role offers the opportunity to work on meaningful projects, collaborate with talented colleagues, and contribute...
- We are hiring immediately for a Hadoop Admin. Location: Hyderabad (Anlage Infotech payroll). Primary skills: HDP, CDP, Linux, Python, Ansible, and Kubernetes. Notice period: Immediate to 30 days. Budget: 14-15 LPA max. Job Description: 3-6 years' experience in Hadoop engineering with working experience... (Immediate start)
- Job Summary: We are seeking a highly skilled and motivated Hadoop and Cloud Data Engineer to join our data engineering team. The ideal candidate will be responsible for designing, building, and maintaining big data infrastructure and cloud-based data solutions to support our...
- ...PostgreSQL, or Oracle. Strong knowledge of programming languages such as Python, Java, or Scala. Experience with big data technologies like Hadoop, Spark, or Kafka. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud. Understanding of data warehousing concepts...
- Job Role: Hadoop Administrator (role open for multiple locations; WFH and WFO). Job Description: What is your role? You will manage Hadoop clusters, data storage, server resources, and other virtual computing platforms. You will perform a variety of functions, including data... (Work from home)
- ...Azure, GCP). Experience with SDLC (Software Development Life Cycle) methodologies such as Agile Scrum and Kanban, and tools like Jira. Familiarity with data processing tools and frameworks (e.g., Airflow, Hadoop, Spark). Experience with Azure DevOps, Docker, and Kubernetes is desired. (ref: hirist.tech)
- ...best practices for scalable data pipelines. Key Skills: Apache Spark (with Scala); Scala language expertise; Apache Hive; HDFS and the Hadoop ecosystem; Oozie workflow orchestration. Preferred Qualifications: 6-8 years of experience working on Big Data platforms; hands-on...
- Job Description: We are seeking an experienced Hadoop Engineer with a strong background in Big Data technologies to join our data engineering team in Bangalore. The ideal candidate will have a deep understanding of the Hadoop ecosystem and hands-on experience designing, developing...
- Role: Hadoop Engineer. Location: Hyderabad, India. Experience: 6 years. Employment Type: Full-time. Job Overview: Are you an experienced Hadoop Engineer with a passion for data, automation, and modern DevOps practices? We're looking for a skilled professional with 6 years of strong...
- Must-have technical skills: 1. Expertise and hands-on experience with Spark DataFrames and Hadoop ecosystem components. 2. Good hands-on experience with any cloud platform (AWS/Azure/GCP). 3. Good knowledge of PySpark (Spark SQL). Good-to-have technical skills: 1. Good knowledge...
- Job Description: Key Responsibilities: Hadoop Cluster Management: install, configure, and maintain Hadoop clusters, ensuring high availability, scalability, and performance. Ecosystem Management: manage and monitor various Hadoop ecosystem components, including HDFS, YARN...
- ...procedures. Required Skills & Qualifications: 3+ years of hands-on experience in Big Data engineering. Proficiency in technologies such as Hadoop, Spark, Hive, Kafka, Flink, or Presto. Strong programming/scripting skills in Python, Java, or Scala. Experience with cloud-based... (Remote job; flexible hours)
- Job Title: ETL Developer (Hadoop + SQL). Location: Bangalore, India. Experience: 4 to 6 years. Job Type: Full-time / Permanent. Notice Period: Immediate to 30 days preferred. Key Responsibilities: Develop and maintain ETL pipelines for data ingestion, transformation, and integration...
- ...concepts and 3+ years of applied experience. Hands-on experience with streaming data applications built on open-source data pipeline products, and experience with the Hadoop data platform; strong critical thinking, communication, and teamwork skills are essential. Collaborate with line-of-business users... (Work at office)
- ...years in design and development of large-scale data-driven systems. Work experience with open-source technologies such as Apache Spark, the Hadoop stack, Kafka, Druid, etc. Work experience with NoSQL (Cassandra/HBase/Aerospike) and RDBMS systems. Great problem-solving, coding in...
- ...Redshift, Azure Data Lake. Excellent Python, PySpark, and SQL development and debugging skills; exposure to other Big Data frameworks like Hadoop Hive would be an added advantage. Experience building systems to retrieve and aggregate data from event-driven messaging frameworks (e.g....
- ...Analyst with good decision-making, analytical, and problem-solving skills. Working knowledge/experience of Big Data frameworks like Hadoop, Hive, and Spark. Hands-on experience in query languages like HQL or SQL (Spark SQL) for data exploration. Data mapping: determine the...
- Required Technical Skill Set: Hadoop, Python, PySpark, Hive. Must Have: Hands-on experience with Hadoop, Python, PySpark, Hive, and Big Data ecosystem tools. Should be able to develop and tweak queries and work on performance enhancement. Solid understanding of object-oriented programming...
- ...data-driven decisions. Design & Develop: build and maintain scalable data platform frameworks leveraging Big Data technologies (Spark, Hadoop, Kafka, Hive, etc.) and GCP services (BigQuery, Dataflow, Pub/Sub, etc.). Data Pipeline Development: develop, optimize, and manage batch... (Full time)