Average salary: Rs 1,154,998 per year
- ...Requirement: No remote; all 5 days work from office. Working experience with Hadoop, Hive SQL, Spark, and Big Data ecosystem tools. Should be able to tune queries and work on performance enhancement. The candidate will be responsible for delivering code, setting up the environment...
- ...passions and skills with our vacancies, setting you on a path to exceptional career development and success. Senior Apache Hadoop Developer at BairesDev: We are seeking a Senior Apache Hadoop Developer with expertise in the big data ecosystem, HDFS architecture, MapReduce...
- ...Key Responsibilities: Develop, test, and deploy Hadoop-based data processing workflows using tools like MapReduce, Hive, Pig, and Spark. Design and implement ETL/ELT pipelines to ingest and process large volumes of structured and unstructured data. Write efficient...
- ...What You'll Do: Design, develop, and code Hadoop applications to process and analyze large data collections. Create and maintain scalable data processing frameworks for various business needs. Extract, transform, and load (ETL) data and isolate data clusters for analysis...
- ...decommission legacy HDP/Cloudera clusters during transition. • Manage HDFS/YARN/Hive/HBase • Plan data export/import • Migrate Ranger/Atlas policies • Advise on transition architecture • Optimize clusters. Hadoop, Hive, HDFS, YARN, Ranger, Atlas, Kerberos, Linux scripting...
- ...test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables and continual knowledge... ...place for you. Technical Requirements: Primary skills: Hadoop, Hive, HDFS. Additional Responsibilities: Knowledge...
- Key responsibilities: Design, code, test, and debug software applications and systems. Collaborate with cross-functional teams to identify and resolve software issues. Write clean, efficient, and well-documented code. Stay current with emerging technologies and industry...
- ...you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Python/Hadoop Developer to join our team in Hyderabad, Telangana (IN-TG), India (IN). NTT DATA Services currently seeks a Python Hadoop Developer to...
- Description: We are looking for a highly skilled Senior Data Engineer (Hadoop) to join our team in Bangalore. The ideal candidate will design, develop, and optimize data solutions within the Hadoop ecosystem, ensuring high performance, scalability, and data quality across enterprise...
- ...Redshift, Azure Data Lake. Excellent Python, PySpark, and SQL development and debugging skills; exposure to other Big Data frameworks like Hadoop/Hive would be an added advantage. Experience building systems to retrieve and aggregate data from event-driven messaging frameworks (e.g....
- Job Description: - 5+ years of experience in the Hadoop ecosystem. - 3 to 5 years of hands-on experience in architecting, designing, and implementing data ingestion pipelines for batch, real-time, and streaming workloads. - 3 to 5 years of hands-on experience with a proven track record in building...
- ...F2F (mandatory). - Work Mode: Hybrid. - Skills required: Big Data (Hadoop, Hive, Impala, Spark), PySpark, Python, Oracle, Exadata (RDBMS),... ...big data infrastructure. Roles and Responsibilities: - Design and develop scalable Big Data solutions using the Hadoop and Spark ecosystems...
- ...potential. Title and Summary: Senior Data Engineer (On-Premises: Hadoop, Spark, Python, SQL). Who is Mastercard: Mastercard is a... ...works to improve experience and metrics in ownership area. Develop a complete understanding of the end-to-end technical architecture and dependency...
- ...building new solutions from the ground up. This role will work with developers, architects, product managers, and data analysts on data initiatives... ...and data sets: - Experience with big data tools: Hadoop, Spark, Kafka, etc. - Experience with relational SQL databases, such...
- ...skilled Senior Data Engineer with strong hands-on expertise in the Hadoop ecosystem and PySpark for large-scale data processing. The ideal... ...and maintain data pipelines using Hadoop, Spark, and Hive. - Develop and optimize large-scale ETL/ELT workflows for structured and unstructured...
- ...Key Responsibilities: Design, develop, and optimize large-scale data processing workflows using Hadoop components such as HDFS, MapReduce, Hive, Pig, and HBase. Build and maintain ETL pipelines to ingest and transform data from various sources into Hadoop clusters...
- ...Hyderabad. Experience: 6-10 years. Key Responsibilities: - Design, develop, and maintain scalable big data systems and pipelines. - Implement... ...processing frameworks and optimize large datasets using tools such as Hadoop, Spark, and Hive. - Develop and maintain ETL processes to ensure...
- ...distributed computing concepts. Key Responsibilities: - Design, develop, and optimize data pipelines and workflows for large-scale data processing... ..., Spark on Kubernetes, YARN, and Oozie. - Work extensively with Hadoop, Kafka, Spark, and Spark Structured Streaming to process real-...
- ...warehouses using modern cloud technologies (Azure, AWS, or GCP). - Develop and manage ETL/ELT workflows using tools such as Databricks,... ...Scala. - Proven experience with SQL and big-data frameworks (Spark, Hadoop, Kafka). - Hands-on experience with cloud-based data platforms: Azure...
- Description: - Provides technical leadership in the Big Data space (Hadoop stack: Spark, M/R, HDFS, Hive, HBase, etc.) and contributes... ...cutting-edge open-source technologies and software paradigms. - Developing and implementing an overall organizational data strategy that is...
- ...Responsibilities: Data Pipeline Design & Development: - Design, develop, test, and maintain end-to-end batch and streaming data pipelines... ...based data platforms. - Strong hands-on experience with the Cloudera/Hadoop ecosystem, Apache Spark, and Kafka (Confluent or Apache) for...
- ...Qualifications and Skills: Education: Bachelor's in Computer Science, Engineering, or a related field. Experience: 4-7 years in data integration, Hadoop/Spark deployments, or platform configuration. Must-Have Technical Skills: - Hadoop, Spark, Hive, Trino, Kafka, Airflow. - Python, Bash/...
- ...analysis, and data exploration to extract valuable insights. Develop and optimize Machine Learning models to achieve high accuracy... ...Skills: Experience with Big Data technologies (e.g., Hadoop, Spark). Familiarity with containerization and orchestration...
- ...Warehouse Hive team looking for a passionate and seasoned Software Developer to join our growing engineering team. This group is targeting the... ...product is built on open-source technologies like Hive, Impala, Hadoop, Kudu, Spark, and many more, providing unlimited learning...
- ...ensuring optimization and reliability. Key Responsibilities: - Design and build distributed data processing systems using Spark and Hadoop. - Develop and optimize Spark applications for performance and scale. - Build ETL/ELT pipelines for ingestion and transformation of large...
- ...insights and product decisions across eloelo. Responsibilities: - Develop, maintain, and optimize ETL/ELT pipelines and data workflows. - Manage... ...). - Hands-on knowledge of Big Data technologies (Spark, Kafka, Hadoop, etc.). - Familiarity with data warehouses like Redshift,...
- ...Job Title: Senior Data Engineer/Developer. Number of Positions: 2. Job Description: The Senior Data Engineer will be responsible for... ...Data Engineer or a similar role. Experience with big data tools: Hadoop, Spark, Kafka, Ansible, Chef, Terraform, Airflow, and Protobuf...
- ...applications. Expertise in big data engineering with knowledge of Hadoop, Apache Spark, Python, and SQL. Proficiency in creating and... ...managing large-scale data pipelines and ETL processes. Experience developing and maintaining Spark pipelines and productizing AI/ML models...
- ...Key Responsibilities: Design, develop, and optimize big data pipelines and ETL workflows using PySpark and Hadoop (HDFS, MapReduce, Hive, HBase). Develop and maintain data ingestion, transformation, and integration processes on Google Cloud Platform services such...
- ...product managers, engineering leaders, architects, and software developers and business operations on the definition and delivery of highly... ...HBase) & building pipelines • Expertise and deep understanding of the Hadoop ecosystem, including HDFS, YARN, MapReduce, and tools like Hive, Pig/...
