Average salary: Rs 1,150,000 per year
- ...experience with big data technologies and associated tools such as Hadoop, Unix, HDFS, Hive, Impala, etc. - Proficient in using Spark/Scala... ...and scalable applications for data analytics. - Work with other Big Data developers to ensure that all data solutions are consistent... [Big Data]
- ...Requirement: No remote; all 5 days work from office. Working experience with Hadoop, Hive SQL, Spark, and Big Data ecosystem tools. Should be able to tune queries and work on performance enhancement. The candidate will be responsible for delivering code and setting up the environment... [Big Data | Work at office]
- Job Description: - 5+ years of experience in the Hadoop ecosystem - 3 to 5 years of hands-on experience in architecting, designing, and implementing... ...(Azure) platform. - 3 to 5 years of hands-on experience with Big Data tools such as Sqoop, Hive, Spark, Scala, HBase, MapReduce, etc. - 1... [Big Data]
- Description: - Skill: Big Data + PySpark - Location: Hyderabad only - Experience: 5-9 years - Notice period: Immediate to 30 days - Interview... ...F2F (mandatory) - Work mode: Hybrid - Skills required: Big Data (Hadoop, Hive, Impala, Spark), PySpark, Python, Oracle, Exadata (RDBMS),... [Big Data | Hybrid work | Immediate start]
- ...technologies to meet those needs. Design and implement scalable, high-performance big data architectures using technologies such as Hadoop, Spark, and NoSQL databases. Extract, transform, and load large data sets into a big data platform for analysis and reporting... [Big Data]
- ...services and APIs to facilitate secure and efficient data exchange. Key Responsibilities: - Develop data processing applications using Spark and Hadoop - Write MapReduce jobs and data transformation logic - Implement machine learning models and analytics solutions - Code optimisation and... [Big Data]
- ...Big Data Developer (Spark/Hadoop): We are seeking an experienced Big Data Developer with expertise in Hadoop, Spark, and Kafka to take complete ownership of the software development lifecycle, from requirement gathering to final deployment. This role is ideal for a proactive... [Big Data | Permanent employment | Immediate start]
- ...Redshift, Azure Data Lake. Excellent Python, PySpark, and SQL development and debugging skills; exposure to other Big Data frameworks like Hadoop Hive would be an added advantage. Experience building systems to retrieve and aggregate data from event-driven messaging frameworks (e.g.... [Big Data]
- ...navigate their next in their digital transformation journey, this is the place for you. Technical Requirements: ~ Primary skills: Hadoop, Hive, HDFS. Additional Responsibilities: Knowledge of more than one technology; basics of architecture and design... [Big Data]
- ...Immediate to 15 days preferred. Job Overview: We are looking for a highly skilled Senior Data Engineer with strong hands-on expertise in the Hadoop ecosystem and PySpark for large-scale data processing. The ideal candidate should have deep technical proficiency in distributed data... [Big Data | Full time | Immediate start]
- ...Key Responsibilities: Data Engineering & Development: Design, build, and maintain ETL/ELT pipelines using Hadoop ecosystem tools. Write complex Hive queries for data transformation and analysis. Work with HDFS for storage and efficient data access. Develop... [Big Data]
- ...functional teams on data acquisition, transformation, and processing. - Perform ETL workflows, data ingestion, and data processing using the Hadoop ecosystem. - Build and maintain data solutions ensuring performance, scalability, and reliability. - Monitor, troubleshoot, and tune... [Big Data]
- ...Key Responsibilities: Design, develop, and optimize large-scale data processing workflows using Hadoop components such as HDFS, MapReduce, Hive, Pig, and HBase. Build and maintain ETL pipelines to ingest and transform data from various sources into Hadoop clusters... [Big Data]
- ...# Requirement template: PySpark, Hadoop. Requirement identifier (serial no.), no. of positions, prepared... ...Spark Streaming. Experience with Hadoop, HDFS, Hive, and other Big Data technologies. Familiarity with data warehousing and ETL concepts... [Big Data | Full time | Work at office | Remote job]
- ...to align your passions and skills with our vacancies, setting you on a path to exceptional career development and success. Apache Hadoop Developer at BairesDev: We are seeking an Apache Hadoop Developer with expertise in the big data ecosystem, HDFS architecture,... [Big Data | Local area | Worldwide]
- Description: - Provides technical leadership in the Big Data space (Hadoop stack: Spark, M/R, HDFS, Hive, HBase, etc.) and contributes to open-source Big Data technologies. Must have: Operating knowledge of cloud computing platforms (GCP, especially BigQuery, Dataflow, Dataproc... [Big Data]
- ...processing. - Implement and manage Big Data orchestration tools such as Airflow, Spark on Kubernetes, YARN, and Oozie. - Work extensively with Hadoop, Kafka, Spark, and Spark Structured Streaming to process real-time and batch data. - Ensure adherence to SOLID and DRY principles,... [Big Data | Work at office]
- ...maintain scalable big data systems and pipelines. - Implement data processing frameworks and optimize large datasets using tools such as Hadoop, Spark, and Hive. - Develop and maintain ETL processes to ensure data availability, accuracy, and quality for downstream applications. -... [Big Data]
- Job Role: Hadoop Administrator (role open for multiple locations, WFH and WFO). Job description: What is your role? - You will manage Hadoop... ...in Cloudera Hadoop. - Work on Hadoop, Python, PySpark, Hive SQL, and Big Data ecosystem tools. - Experience in working with teams in a complex... [Big Data | Work from home]
- ...Responsibilities: Provide technical support and troubleshooting for Big Data applications and systems built on the Hadoop ecosystem. Monitor system performance, analyze logs, and identify potential issues before they impact services. Collaborate with engineering... [Big Data]
- ...Key Responsibilities: Develop, test, and deploy Hadoop-based data processing workflows using tools like MapReduce, Hive, Pig, and Spark. Design and implement ETL/ELT pipelines to ingest and process large volumes of structured and unstructured data. Write efficient... [Big Data]
- ...Qualifications and Skills: Education: Bachelor's in Computer Science, Engineering, or a related field. Experience: 4-7 years in data integration, Hadoop/Spark deployments, or platform configuration. Must-have technical skills: - Hadoop, Spark, Hive, Trino, Kafka, Airflow. - Python, Bash/... [Big Data]
- ...related roles. - Strong programming expertise in Python, PySpark, or Scala. - Proven experience with SQL and big data frameworks (Spark, Hadoop, Kafka). - Hands-on experience with cloud-based data platforms: Azure Data Factory, Databricks, AWS Glue, Snowflake, or GCP Dataflow. -... [Big Data]
- ...Scala or Python. ~ Good experience with database technologies (SQL, NoSQL), data warehousing solutions, and big data technologies (Hadoop, Spark). ~ Proficiency in programming languages such as Python, Java, or Scala. ~ Optimization and performance tuning of Spark applications... [Big Data | Permanent employment | Flexible hours]
- ...Teamware Solutions is seeking a skilled professional for the Big Data and Hadoop Ecosystems Engineer role. This position is crucial for designing, building, and maintaining scalable big data solutions. You'll work with relevant technologies, ensuring smooth data operations... [Big Data]
- ...high accuracy and performance in real-world scenarios. Preferred Skills: Experience with Big Data technologies (e.g., Hadoop, Spark). Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). Experience in automating... [Big Data]
- ...ETL tools (Airflow, dbt, etc.) and cloud platforms (AWS, GCP, or Azure). - Hands-on knowledge of Big Data technologies (Spark, Kafka, Hadoop, etc.). - Familiarity with data warehouses like Redshift, Snowflake, or BigQuery. Preferred Qualifications: - Experience in live streaming... [Big Data]
- ...Job description. Job Summary: We are looking for a Big Data Developer with expertise in Apache Spark and the Hadoop ecosystem. Required Skills: ~ 5+ years of experience in big data technologies. ~ Strong programming skills in Java (Scala/Python is a plus). ~ Hands... [Big Data | Permanent employment | Full time]
- ...Engineer fields. Experience in building and optimizing big data pipelines, architectures, and data sets: - Experience with big data tools: Hadoop, Spark, Kafka, etc. - Experience with relational SQL databases such as PostgreSQL, MySQL, etc. - Experience with stream-processing... [Big Data]
- # No. of years of experience: 5+. Detailed job description / skill set: # Big Data Testing - Hadoop, HDFS, Hive, Kafka, Spark, SQL, UNIX. Mandatory skills: # Big Data Testing - Hadoop, HDFS, Hive, Kafka, Spark, SQL, UNIX. Good-to-have skills: # Big Data Testing - Hadoop, HDFS,... [Big Data]
