Rs 3 - 8 lakhs p.a.
...Experience: Minimum 3 to Maximum 8 yrs of exp. Location: Chennai / Hyderabad / Bangalore / Gurgaon / Pune. Mandatory Skills: Big Data | Hadoop | Scala | Spark | Spark SQL | Hive. Qualification: B.Tech / B.E / MCA / Computer Science background - Any Specialization...
- ...align your passions and skills with our vacancies, setting you on a path to exceptional career development and success. Apache Hadoop Developer at BairesDev: We are seeking an Apache Hadoop Developer with expertise in the big data ecosystem, HDFS architecture, MapReduce... (Local area, Worldwide)
Rs 3 - 8 lakhs p.a.
...Must be strong in Hadoop and Spark architecture. Hands-on knowledge of how HDFS/Hive/Impala/Spark work. Strong logical reasoning capabilities. Should have strong hands-on experience with Hive/Impala/Spark query performance tuning concepts. Good UNIX shell, Python/...
- ...Job Summary: This role in T&A DATA Technology within T&I is for the position of Hadoop developer with experience in the Hadoop ecosystem, Scala Spark, DPT and Python, with overall experience of 10+ years. Core technical skills required: work experience on Hadoop, Hive, Spark... (Long term contract, Full time, Work at office, Work from home, Flexible hours)
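Several of these listings ask for "Hive/Impala/Spark query performance tuning concepts". One core concept is partition pruning: storing a table partitioned by a commonly filtered column so that queries touch only the matching partitions. A minimal plain-Python sketch of the idea, with made-up data (real engines like Hive and Spark implement this with per-value HDFS directories):

```python
from collections import defaultdict

# Toy "table" of events, to be stored partitioned by date.
events = [
    {"date": "2024-01-01", "user": "a", "amount": 10},
    {"date": "2024-01-01", "user": "b", "amount": 20},
    {"date": "2024-01-02", "user": "a", "amount": 30},
]

# "Write" the table partitioned by the date column, the way Hive/Spark
# lay out a partitioned table as one directory per partition value.
partitions = defaultdict(list)
for row in events:
    partitions[row["date"]].append(row)

# A query that filters on the partition column reads only one partition
# (partition pruning) instead of scanning every row of the table.
def total_for_date(d):
    return sum(r["amount"] for r in partitions.get(d, []))

print(total_for_date("2024-01-01"))  # prints 30, after scanning 2 rows
```

The same effect in the engines these roles use comes from `PARTITIONED BY` in Hive DDL or `partitionBy(...)` on a Spark writer, plus filters on the partition column.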
Rs 4 - 6 lakhs p.a.
...Key Responsibilities: Develop, test, and deploy Hadoop-based data processing workflows using tools like MapReduce, Hive, Pig, and Spark. Design and implement ETL/ELT pipelines to ingest and process large volumes of structured and unstructured data. Write efficient...
- ...Job Designation: Java with Hadoop Developer. Location: Gurgaon, India. Required experience: 3 to 7 yrs. This position reports to the VP - Engineering at Airlinq and will work with the Engineering and Development teams to build and maintain a testing and... (Full time)
Rs 4 - 7 lakhs p.a.
...Key Responsibilities: Design, develop, and optimize large-scale data processing workflows using Hadoop components such as HDFS, MapReduce, Hive, Pig, and HBase. Build and maintain ETL pipelines to ingest and transform data from various sources into Hadoop clusters...
- Job Description: 5+ years of experience in the Hadoop ecosystem. 3 to 5 years of hands-on experience in architecting, designing, and implementing data ingestion pipes for batch, real-time, and streams. 3 to 5 years of hands-on experience with a proven track record in building...
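The listings above repeatedly describe ETL pipelines that ingest raw records, transform them, and load them into Hadoop clusters. The three stages can be sketched in plain Python with invented field names; in these roles the same shape would be written in PySpark or Hive rather than stdlib code:

```python
import json

# Extract: raw, semi-structured input (e.g. a line-delimited JSON feed).
raw_lines = [
    '{"id": 1, "amt": "12.50", "country": "IN"}',
    '{"id": 2, "amt": "bad",   "country": "IN"}',
    '{"id": 3, "amt": "7.25",  "country": "US"}',
]

# Transform: parse, validate, and normalize; route bad records aside
# instead of failing the whole batch.
clean, rejected = [], []
for line in raw_lines:
    rec = json.loads(line)
    try:
        rec["amt"] = float(rec["amt"])
        clean.append(rec)
    except ValueError:
        rejected.append(rec)

# Load: bucket by a partition key, standing in for a write into a
# partitioned warehouse table on HDFS.
loaded = {}
for rec in clean:
    loaded.setdefault(rec["country"], []).append(rec)

print(len(clean), len(rejected))  # prints: 2 1
```

Keeping a rejected-records path (a "dead letter" bucket) is a common requirement in the batch pipelines these postings describe, since one malformed row should not abort an ingest.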
- ...Minimum 5 years of experience in Data Engineering. Design, develop, and maintain robust data pipelines in Hadoop and related ecosystems, ensuring data reliability, scalability, and performance. Implement data ETL processes for batch and streaming analytics requirements... (Full time)
- Job Description: Roles: Set up and run Hadoop development frameworks. Collaborate with a team of business domain experts, data scientists, and application developers to identify relevant data for analysis and develop the Big Data solution. Explore and learn new technologies...
- ...applications with data pipeline open source products, and experience with the Hadoop data platform; strong critical thinking, communication, and... ...with line-of-business users and technology teams to design, develop, and test full-stack cloud data solutions. Lead and ensure the... (Work at office)
- ...data scientists and analysts to understand their data needs and develop data solutions that meet those needs. Design, build, and maintain... ...products. Strong experience with big data technologies such as Hadoop, Spark, and Hive. Proficiency in either Scala or Java programming...
- ...Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms). Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and...
- ...Cloud. The ideal candidate will have hands-on expertise in Spark, the Hadoop ecosystem, and microservices architecture, along with strong... ...skills in Scala, Python, or Java. Key Responsibilities: Design, develop, and maintain Big Data applications (batch & APIs) on AWS. Build...
Rs 3 - 12 lakhs p.a.
...compliance with data governance and security policies. Interpreting data trends and patterns to establish operational alerts. Developing analytical tools, programs, and reporting mechanisms. Conducting complex data analysis and presenting results effectively. Preparing...
- ...As a Senior Data Engineer, you will be instrumental in designing, developing, and maintaining our data infrastructure, ensuring the seamless... ...or similar). Develop and maintain scalable data solutions using Hadoop and related technologies, enabling efficient processing and storage... (Hybrid work)
- ...pipelines for fraud detection and risk analysis. The role focuses on processing user and transactional data using Spark SQL, Flink SQL, and Hadoop/HDFS, light Java work for MapReduce integration, and production support to ensure timely, accurate, and complete data... (Hybrid work, Immediate start, 3 days/week)
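The fraud-detection role above centers on aggregating transactional data with Spark SQL / Flink SQL. The shape of such an aggregation can be sketched in plain Python; the account names, amounts, and risk threshold are all invented for illustration, and at scale this would be a GROUP BY in Spark SQL rather than an in-memory loop:

```python
from collections import Counter

# Toy transaction batch: (account, amount) pairs.
txns = [("acct1", 500), ("acct1", 700), ("acct2", 50), ("acct1", 900)]

# Aggregate total spend per account, then flag accounts over a made-up
# risk threshold. The Spark SQL equivalent is roughly:
#   SELECT account, SUM(amount) AS total FROM txns
#   GROUP BY account HAVING SUM(amount) > 2000
totals = Counter()
for acct, amt in txns:
    totals[acct] += amt

THRESHOLD = 2000
flagged = [acct for acct, total in totals.items() if total > THRESHOLD]
print(flagged)  # prints: ['acct1']
```

In the streaming half of such a role, the same grouping would run over windows (e.g. Flink SQL `TUMBLE`) instead of a whole batch.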
Rs 2.5 - 5.5 lakhs p.a.
...As a Data Engineer, you will leverage the Databricks and Hadoop ecosystems to build robust and efficient data pipelines, enable real-time analytics... ...high performance, reliability, and cost-effectiveness. Develop and maintain data models and ETL (Extract, Transform, Load) processes...
- Description: Roles & Responsibilities: Build a Big Data platform for the APAC region. Develop data pipelines to load different kinds of data (both structured and unstructured) into the Hadoop system. Migrate existing datasets from the data warehouse to the data lake. Build views...
- ...As a Senior Engineer in the Data Engineering & Analytics team, you will develop data & analytics solutions that sit atop vast datasets gathered... ...Experience: 8+ years. Working proficiency in using Python, PySpark, SQL, and Hadoop platforms to build Big Data products & platforms. Experience with...
- ...Position Summary: We are seeking a Staff Product Support Engineer - Hadoop SME (Subject Matter Expert) who will be responsible for designing, optimising, migrating, and scaling Hadoop and Spark-based data processing systems. This role involves hands-on experience with Hadoop... (Shift work)
- ...Title: Data Engineer. Skills: Hadoop, Hive, PySpark, SQL. Experience: 4+ yrs. Qualification: Any degree. Notice period: Immediate joiner. Location: Chennai, Bangalore. Please do not apply if your profile does not meet the job description or required qualifications... (Immediate start)
Rs 4 - 7 lakhs p.a.
...Key Responsibilities: Design, develop, and optimize big data pipelines and ETL workflows using PySpark and Hadoop (HDFS, MapReduce, Hive, HBase). Develop and maintain data ingestion, transformation, and integration processes on Google Cloud Platform services such...
- ...Engineer with strong expertise across traditional big-data platforms (Hadoop ecosystem) and modern cloud-native architectures (AWS)... ...Kafka) and AWS (Glue, EMR, Lambda, Step Functions, Redshift). Develop distributed data processing solutions using PySpark, Spark SQL... (Hybrid work, Local area)
- You will join a team of highly skilled Hadoop engineers who are responsible for delivering Acceldata's support services in vendor-agnostic environments. As a Site Reliability Engineer, you will actively learn from experienced team members, contributing to improving the availability...
- ...platforms. The ideal candidate will bring deep technical expertise in Hadoop, Spark, Kafka, Big Data technologies, Java 8+, distributed... ...mentoring. Key Responsibilities: Architect, design, and develop scalable, high-performance data and AI platform solutions. Build... (Full time, Hybrid work, Work at office, Local area, Shift work)
- ...Job Title: Platform Support Engineer (Hadoop / Data Pipeline Operations). Location: Remote. Job Type: Full-time with BayOne Solutions... ...user acceptance testing before deploying solutions to the field. Develop and implement procedures for configuration and testing of systems... (Full time, Local area, Remote job)
