Rs 3 - 8 lakhs p.a.
...Experience: minimum 3 to maximum 8 years. Location: Chennai / Hyderabad / Bangalore / Gurgaon / Pune. Mandatory Skills: Big Data | Hadoop | Scala | Spark | Spark SQL | Hive. Qualification: B.Tech / B.E / MCA, Computer Science background - any specialisation...Big Data
- Job Description: 5+ years of experience in the Hadoop ecosystem; 3 to 5 years of hands-on experience in architecting, designing, and implementing... ...(Azure) platform; 3 to 5 years of hands-on experience with Big Data tools such as Sqoop, Hive, Spark, Scala, HBase, MapReduce, etc.; 1...Big Data
- ...align your passions and skills with our vacancies, setting you on a path to exceptional career development and success. Apache Hadoop Developer at BairesDev: we are seeking an Apache Hadoop Developer with expertise in the big data ecosystem, HDFS architecture, MapReduce...Big Data | Local area | Worldwide
Rs 4 - 6 lakhs p.a.
...Key Responsibilities: Develop, test, and deploy Hadoop-based data processing workflows using tools like MapReduce, Hive, Pig, and Spark. Design and implement ETL/ELT pipelines to ingest and process large volumes of structured and unstructured data. Write efficient...Big Data
- ...Job Designation: Java with Hadoop Developer. Location: Gurgaon, India. Required Experience: 3 to 7 years. This position reports to the VP – Engineering at Airlinq and will work with the Engineering and Development teams to build and maintain a testing and...Big Data | Full time
Rs 4 - 7 lakhs p.a.
...Key Responsibilities: Design, develop, and optimize large-scale data processing workflows using Hadoop components such as HDFS, MapReduce, Hive, Pig, and HBase. Build and maintain ETL pipelines to ingest and transform data from various sources into Hadoop clusters....Big Data
- ...data scientists and analysts to understand their data needs and develop data solutions that meet those needs. Design, build, and maintain... ...products. Strong experience with big data technologies such as Hadoop, Spark, and Hive. Proficiency in either Scala or Java programming...Big Data
- Job Description: Roles: Set up and run Hadoop development frameworks. Collaborate with a team of business domain experts, data scientists, and application developers to identify relevant data for analysis and develop the Big Data solution. Explore and learn new technologies...Big Data
Rs 3 - 12 lakhs p.a.
...compliance with data governance and security policies. Interpreting data trends and patterns to establish operational alerts. Developing analytical tools, programs, and reporting mechanisms. Conducting complex data analysis and presenting results effectively. Preparing...Big Data
- ...Cloud. The ideal candidate will have hands-on expertise in Spark, the Hadoop ecosystem, and microservices architecture, along with strong... ...skills in Scala, Python, or Java. Key Responsibilities: Design, develop, and maintain Big Data applications (batch & APIs) on AWS. Build...Big Data
Rs 2.5 - 5.5 lakhs p.a.
...As a Data Engineer, you will leverage the Databricks and Hadoop ecosystems to build robust and efficient data pipelines, enable real-time analytics... ...high performance, reliability, and cost-effectiveness. Develop and maintain data models and ETL (Extract, Transform, Load) processes...Big Data
- ...pipelines for fraud detection and risk analysis. The role focuses on processing user and transactional data using Spark SQL, Flink SQL, and Hadoop/HDFS, with light Java work for MapReduce integration, plus production support to ensure timely, accurate, and complete data...Big Data | Hybrid work | Immediate start | 3 days week
- Description: Roles & Responsibilities: Build a Big Data platform for the APAC region. Develop data pipelines to load different kinds of data (both structured and unstructured) into the Hadoop system. Migrate existing datasets from the data warehouse to the data lake. Build views...Big Data
- ...Position Summary: We are seeking a Staff Product Support Engineer - Hadoop SME (Subject Matter Expert) who will be responsible for designing, optimising, migrating, and scaling Hadoop and Spark-based data processing systems. This role involves hands-on experience with Hadoop...Big Data | Shift work
Rs 3 - 8 lakhs p.a.
...Must be strong in Hadoop and Spark architecture. Hands-on knowledge of how HDFS/Hive/Impala/Spark work. Strong logical reasoning capabilities. Should have strong hands-on experience with Hive/Impala/Spark query performance tuning concepts. Good UNIX shell, Python/...Suggested
- ...Job Summary: This role in T&A Data Technology within T&I is for the position of Hadoop developer with experience in the Hadoop ecosystem, Scala Spark, DPT, and Python, with overall experience of 10+ years. Core Technical Skills Required: work experience on Hadoop, Hive, Spark...Long term contract | Full time | Work at office | Work from home | Flexible hours
- ...Senior Engineer in the Data Engineering & Analytics team, you will develop data & analytics solutions that sit atop vast datasets gathered... ...: 8+ years. Working proficiency in Python, PySpark, SQL, and Hadoop platforms to build Big Data products & platforms. Experience with...Big Data
Rs 4 - 7 lakhs p.a.
...Key Responsibilities: Design, develop, and optimize big data pipelines and ETL workflows using PySpark and Hadoop (HDFS, MapReduce, Hive, HBase). Develop and maintain data ingestion, transformation, and integration processes on Google Cloud Platform services such...Big Data
- ...platforms. The ideal candidate will bring deep technical expertise in Hadoop, Spark, Kafka, Big Data technologies, Java 8+, distributed... ...mentoring. Key Responsibilities: Architect, design, and develop scalable, high-performance data and AI platform solutions. Build...Big Data | Full time | Hybrid work | Work at office | Local area | Shift work
- ...Engineer with strong expertise across traditional big-data platforms (Hadoop ecosystem) and modern cloud-native architectures (AWS)... ...Kafka) and AWS (Glue, EMR, Lambda, Step Functions, Redshift). Develop distributed data processing solutions using PySpark, Spark SQL...Big Data | Hybrid work | Local area
Rs 3 - 10 lakhs p.a.
No. of years of experience: 5+. Detailed job description / skill set: Big Data testing - Hadoop, HDFS, Hive, Kafka, Spark, SQL, UNIX. Mandatory skills: Big Data testing - Hadoop, HDFS, Hive, Kafka, Spark, SQL, UNIX. Good-to-have skills: Big Data testing - Hadoop, HDFS,...Big Data
- ...Job Title: Platform Support Engineer (Hadoop / Data Pipeline Operations). Location: Remote. Job Type: Full-time with BayOne Solutions... ...user acceptance testing before deploying solutions to the field. Develop and implement procedures for configuration and testing of systems...Big Data | Full time | Local area | Remote job
Rs 3 - 7 lakhs p.a.
...Expertise in PySpark, Spark SQL, DataFrames, SQL Server, etc. Mandatory skills: PySpark/Spark SQL, DataFrames, SQL Server, Big Data, NumPy, Pandas, Machine Learning, Data Science. Monitor and curate the most important datasets in one or multiple projects, adhering...Big Data
- Palantir, Big Data, PySpark. In-depth knowledge of the Palantir Foundry platform, including data integration (Data Connections), data transformation... ...Knowledge of AIP is an added advantage. Ability to develop value-creating strategies and models that enable clients to...Big Data
- PySpark, Big Data, Spark, Python, Databricks, AWS. Team Management: Lead and manage a team of technical professionals – Data Engineers and Application Developers – ensuring effective collaboration and productivity. Client Interaction: Serve as the primary point of contact...Big Data
- ...pipelines using Azure Data Factory, Databricks, and Spark. Develop and optimize Spark data workflows using Scala for large-scale data... ...and Parquet within a Big Data ecosystem. Job Requirements: Big Data, Spark, Scala, Spark Streaming, Kafka, Azure, Databricks, SQL. Big Data
Rs 7 - 12 lakhs p.a.
...experience in technology and development. Technical skillset required: (1) Language: Java or Python; (2) experience with Big Data/Hadoop/Spark/Kafka; (3) experience with APIs and microservices architecture; (4) UI development and integration experience would be a...Big Data
- ...and lookups. 2. Client description. 3. Requirements: Design, develop, and execute data quality test plans for big data systems... ...processes, standards, and best practices. 3. Details on tech stack: Big Data architecture understanding; exposure to Splunk...Big Data
Rs 5 - 9 lakhs p.a.
...Skills: Big Data, Scala & Spark, Cloud (AWS or Azure). Big Data Technologies - Lead Data Engineer (8+ years of overall experience in Data... ...technologies). Primary: Spark (including streaming), Scala, Hadoop, HBase, Kafka, Delta (preferably CDP), SQL. Knowledge / Experience...Big Data
- Description: Title: Big Data Scala Spark Developer - Hadoop. Experience: 4 to 8 years. Location: Pune, Hyderabad. Notice period: immediate to 30 days. Job Description: 4 years of experience in Scala, Spark, Big Data, Hadoop, Hive. Must have good technical experience and should be able to provide...Big Data | Immediate start
