Average salary: Rs 550,000 / year
Rs 4 - 7 lakhs p.a.
...We are seeking a proactive Senior Snowflake PySpark Developer to lead the design and maintenance of data pipelines in cloud environments. You will be responsible for building robust ETL processes using Snowflake, PySpark, SQL, and AWS Glue. This role requires strong expertise...
Rs 6.5 - 17 lakhs p.a.
...Design, implement, and optimize ETL pipelines and data processing workflows using PySpark. Work on distributed computing frameworks for large-scale data processing. Collaborate with Databricks and other cloud platforms for data storage and transformation. Perform data...
- ...We are looking for a skilled PySpark Developer with hands-on experience in Reltio MDM to join our data engineering team. The ideal candidate will be responsible for designing and implementing scalable data processing solutions using PySpark and integrating with Reltio's...
- ...deliver cutting-edge analytics solutions to our clients and drive business growth. Key Responsibilities: Design and develop scalable ETL pipelines using PySpark, Python, and other relevant technologies to ingest, transform, and load data from various sources into our data... (Hybrid work)
- ...Experience using Git for collaborative development. Big Data Tools: Exposure to Hive, PySpark, or similar technologies. Roles & Responsibilities: Develop and optimize Python scripts for data processing and automation. Write efficient Spark SQL...
- Description: Job Title: Data Engineer (Python, PySpark, Airflow). Experience: 5+ Years. Location: Bangalore / Gurgaon / Pune. Mode of work... ...and cloud-based data platforms. Key Responsibilities: Design, develop, and maintain scalable ETL/ELT data pipelines using Python and PySpark...
- ...focus on data quality, governance, and performance optimization. Key Responsibilities: Data Engineering & Pipeline Development: Design, develop, and maintain scalable, reliable, and high-performance data pipelines using Python, Apache Spark, and Shell scripting. Build... (Hybrid work)
- Role: Data Engineer (PySpark, SQL, GCP). Experience: 6+ Years. Locations: Indore | Raipur | Gurgaon | Bangalore. We are looking for experienced Data Engineers to build and optimise scalable data pipelines and data models using modern data engineering practices. The role... (Full time, Hybrid work)
- We are seeking skilled Azure Data Migration Engineers to lead on-premises to Azure cloud data migration for US healthcare RCM systems. Work onsite to design scalable cloud data architectures, build ETL/ELT pipelines, and enable analytics using Azure Data Services. Key... (Hybrid work)
Rs 5 - 14 lakhs p.a.
...engineering, data warehousing, and big data processing. The ideal candidate will have strong expertise in Python, PySpark, and AWS data services to design, develop, and maintain robust data pipelines. Key Responsibilities: Design and implement end-to-end data engineering...
- ...infrastructure. The ideal candidate will have strong expertise in PySpark/Python, Databricks, and end-to-end MLOps lifecycle management... ...governance, and compliance frameworks. Key Responsibilities: Design, develop, and maintain end-to-end ML pipelines including data ingestion,...
Rs 18 - 25 lakhs p.a.
...object-oriented approach. What are we looking for: Problem-solving skills, prioritization of workload, commitment to quality; PySpark, Python, SQL (Structured Query Language). Roles and Responsibilities: In this role, you need to analyze and solve moderately complex...
