Average salary: Rs 1,175,000 per year
Rs 6.5 - 17 lakhs p.a.
- ...Design, implement, and optimize ETL pipelines and data processing workflows using PySpark. Work on distributed computing frameworks for large-scale data processing. Collaborate with Databricks and other cloud platforms for data storage and transformation. Perform data...
- ...We are looking for a skilled PySpark Developer with hands-on experience in Reltio MDM to join our data engineering team. The ideal candidate will be responsible for designing and implementing scalable data processing solutions using PySpark and integrating with Reltio's...
Rs 4 - 7 lakhs p.a.
- ...We are seeking a proactive Senior Snowflake PySpark Developer to lead the design and maintenance of data pipelines in cloud environments. You will be responsible for building robust ETL processes using Snowflake, PySpark, SQL, and AWS Glue. This role requires strong expertise...
- ...deliver cutting-edge analytics solutions to our clients and drive business growth. Key Responsibilities: Design and develop scalable ETL pipelines using PySpark, Python, and other relevant technologies to ingest, transform, and load data from various sources into our data... (Hybrid work)
- ...with deep knowledge of AI/ML frameworks and libraries such as TensorFlow, PyTorch, or Scikit-learn. Five years of experience with PySpark for distributed data processing, performance optimisation, and integration with AI and data engineering workflows. Practical experience... (Permanent employment, Full time)
- Join us as a Software Engineer (PySpark). This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority. It's a chance to hone your existing technical... (Permanent employment, Full time)
Rs 5 - 14 lakhs p.a.
- ...engineering, data warehousing, and big data processing. The ideal candidate will have strong expertise in Python, PySpark, and AWS data services to design, develop, and maintain robust data pipelines. Key Responsibilities: Design and implement end-to-end data engineering...
- ...ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring... (Full time, No agency, Hybrid work, Local area, Work from home, Worldwide, Flexible hours)
Rs 18 - 25 lakhs p.a.
- ...object-oriented approach. What are we looking for: problem-solving skills, prioritization of workload, commitment to quality; PySpark, Python, SQL (Structured Query Language). Roles and Responsibilities: In this role, you need to analyze and solve moderately complex...
- ...Role: Data Engineer (PySpark, SQL, GCP). Experience: 6+ years. Locations: Indore | Raipur | Gurgaon | Bangalore. We are looking for experienced Data Engineers to build and optimise scalable data pipelines and data models using modern data engineering practices. The role... (Full time, Hybrid work)
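Several of the listings above ask for scalable ETL pipelines built with PySpark. As a rough illustration of the extract-transform-load pattern such roles involve, here is a minimal sketch in plain Python (no Spark dependency); the function names, field names, and sample data are all hypothetical, and the comments note the PySpark DataFrame operation each step stands in for.

```python
# Minimal ETL sketch illustrating the filter/aggregate pattern the listings
# describe. In a real PySpark pipeline each step would be a DataFrame
# operation (spark.read, filter, groupBy/agg, write); everything here is a
# hypothetical stand-in, not any employer's actual pipeline.
from collections import defaultdict

def extract(rows):
    # Extract: parse raw "name,amount" records (stand-in for spark.read.csv).
    for line in rows:
        name, amount = line.split(",")
        yield {"name": name.strip(), "amount": float(amount)}

def transform(records, min_amount=0.0):
    # Transform: drop invalid rows, then aggregate totals per name
    # (stand-in for df.filter(...).groupBy("name").agg(sum("amount"))).
    totals = defaultdict(float)
    for rec in records:
        if rec["amount"] > min_amount:
            totals[rec["name"]] += rec["amount"]
    return dict(totals)

def load(totals):
    # Load: emit sorted results (stand-in for df.write to a warehouse table).
    return sorted(totals.items())

raw = ["a, 10", "b, 5", "a, 2", "c, -1"]
result = load(transform(extract(raw)))
print(result)  # → [('a', 12.0), ('b', 5.0)]
```

In PySpark the same logic runs distributed across a cluster, which is what makes it suitable for the large-scale data processing these roles mention.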
