Average salary: Rs 1,318,570 per year
- ...Join us as a Data Engineer - PySpark Developer at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences... (Permanent employment · Immediate start)
- ...optimization. Qualifications: Required Skills and Experience: Hands-on experience developing in Python, including the Pandas, Scikit, and Math packages. Hands-on experience with PySpark preferred. Understanding of relational databases such as SQL Server or similar... (Full time · Flexible hours)
- ...Primary skill required for the data engineer role is hands-on experience with Spark, particularly PySpark. Candidates must have experience coding data pipelines in Spark and should be comfortable with Python for automation. Other skills like Azure, Airflow, and Databricks... (Full time)
- ...Job Summary: We are looking for a skilled and experienced PySpark Data Engineer to join our growing data engineering team. The... ...data assets. Key Responsibilities: · Design, develop, and maintain large-scale ETL pipelines using PySpark and... (Full time)
- ...Join us as an Application Engineer - PySpark Developer at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer... (Permanent employment · Immediate start)
- ...lakes, and cloud storage. Pipeline Orchestration & Processing: - Develop and manage data workflows using orchestration tools such as... ... - Implement ETL/ELT pipelines using tools such as Databricks, PySpark, or Spark SQL, depending on project needs. - Handle data migrations... (Hybrid work · Work at office)
- ...Join us as an Engineer - PySpark/AWS at Barclays, responsible for supporting the successful delivery of location strategy projects to plan... ...E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they develop... (Permanent employment · Immediate start)
- ...analytics platforms. The role requires deep hands-on expertise in PySpark, Python, AWS, SQL, and modern data engineering practices, with... ...commercial, clinical, or patient data. Key Responsibilities: - Design, develop, and maintain scalable data pipelines using PySpark and Python...
- ...ecosystem. The ideal candidate will have deep expertise in Databricks, PySpark, and modern data lake architectures, and be adept at designing... ...pipelines and data workflows. Key Responsibilities: - Design, develop, and maintain scalable data pipelines using Azure and Databricks... (Full time)
- ...As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your... ...Professional & Technical Skills: - Must-Have Skills: Proficiency in PySpark. - Strong understanding of data modeling and database design... (Full time · Immediate start)
- ...# Experience: At least 5 years working as a Data Engineer. # Technical Proficiency: Experience with programming languages such as PySpark, Python, Scala, or Java; strong SQL skills; deep experience with big data frameworks such as Apache Spark. # ETL & Data Pipelines:... (Full time · Flexible hours)
- ...5 Years. Location: Remote (Gurgaon, Pune, Bangalore). Key skills: PySpark, Azure (Mandatory). Notice Period: Immediate to 15 Days or serving... ...data lakes - Ensuring data is properly secured and protected - Developing and implementing data governance policies and procedures - Collaborating... (Immediate start · Remote job)
- ...Joiners to 15 Days only. Job Description: We are looking for a skilled Junior Data Engineer with strong hands-on experience in Databricks, PySpark, Python, and SQL to build and maintain scalable data pipelines. The role involves working in a lakehouse environment and... (Immediate start)
- ...using cloud technologies such as Databricks, EMR, Athena, PySpark, S3, AWS Lambda, etc. Strong foundation in database concepts and... ...with orchestration tools like Apache Airflow. - Expertise in developing ETL workflows with complex transformations such as SCD and deduplication... (Full time)
- Description: Job Summary: We are looking for an experienced AWS Data Engineer Lead (PySpark Developer) with strong knowledge of AWS services, particularly AWS Lambda, to design and implement scalable, high-performance data processing workflows for large and complex datasets...