Average salary: Rs 2,183,333 per year
- ...Join us as a Data Engineer - PySpark Developer at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences... (Permanent employment, Immediate start)
- ...optimization. Qualifications: Required Skills and Experience: Hands-on experience developing in Python, including the Pandas, Scikit, and Math packages. Hands-on experience with PySpark preferred. Understanding of relational databases such as SQL Server or similar... (Full time, Flexible hours)
- ...Primary skill required for the data engineer role is hands-on experience with Spark, particularly PySpark. Candidates must have experience in coding data pipelines in Spark and should be comfortable with Python for automation. Other skills like Azure, Airflow, and Databricks... (Full time)
- Description: We are looking for a skilled PySpark developer with strong SQL experience and hands-on exposure to Teradata or Snowflake. The ideal candidate will work on data processing, transformation, and analytics solutions in a large-scale data environment. Key Responsibilities...
- ...revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at . Job Title: Python PySpark Developer Position: Lead Analyst Experience: 6-9 Years Category: Software Development / Engineering Shift: General Main... (Full time, Local area, Shift work)
- ...unit test plan reviews. - You will lead and guide your teams towards developing optimized, high-quality code deliverables and continual knowledge... ... - Develop and maintain scalable data pipelines using Python and PySpark. - Collaborate with data engineers and data scientists to understand...
- ...Job Summary: We are looking for a skilled and experienced PySpark Data Engineer to join our growing data engineering team. The... ...data assets. Key Responsibilities: - Design, develop, and maintain large-scale ETL pipelines using PySpark and... (Full time)
- ...Scala / PySpark Developer Experience: Minimum 5 years of relevant experience Location: Visakhapatnam (Work From Office) Role type: Full time (end client: TCS) Requirements: Bachelor's or Master's degree in Computer Science or a related field; 5 years of... (Full time, Work at office)
- ...Join us as an Application Engineer - PySpark Developer at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer... (Permanent employment, Immediate start)
- ...Role: Spark Scala Developer Experience: 3–8 years Location: Bengaluru Employment Type: Full-time. What We're Looking For: We're hiring a Spark Scala Developer who has real-world experience working in Big Data environments, on-prem and/or in the cloud... (Full time)
- ...Position Description: Azure Databricks developer with 5-6 years of experience. We are seeking a skilled Azure Databricks Developer... ...Databricks on Azure. The ideal candidate will have strong expertise in PySpark, Azure Data Lake, and data engineering best practices in a cloud... (Full time)
- ...analytics platforms. The role requires deep hands-on expertise in PySpark, Python, AWS, SQL, and modern data engineering practices, with... ...commercial, clinical, or patient data. Key Responsibilities: - Design, develop, and maintain scalable data pipelines using PySpark and Python...
- ...data pipelines. The ideal candidate should have strong expertise in Python, PySpark, and SQL, along with working knowledge of Databricks and cloud platforms like AWS or Azure. Key Responsibilities: - Develop and maintain scalable ETL/ELT pipelines using Python, PySpark, and SQL...
- ...Join us as a Data Engineer - PySpark. You will be responsible for supporting the successful delivery of Location Strategy projects to plan... ...on DataFrames, RDDs, and Spark SQL. Hands-on experience in developing, testing, and maintaining applications on AWS Cloud. Strong hold... (Immediate start)
- ...Role: Data Engineer (PySpark, SQL, GCP) Experience: 6+ Years Locations: Indore | Raipur | Gurgaon | Bangalore. We are looking for experienced Data Engineers to build and optimise scalable data pipelines and data models using modern data engineering practices. The role... (Full time, Hybrid work)
- ...lakes, and cloud storage. Pipeline Orchestration & Processing: - Develop and manage data workflows using orchestration tools such as... ... - Implement ETL/ELT pipelines using tools such as Databricks, PySpark, or Spark SQL, depending on project needs. - Handle data migrations... (Hybrid work, Work at office)
- Description: Job Title: Data Engineer (PySpark). About the Role: We are seeking a highly skilled Data Engineer with deep expertise in PySpark... ...team. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines that ensure high data...
- ...build, and optimize cloud-based data warehouses and data lakes on AWS, Azure, or GCP. - Develop and maintain scalable data pipelines using modern ETL/ELT frameworks such as dbt, PySpark/Spark SQL, and Airflow. - Implement data integration solutions supporting multiple patterns... (Hybrid work)
- ...Python for data manipulation and analysis. SQL Expertise: Advanced knowledge of SQL for querying and managing databases. PySpark: Experience with PySpark for big data processing. Hadoop: Hands-on experience with Hadoop ecosystem components. Hive... (Full time)
- ...Roles & Responsibilities: Design, build, and optimize large-scale data processing pipelines using Spark and PySpark. Develop high-quality, maintainable code using Python and object-oriented programming principles. Collaborate with cross-functional teams... (Full time, Hybrid work, Work at office, 3 days/week)
- ...with data scientists, analysts, and other engineers to design, develop, and deploy scalable data pipelines, ensuring data quality and accessibility... ...as Spark, Kafka, and Hadoop. - Strong proficiency in Python and PySpark for data engineering tasks. - Proven experience in building and...
- ...ecosystem. The ideal candidate will have deep expertise in Databricks, PySpark, and modern data lake architectures, and be adept at designing... ...pipelines and data workflows. Key Responsibilities: - Design, develop, and maintain scalable data pipelines using Azure and Databricks... (Full time)
- ...As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your... ...Professional & Technical Skills: - Must-Have Skills: Proficiency in PySpark. - Strong understanding of data modeling and database design... (Full time, Immediate start)
- ...ideal candidate will have strong programming skills in Python and PySpark, hands-on experience with cloud platforms (Azure, AWS, GCP), and... ...is highly desirable. Key Responsibilities: - Design, develop, and maintain scalable data pipelines using Python and PySpark...
- ...architecture and design artifacts for complex applications, ensuring design constraints are met. Gather, analyze, and synthesize data to develop visualizations and reporting for continuous improvement. Develop solutions using Python/PySpark in a data-driven environment... (Full time)
- ...Description: - Design, build, and maintain ETL/ELT data pipelines for finance data migration from SAP R/3 to S/4HANA. - Work extensively with PySpark and Hadoop-based environments for large-scale data processing. - Orchestrate and monitor workflows using Apache Airflow. - Collaborate...
- ...and modern data engineering tools. Key Responsibilities: - Design, develop, and optimize scalable data pipelines and ETL/ELT workflows... ...Databricks. - Develop data processing applications using Python, SQL, and PySpark (or Scala). - Orchestrate workflows using Apache Airflow... (Hybrid work, Immediate start, Remote job)
- ...- 2 (F2F). Core technical expertise required for this role: - AWS PySpark: Strong hands-on experience using PySpark within AWS environments... ...and transformation. - ETL Frameworks: Experience designing, developing, and maintaining scalable ETL frameworks for batch and streaming... (Long-term contract)
- ...applications while being accountable for ensuring design constraints are met by software code development. Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems... (Full time)
- ...Design and implement data models and data architectures to support business and analytics needs. Develop and maintain ETL pipelines using SQL, Python, PySpark, Scala, and Airflow. Work with Azure ADLS for data storage and management. Ensure data validation... (Full time, Flexible hours)