Average salary: Rs 1,318,570 per year
- ...data extract, transform, and load (ETL) steps conform to specified exit criteria and standards. Design and execute test scenarios; develop and document data test plans based on requirements and technical specifications. Identify, analyze, and document all defects. Perform...
- ...ROLE SUMMARY: We are seeking a highly skilled PySpark Developer with hands-on experience in Databricks to join Sompo's IT Systems Development unit in an offshore capacity. This role focuses on designing, building, and optimizing large-scale data pipelines and processing solutions...
- ...relevant minimum 5+ years) Detailed job description - Skill set: Technical Lead. Candidate should have good hands-on experience in PySpark, preferably more than 7 years. Should have a very good understanding of Agile development methodologies and excellent team-handling...
- ...consider three years of progressive experience in the specialty in lieu of every year of education. At least 5 years of experience in PySpark and Spark with Hadoop distributed frameworks, handling large amounts of big data using the Spark and Hadoop ecosystems in data pipeline...
- ...Key skills & responsibilities: Strong expertise in PySpark and Apache Spark for batch and real-time data processing. Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation. Proficiency in Python for scripting...
- ...Roles and responsibilities: Design, develop, test, deploy, and maintain large-scale data processing pipelines using PySpark. Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs. Develop high-quality code...
- ...Job title: Python, PySpark Developer with Databricks. Skills: Python, PySpark, Databricks, SQL, Azure. Experience required: 8-10 years. Location: Indore, Pune. Notice period: Immediate to 15 days, or serving notice. Overview: We are seeking a highly skilled Python...
- ...Join us as an Application Engineer - PySpark Developer at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer...
- ...gaps and disruptions of the future. We are looking to hire PySpark professionals in the following areas. Job description - Role: Data Engineer. Roles and responsibilities: Design, develop, and maintain scalable data pipelines using Spark (PySpark or Spark...
- ...Job title: Data Engineer - Python / PySpark. Number of positions: 4. Experience: 3 to 10 years. Location: Pune (client - USA). Work model: Full-time, hybrid work, 3 days/week in office (client location). Job type: Contract-to-Hire. Client domain: Utilities...
- ...Join us as a Data Engineer - PySpark at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and... ...ensuring unparalleled customer experiences. As part of a team of developers, you will deliver the technology stack, using strong analytical and...
- ...successful as a Data Engineer, you should have experience with: Hands-on experience in PySpark and strong knowledge of DataFrames, RDDs, and Spark SQL. Hands-on experience in developing, testing, and maintaining applications on AWS Cloud. A strong hold on AWS data analytics...
- ...Skills required: Minimum 6 years of working experience in Python, PySpark, AWS, and SQL. Programming & frameworks: Python, PySpark... ...orchestration tools), CI/CD, Docker. Responsibilities: Design and develop scalable ETL pipelines using PySpark and SQL. Manage data...
- ...JD as below. Primary skillsets: AWS services including Glue, PySpark, SQL, Databricks, Python. Secondary skillset: any ETL tool, GitHub... ...Skill set: Python, PySpark, SQL, AWS, with designing, developing, testing, and supporting data pipelines and applications. 3+ years...
- ...solutions across 10 countries and 12 industry verticals in the last four years. About the role: We are hiring a Senior Developer with strong expertise in PySpark, Machine Learning, Generative AI, SQL, Data Warehousing, and Microsoft Fabric. If you're passionate about building...
- ...experience: 8 to 10 years. Experience in design, development, and deployment using Azure services (Databricks, PySpark, SQL, Data Factory). Develop and maintain scalable data pipelines and build new data-source integrations to support increasing data volume and...
- ...operations. Key responsibilities - Data platform engineering: Design, develop, and enhance data ingestion, transformation, and orchestration... ...Implement best practices in distributed data processing using PySpark, Python, and SQL. Collaboration & solutioning: Partner with...
- ...Roles & responsibilities: As a Senior Data Engineer, you manage and develop solutions in close alignment with various business and Spoke... ...and loading of data from a wide variety of data sources using PySpark, SQL, and AWS big-data technologies. Build analytics tools that...
- ...Time. Shift timings: UK shift (2:00 PM-11:00 PM IST / 3:00 PM-12:00 AM IST). Responsibilities: Hands-on experience with ETL processes using PySpark, SQL, Microsoft Fabric, and other relevant technologies. Collaborate with the client and other stakeholders to understand data...
- ...pipelines and contribute to innovative AI-driven data solutions on Azure. Key responsibilities: Design and develop robust ETL/ELT pipelines using Databricks, PySpark, and Azure Data Factory. Implement scalable data processing workflows integrating Delta Lake and Azure Data...
- ...Spark development for batch and streaming applications. Skilled in PySpark and data engineering. Expert in ETL implementation and migration... ...streaming (DStream and Structured Streaming). Familiar with developer tools like Jupyter notebooks. Knowledgeable about Airflow or similar...
- ...and deploying modern data solutions in an agile environment. Key responsibilities: Design, develop, and maintain scalable and robust data pipelines using Databricks and Spark/PySpark. Write complex and efficient SQL queries for data extraction and transformation. Work with...
- ...Databricks. Experience in building and optimizing data pipelines, architectures, and data sets. Excellent experience in Scala or Python, PySpark, and SQL. Ability to troubleshoot and optimize complex queries on the Spark platform. Knowledgeable about structured and unstructured...
- About the role: We are looking for Sr. Azure Data Engineers to design, develop, and optimize data engineering pipelines on Azure. You will work hands-on with Azure Data Factory, Databricks (PySpark), Azure Synapse, and ADLS Gen2, ensuring scalable, high-performance solutions...