Average salary: Rs 2,183,333 per year
- ...data extract, transformation, and load steps conform to specified exit criteria and standards. Design and execute test scenarios, develop and document data test plans based on requirements and technical specifications. Identify, analyze, and document all defects. Perform...
- ...delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to the organizational guidelines and...
- ...culture where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued. Primary Skills: PySpark, Spark, and proficiency in SQL. Secondary Skills: Scala and Python. Experience: 3 Yrs. Key Responsibilities: Bachelor...
- ...Job Summary We are looking for a Senior PySpark Developer with 3 to 6 years of experience in building and optimizing data pipelines using PySpark on Databricks, within AWS cloud environments. This role focuses on the modernization of legacy domains, involving integration...
- ...Key Skills & Responsibilities Strong expertise in PySpark and Apache Spark for batch and real-time data processing. Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation. Proficiency in Python for scripting...
- ...We are seeking a PySpark Developer with IT experience. The ideal candidate will possess strong PySpark knowledge and hands-on experience in SQL, HDFS, Hive, Spark, PySpark, and Python. You will be instrumental in developing and optimizing data pipelines, working...
- ...We're Hiring: PySpark Developer (Databricks). Experience: 4+ Years in Data Engineering / Distributed Systems. Location: Offshore (Sompo IT Systems Development Unit). Joiners: Immediate. Budget: Competitive. Role Summary: Design, build, and optimize large-scale... (Immediate start)
- ...ROLE RESPONSIBILITIES Data Engineering and Processing: • Develop and manage data pipelines using PySpark on Databricks. • Implement ETL/ELT processes to process structured and unstructured data at scale. • Optimize data pipelines for performance, scalability, and...
- Responsibilities: 1. Deploying a Hadoop cluster, maintaining a Hadoop cluster, adding and removing nodes using cluster monitoring tools like Cloudera Manager, configuring NameNode high availability, and keeping track of all running Hadoop jobs. 2. Implementing...
- ...by designing and implementing robust ETL pipelines. Creating PySpark scripts, both generic templates and scripts tailored to specific... ...data processes. Collaborating with cross-functional teams to develop scalable and maintainable data integration architectures. Strong...
- Job Title: PySpark Developer | Location: Bangalore (Work from Office) | Experience: 5+ Years | Notice Period: Immediate Joiners Preferred | About... ...-performance data solutions. Key Responsibilities: - Design, develop, and optimize PySpark-based data pipelines. - Work on large-scale... (Work at office, Immediate start)
- ...Roles and Responsibilities Design, develop, test, deploy, and maintain large-scale data processing pipelines using PySpark. Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs. Develop high-quality code...
- ...Title: PySpark Developer. Job Description. Must-Have: Minimum 6-8 years of experience in build & deployment of Big Data applications using SparkSQL, SparkStreaming in Python; minimum 5 years of extensive experience in design, build, and deployment of Python-based applications...
- ...We are seeking a PySpark/Python Developer with strong design and development skills for building data pipelines. The ideal candidate will have experience working on AWS/AWS CLI, with AWS Glue being highly desirable. You should possess hands-on SQL experience and be...
- ...: Experience using Git for collaborative development. Big Data Tools: Exposure to Hive, PySpark, or similar technologies. Roles & Responsibilities: Develop and optimize Python scripts for data processing and automation. Write efficient Spark SQL...
- ...Job title: Python, PySpark Developer with Databricks. Skills: Python, PySpark, Databricks, SQL, Azure. Experience Required: 8-10. Location: Indore, Pune. Notice Period: Immediate to 15 days, serving notice. Overview: We are seeking a highly skilled Python... (Immediate start)
- ...Teamware Solutions is seeking a skilled Databricks / PySpark Developer to build and optimize our big data processing and analytics solutions. This role is crucial for working with relevant technologies, ensuring smooth data operations, and contributing significantly to business...
- Key responsibilities: Design, code, test, and debug software applications and systems. Collaborate with cross-functional teams to identify and resolve software issues. Write clean, efficient, and well-documented code. Stay current with emerging technologies and industry... (Full time)
- ...The developer must have sound knowledge of Apache Spark and Python programming, and deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations. Create...
- Role: Spark Scala Developer. Experience: 3-8 years. Location: Bengaluru. Employment Type: Full-time. What We're Looking For: We're hiring a Spark Scala Developer who has real-world experience working in Big Data environments, on-prem and/or in the cloud. You...
- ...We are looking for a Lead ETL Developer to join our C3 Data team based in Bangalore. This role offers a unique opportunity to work on Clarivate... ...-edge big data technologies. Our team has expertise in Python, PySpark, Spark, Databricks, ECS, AWS, and Airflow -- and we would love...
- ...Key Responsibilities: Design, develop, and maintain ETL pipelines using Python , PySpark , and SQL on distributed data platforms. Write clean, efficient, and scalable PySpark code for big data transformation and processing. Develop reusable scripts and tools...
- ...Join us as an Application Engineer - PySpark Developer at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer... (Immediate start)
- ...organisation and keep our data safe and secure. Day-to-day, you'll develop innovative, data-driven solutions through data pipelines,... ...software engineering fundamentals. Experience in Oracle PL-SQL, PySpark, AWS S3, Glue, and Airflow. Good knowledge of modern code development...
- ...Join us as a Data Engineer - PySpark at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and... ...ensuring unparalleled customer experiences. As part of a team of developers, you will deliver the technology stack, using strong analytical and... (Immediate start)
- ...Job description: Role: Data Engineer. Roles and Responsibilities: Design, develop, and maintain scalable data pipelines using Spark (PySpark or Spark with Scala). Build data ingestion and transformation frameworks for structured and unstructured... (Flexible hours)
- ...expertise supports our customers. You'll Also Need: Three to four years of overall experience, with at least two years of experience in PySpark, AWS, and SQL. Experience in multiple programming languages or low-code toolsets. Experience of DevOps, Testing, and Agile... (Permanent employment, Full time)
- ...Must-Have Skills: Good experience in PySpark, including DataFrame core functions and Spark SQL. Good experience in SQL DBs; able to write queries of fair complexity. Should have excellent experience in Big Data programming for data transformation and aggregations...
- ...and motivated Data Engineer with a strong background in Python, PySpark, and SQL to join our growing data engineering team. The ideal candidate... ...in agile environments. Key Responsibilities: Design, develop, and maintain robust data pipelines using Python, PySpark, and...
- ...and experienced Big Data Engineer with 5+ years of relevant experience in developing data and analytic solutions. The ideal candidate will have strong expertise in Python, SQL, Spark/PySpark, and AWS Cloud. You will play a crucial role in designing, implementing, and... (Flexible hours)
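Several of the listings above describe the same core PySpark task: reading data from external sources, merging and enriching it, and loading it into a target destination. As a minimal sketch of that read-merge-enrich-load pattern, using plain Python structures so it runs without a Spark cluster (all record names and fields are hypothetical, not taken from any listing):

```python
# Stand-in for the read -> merge -> enrich -> load pipeline pattern the
# listings describe. Real PySpark code would use DataFrame joins and
# writers; plain dictionaries are used here so the sketch is runnable
# without a Spark installation.

orders = [                               # "read from external sources"
    {"order_id": 1, "customer_id": 10, "amount": 250.0},
    {"order_id": 2, "customer_id": 11, "amount": 75.5},
]
customers = {10: "Asha", 11: "Ravi"}     # reference data to merge in

def enrich(order: dict) -> dict:
    """Merge in the customer name and derive a new column."""
    return {
        **order,
        "customer_name": customers.get(order["customer_id"], "unknown"),
        "is_large_order": order["amount"] >= 100.0,
    }

# "load into target data destinations" (here, just an in-memory list)
target_table = [enrich(o) for o in orders]
print(target_table[0]["customer_name"], target_table[0]["is_large_order"])
# → Asha True
```

In PySpark the merge step would typically be a `DataFrame.join` on `customer_id` and the derived column a `withColumn` call, with the final write going to a table or object store rather than a list.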
