Average salary: Rs 1,350,000 per year
- ...by designing and implementing robust ETL pipelines. Creating PySpark scripts, both generic templates and scripts tailored to specific... ...data processes. Collaborating with cross-functional teams to develop scalable and maintainable data integration architectures. Strong...
- ...Key Skills & Responsibilities: Strong expertise in PySpark and Apache Spark for batch and real-time data processing. Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation. Proficiency in Python for scripting...
- ...We are seeking a PySpark Developer with IT experience. The ideal candidate will possess strong PySpark knowledge and hands-on experience in SQL, HDFS, Hive, Spark, PySpark, and Python. You will be instrumental in developing and optimizing data pipelines, working...
- ...We are seeking a PySpark/Python Developer with strong design and development skills for building data pipelines. The ideal candidate will have experience working on AWS/AWS CLI, with AWS Glue being highly desirable. You should possess hands-on SQL experience and be...
- ...delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia. Job description: Python/PySpark Developer (5+ years). Design and develop Python, SQL, and DBT applications. Hands-on experience developing jobs in PySpark with Python/Scala (... [Contract work, Hybrid work, Immediate start, Worldwide]
- ...The developer must have sound knowledge of Apache Spark and Python programming. Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations (see the sketch after this list). Create...
- ...Key Responsibilities: Design, develop, and optimize big data pipelines and ETL workflows using PySpark and Hadoop (HDFS, MapReduce, Hive, HBase). Develop and maintain data ingestion, transformation, and integration processes on Google Cloud Platform services such...
- ...applications, ensuring design constraints are met. Gather, analyze, and develop visualizations and reporting from large, diverse data sets to... ...and operational stability. Strong programming skills in PySpark and SparkSQL. Proficient in orchestration using Airflow...
- ...Job Responsibilities: Design, develop, and implement robust and scalable data pipelines using Azure Data Factory (ADF). Efficiently... ...data solutions and scripts primarily using Python and PySpark. Collaborate with data scientists, analysts, and other engineering...
- ...supports analytics, reporting, and AI/ML solutions. Key Responsibilities: Design, develop, and maintain data ingestion, transformation, and orchestration pipelines using technologies like PySpark, SQL, Databricks, and Airflow. Integrate data from various sources including... [Remote job]
- ...and SE applications. Skills & Expertise: Python, Databricks, PySpark, cloud-based services (Azure), ADT data, FHIR, EHR data, BI tools... ...Interfacing with business customers, gathering requirements, and developing new datasets in the data platform. Identifying data quality issues... [Full time, Contract work, Immediate start, Remote job]
- Key Responsibilities: Design, develop, and maintain scalable data pipelines and architectures using AWS services. Implement ETL/ELT... ...Required Skills: 3+ years of experience in Python, SQL, and PySpark; 2+ years of experience with AWS services such as AWS Glue, AWS...
- Description: Role: Data Engineer. Location: Hyderabad. Key Responsibilities: Design, develop, and maintain scalable data pipelines using PySpark and Azure Data Factory. Work closely with business stakeholders, analysts, and data scientists to understand data requirements... [Full time, Contract work]
- ...and transformation across both batch and streaming datasets. Develop and optimize end-to-end data pipelines to process structured and... ...Strong working knowledge of the Databricks ecosystem, including PySpark, Notebooks, Structured Streaming, Unity Catalog, Delta Live... [Immediate start]
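
Several listings above describe the same core PySpark task: read from external sources, merge, enrich, and load into a target destination. The following is a minimal sketch of that pattern, not code from any listing; the file paths, column names, and join key (customer_id, quantity, unit_price) are hypothetical placeholders.

```python
# Minimal PySpark ETL sketch: read -> merge -> enrich -> load.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Ingest: read raw records from an external source (hypothetical CSV path).
orders = spark.read.csv("/data/raw/orders.csv", header=True, inferSchema=True)

# Merge: join with a reference dataset on a shared key (hypothetical "customer_id").
customers = spark.read.parquet("/data/reference/customers")
merged = orders.join(customers, on="customer_id", how="left")

# Enrich: derive new columns and apply a basic validation filter.
enriched = (
    merged
    .withColumn("order_total", F.col("quantity") * F.col("unit_price"))
    .withColumn("load_date", F.current_date())
    .filter(F.col("order_total") > 0)  # drop records failing a sanity check
)

# Load: write to the target destination, partitioned for downstream reads.
enriched.write.mode("overwrite").partitionBy("load_date").parquet("/data/curated/orders")

spark.stop()
```

The same read/join/withColumn/write flow carries over whether the target is HDFS, Delta Lake, or cloud object storage; the roles above differ mainly in which source and sink connectors (AWS Glue, ADF, Databricks) wrap around it.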
