Average salary: Rs 1,350,000 per year
- PySpark Developer. Experience: 5 to 10 years. Location: Hyderabad, Chennai, Bengaluru. Role & responsibilities: strong knowledge of PySpark and SQL; experience in PySpark, DataStage, and Teradata; experience in Python; Delta Lake.
- ...with leading banks' Risk Management teams on projects related to consumer and wholesale banking risk models. Develop, fine-tune, and implement programs using PySpark, Python, and Scala on Big Data/Hadoop platforms. Enhance and deploy machine learning models based on business... (Work at office)
- ...We are seeking a PySpark Developer with IT experience. The ideal candidate will possess strong PySpark knowledge and hands-on experience in SQL, HDFS, Hive, Spark, PySpark, and Python. You will be instrumental in developing and optimizing data pipelines, working...
- ...Key Skills & Responsibilities: strong expertise in PySpark and Apache Spark for batch and real-time data processing; experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation; proficiency in Python for scripting...
- ...We are seeking a PySpark/Python Developer with strong design and development skills for building data pipelines. The ideal candidate will have experience working on AWS/AWS CLI, with AWS Glue being highly desirable. You should possess hands-on SQL experience and be...
- ...delivery centers in Asia, Europe, and North America and is backed by Baring Private Equity Asia. Job description: Python PySpark Developer (5+ years). Design and develop Python, SQL, and DBT applications. Hands-on experience developing jobs in PySpark with Python/Scala (... (Contract work, Hybrid work, Immediate start, Worldwide)
- ...The developer must have sound knowledge of Apache Spark and Python programming. Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations (see the sketch after these listings). Create...
- ...Key Responsibilities: design, develop, and optimize big data pipelines and ETL workflows using PySpark and Hadoop (HDFS, MapReduce, Hive, HBase). Develop and maintain data ingestion, transformation, and integration processes on Google Cloud Platform services such...
- ...Job Responsibilities: design, develop, and implement robust and scalable data pipelines using Azure Data Factory (ADF). Efficiently... ...data solutions and scripts primarily using Python and PySpark. Collaborate with data scientists, analysts, and other engineering...
- Job Title: PySpark Developer. Location: Chennai, Hyderabad, Kolkata. Work Mode: Monday to Friday (5 days WFO). Experience: 5+ years in Backend... ...the Role: We are looking for an experienced PySpark Developer with strong data engineering capabilities to design, develop, and... (Immediate start, Working Monday to Friday)
- Description: Job Title: PySpark/Scala Developer. Functional Skills: experience in the Credit Risk/Regulatory Risk domain. Technical Skills:... ...Learning Techniques. Job Description: 5+ years of experience with developing, fine-tuning, and implementing programs/applications; using Python... (Work at office)
- ...and SE applications. Skills & Expertise: Python, Databricks, PySpark, cloud-based services (Azure), ADT data, FHIR, EHR data, BI tools... ...Interfacing with business customers, gathering requirements, and developing new datasets in the data platform; identifying data quality issues... (Full time, Contract work, Immediate start, Remote job)
- ...experience with Azure Databricks, Azure Data Lake, Azure Data Factory, PySpark/Spark, and SQL, along with the ability to guide a small team on technical design and delivery. Key Responsibilities: design, develop, and optimize large-scale data pipelines using Azure Databricks,...
- Key Responsibilities: design, develop, and maintain scalable data pipelines and architectures using AWS services; implement ETL/ELT... ...Required Skills: 3+ years of experience in Python, SQL, and PySpark; 2+ years of experience with AWS services such as AWS Glue, AWS...
- Description: Role: Data Engineer. Location: Hyderabad. Key Responsibilities: design, develop, and maintain scalable data pipelines using PySpark and Azure Data Factory; work closely with business stakeholders, analysts, and data scientists to understand data requirements... (Full time, Contract work)
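Several of the listings above describe the same core PySpark workflow: reading data from external sources, merging and enriching it, and loading it into a target destination. The sketch below shows a minimal version of that pipeline; the app name, paths, and column names are hypothetical placeholders, not taken from any specific posting.

```python
# Minimal PySpark ETL sketch: ingest, enrich via a join, and load to a target.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Ingest: read raw transactions and a customer reference table from external sources.
transactions = spark.read.parquet("s3://example-bucket/raw/transactions/")
customers = spark.read.parquet("s3://example-bucket/reference/customers/")

# Transform: drop invalid rows, then enrich transactions with customer attributes.
valid_txns = transactions.filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
enriched = (
    valid_txns.join(customers, on="customer_id", how="left")
    .withColumn("load_date", F.current_date())
)

# Load: write the enriched data to the target destination, partitioned by load date.
(
    enriched.write.mode("overwrite")
    .partitionBy("load_date")
    .parquet("s3://example-bucket/curated/transactions_enriched/")
)

spark.stop()
```

The same read, join, and write structure carries over whether the sources and sinks are Hive tables, Teradata, Delta Lake, or cloud storage; only the connectors and formats change.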
