Average salary: Rs 1,350,000 per year

  •  ...by designing and implementing robust ETL pipelines. Creating PySpark scripts, both generic templates and scripts tailored to specific...  ...data processes. Collaborating with cross-functional teams to develop scalable and maintainable data integration architectures. Strong... 
    Secunderabad
    a month ago
  •  ...Key Skills & Responsibilities Strong expertise in PySpark and Apache Spark for batch and real-time data processing. Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation. Proficiency in Python for scripting... 
    Secunderabad
    a month ago
  •  ...We are seeking a PySpark Developer with IT experience. The ideal candidate will possess strong PySpark knowledge and hands-on experience in SQL, HDFS, Hive, Spark, PySpark, and Python. You will be instrumental in developing and optimizing data pipelines, working... 
    Secunderabad
    27 days ago
  •  ...We are seeking a PySpark/Python Developer with strong design and development skills for building data pipelines. The ideal candidate will have experience working with AWS/the AWS CLI, with AWS Glue being highly desirable. You should possess hands-on SQL experience and be... 
    Secunderabad
    27 days ago
  •  ...delivery centers in Asia, Europe, and North America and is backed by Baring Private Equity Asia. Job description: Python PySpark Developer (5+ years): Design and develop Python, SQL, and DBT applications. Hands-on experience developing jobs in PySpark with Python/Scala (... 
    Contract work
    Hybrid work
    Immediate start
    Worldwide
    Secunderabad
    3 days ago
  •  ...The developer must have sound knowledge of Apache Spark and Python programming. Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations. Create... 
    Secunderabad
    27 days ago
  •  ...Key Responsibilities: Design, develop, and optimize big data pipelines and ETL workflows using PySpark and Hadoop (HDFS, MapReduce, Hive, HBase). Develop and maintain data ingestion, transformation, and integration processes on Google Cloud Platform services such... 
    Secunderabad
    27 days ago
  •  ...applications, ensuring design constraints are met. Gather, analyze, and develop visualizations and reporting from large, diverse data sets to...  ..., and operational stability. Strong programming skills in PySpark and SparkSQL. Proficient in orchestration using Airflow.... 
    Secunderabad
    2 days ago
  •  ...Job Responsibilities: Design, develop, and implement robust and scalable data pipelines using Azure Data Factory (ADF). Efficiently...  ...data solutions and scripts primarily using Python and PySpark. Collaborate with data scientists, analysts, and other engineering... 
    Secunderabad
    27 days ago
  •  ...supports analytics, reporting, and AI/ML solutions. Key Responsibilities Design, develop, and maintain data ingestion, transformation, and orchestration pipelines using technologies like PySpark, SQL, Databricks, and Airflow. Integrate data from various sources including... 
    Remote job
    Secunderabad
    a month ago
  •  ...and SE applications. Skills & Expertise: Python, Databricks, PySpark, cloud-based services (Azure), ADT data, FHIR, EHR data, BI tools...  ...Interfacing with business customers, gathering requirements, and developing new datasets in the data platform. Identifying data quality issues... 
    Full time
    Contract work
    Immediate start
    Remote job

    HyrEzy Talent Solutions

    Hyderabad
    9 days ago
  • Key Responsibilities: - Design, develop, and maintain scalable data pipelines and architectures using AWS services. - Implement ETL/ELT...  ...Required Skills: - 3+ years of experience in Python, SQL, and PySpark. - 2+ years of experience with AWS services such as: - AWS Glue - AWS... 

    Deqode

    Hyderabad
    7 days ago
  • Description: Role: Data Engineer. Location: Hyderabad. Key Responsibilities: - Design, develop, and maintain scalable data pipelines using PySpark and Azure Data Factory. - Work closely with business stakeholders, analysts, and data scientists to understand data requirements... 
    Full time
    Contract work

    INTEGRATED PERSONNEL SERVICES LIMITED

    Hyderabad
    10 days ago
  •  ..., and transformation across both batch and streaming datasets. - Develop and optimize end-to-end data pipelines to process structured and...  ...Strong working knowledge of the Databricks ecosystem, including: PySpark, Notebooks, Structured Streaming, Unity Catalog, Delta Live... 
    Immediate start

    INUMELLAS CONSULTANCY SERVICES PRIVATE LIMITED

    Hyderabad
    10 days ago