  •  ...We're Hiring: PySpark Developer (Databricks). Experience: 4+ years in Data Engineering / Distributed Systems. Location: Offshore (Sompo IT Systems Development Unit). Joiners: Immediate. Budget: Competitive. Role Summary: Design, build, and optimize large-scale...
    Immediate start
    Delhi
    12 days ago
  •  ...Years of experience: 8 to 10 years. Experience in Design, Development & Deployment using Azure Services (Databricks, PySpark, SQL, Data Factory). Develop and maintain scalable data pipelines and build new Data Source integrations to support increasing data volume...
    Noida
    12 days ago
  • Location: Gurgaon (Work from office). We are looking for a Python PySpark Developer with 3-4 years of experience, primarily focused on Python programming and automation using GitHub Actions. The ideal candidate will be responsible for developing scalable Python-based data workflows...
    Work at office

    DG Liger Consulting

    Delhi
    1 day ago
  •  ...data extract, transformation, and load steps conform to specified exit criteria and standards. Design and execute test scenarios; develop and document data test plans based on requirements and technical specifications. Identify, analyze, and document all defects. Perform...
    Noida
    12 days ago
  •  ...: Experience using Git for collaborative development. Big Data Tools: Exposure to Hive, PySpark, or similar technologies. Roles & Responsibilities: Develop and optimize Python scripts for data processing and automation. Write efficient Spark SQL...
    Noida
    3 days ago
  •  ...knowledge of Azure SQL, Data Lake, Data Factory, PySpark, and related services. Experience in...  ...Experience with MS-SQL, Cosmos DB, Databricks, and event-driven architectures. Knowledge...  ...Lake, Data Factory, PySpark, etc.). Develop solutions for secure handling of PII and... 
    Gurgaon
    2 days ago
  •  ...Engineer with a strong background in Python, PySpark, and SQL, to join our growing data...  ...environments. Key Responsibilities: - Design, develop, and maintain robust data pipelines using...  ...cloud data platforms like AWS Redshift, Databricks. - Excellent problem-solving and communication...

    ACENET CONSULTING PRIVATE LIMITED

    Gurgaon
    a month ago
  •  ...engineering, data warehousing, and big data processing. The ideal candidate will have strong expertise in Python, PySpark, and AWS data services to design, develop, and maintain robust data pipelines. Key Responsibilities: Design and implement end-to-end data engineering...
    Gurgaon
    16 days ago
  •  ...Engineering Lead with strong expertise in Azure Data Factory, Databricks, PySpark, and SQL. This role requires both technical depth and...  ...solutions. Key Responsibilities: - Data Pipeline Development: Design, develop, and maintain large-scale ETL/ELT pipelines using Azure Data...
    Work at office
    Immediate start

    Techno Wise

    Gurgaon
    8 days ago