- ...We're Hiring: PySpark Developer (Databricks). Experience: 4+ years in Data Engineering / Distributed Systems. Location: Offshore (Sompo IT Systems Development Unit). Joiners: Immediate. Budget: Competitive. Role Summary: Design, build, and optimize large-scale...
- ...8 to 10 years of experience. Experience performing design, development & deployment using Azure services (Databricks, PySpark, SQL, Data Factory). Develop and maintain scalable data pipelines and build new data source integrations to support increasing data volume...
- Location: Gurgaon (work from office). We are looking for a Python PySpark Developer with 3-4 years of experience, primarily focused on Python programming and automation using GitHub Actions. The ideal candidate will be responsible for developing scalable Python-based data workflows...
- ...data extract, transformation, and load steps conform to specified exit criteria and standards. Design and execute test scenarios; develop and document data test plans based on requirements and technical specifications. Identify, analyze, and document all defects. Perform...
- ...: Experience using Git for collaborative development. Big Data Tools: Exposure to Hive, PySpark, or similar technologies. Roles & Responsibilities: Develop and optimize Python scripts for data processing and automation. Write efficient Spark SQL...
- ...knowledge of Azure SQL, Data Lake, Data Factory, PySpark, and related services. Experience in... ...Experience with MS-SQL, Cosmos DB, Databricks, and event-driven architectures. Knowledge... ...Lake, Data Factory, PySpark, etc.). Develop solutions for secure handling of PII and...
- ...Engineer with a strong background in Python, PySpark, and SQL, to join our growing data... ...environments. Key Responsibilities: Design, develop, and maintain robust data pipelines using... ...cloud data platforms like AWS Redshift, Databricks. Excellent problem-solving and communication...
- ...engineering, data warehousing, and big data processing. The ideal candidate will have strong expertise in Python, PySpark, and AWS data services to design, develop, and maintain robust data pipelines. Key Responsibilities: Design and implement end-to-end data engineering...
- ...Engineering Lead with strong expertise in Azure Data Factory, Databricks, PySpark, and SQL. This role requires both technical depth and... ...solutions. Key Responsibilities: Data Pipeline Development: Design, develop, and maintain large-scale ETL/ELT pipelines using Azure Data...