- ...Teamware Solutions is seeking a skilled Databricks / PySpark Developer to build and optimize our big data processing and analytics solutions. This role is crucial for working with relevant technologies, ensuring smooth data operations, and contributing significantly to business...
- ...Job title: Python, PySpark Developer with Databricks | Skills: Python, PySpark, Databricks, SQL, Azure | Experience Required: 8-10 years | Location: Indore, Pune | Notice Period: Immediate to 15 days, serving notice. Overview: We are seeking a highly skilled Python... (Immediate start)
- ...constraints are met. Gather, analyze, and develop visualizations and reporting from large,... ...of practical experience with Spark, SQL, Databricks, and the AWS cloud ecosystem. Expertise in... ...Strong programming skills in PySpark and Spark SQL. Proficient in orchestration...
- ...seeking a skilled Data Engineer with hands-on experience in Databricks and PySpark to design and implement scalable data pipelines and... ...drive business decisions. Key Responsibilities: Design, develop, and maintain ETL/ELT pipelines using Databricks, PySpark...
- ...Profile Summary: We’re looking for dynamic data engineers with Databricks or Apache Spark and AWS experience to join the data analytics... ...Gainwell and its clients. Your role in our mission: Design, develop, and deploy data pipelines, including ETL processes for getting, processing... (Long term contract, Temporary work, Work at office, Remote job, Afternoon shift)
- ...Key Responsibilities: Design, develop, and manage robust data pipelines using PySpark and SQL. Work with AWS services to implement data solutions. Utilize Databricks for data processing and analytics. Collaborate with data scientists, analysts, and other stakeholders...
- Role: Spark Scala Developer | Experience: 3-8 years | Location: Bengaluru | Employment Type... ...What We're Looking For: We're hiring a Spark Scala Developer who has real-world... ...and modern data tools like Snowflake or Databricks is a strong plus. Your Responsibilities... (Full time)
- We're Hiring | Data Engineer (4+ Years) | Pune (Hybrid). Enterprise Minds India Pvt. Ltd. is looking... ...environment. Key Responsibilities: - Design, develop, and maintain scalable and robust data pipelines using Databricks and Spark/PySpark. - Write complex and efficient SQL queries for... (Hybrid work, Flexible hours)
- ...Global is seeking a highly skilled AWS Data Engineer with 6+ years of experience in AWS, Databricks, PySpark, and S3. As an AWS Data Engineer, you will be responsible for designing, developing, and maintaining data pipelines, optimizing data workflows, and ensuring data quality...
- ...Job Responsibilities: Design, develop, and implement robust and scalable data pipelines... ...Lake Storage (ADLS). Utilize Azure Databricks for advanced data processing, transformation... ...scripts primarily using Python and PySpark. Collaborate with data scientists, analysts...
- ...Years of experience: 8 to 10 years. Experience in performing design, development & deployment using Azure services (Databricks, PySpark, SQL, Data Factory). Develop and maintain scalable data pipelines and build new data source integrations to support increasing data volume...
- ...We are seeking a PySpark Developer with IT experience. The ideal candidate will possess strong PySpark knowledge and hands-on experience... ...data processing solutions, particularly within the Azure Databricks environment. Key Responsibilities: PySpark Development:...
- ...Job Summary: We are looking for a Senior PySpark Developer with 3 to 6 years of experience in building and optimizing data pipelines using PySpark on Databricks, within AWS cloud environments. This role focuses on the modernization of legacy domains, involving integration...
- ...implementing robust ETL pipelines. Creating PySpark scripts, both generic templates and... ...Collaborating with cross-functional teams to develop scalable and maintainable data... ...distributed data processing on platforms like Databricks or Hadoop. Hands-on experience in delivering...
- ...Key Responsibilities: Design, develop, and maintain ETL pipelines using Python, PySpark, and SQL on distributed data platforms. Write clean, efficient... ...such as AWS, Azure, or GCP (e.g., EMR, Databricks, BigQuery, Synapse, etc.). Experience...
- ...Engineer with strong hands-on expertise in Databricks on any cloud platform (AWS, Azure, or GCP... ...with proven skills in SQL, Python, and PySpark. The ideal candidate will have a solid background... ...Key Responsibilities: - Design, develop, and maintain scalable data pipelines and... (Full time)
- ...We are looking for a Lead ETL Developer to join our C3 Data team based in Bangalore. This role offers a unique opportunity to work... ...big data technologies. Our team has expertise in Python, PySpark, Spark, Databricks, ECS, AWS, and Airflow -- and we would love to speak with you...
- Software Engineer III - (AWS, Databricks, PySpark) | Start Date: Starts immediately... ...About the job. Key responsibilities: 1. Design and develop ETL processes using Ab Initio. 2. Implement efficient ETL workflows... (Immediate start, Worldwide)
- ...s business objectives. Job Responsibilities: Design and develop ETL processes using Ab Initio. Implement efficient ETL workflows... ...and data warehousing principles. Proficiency in AWS, Databricks, PySpark. Strong SQL skills for querying and manipulating data...
- ...in implementing/designing solutions using Azure Big Data technologies - 5+ years of hands-on experience in Azure Data Factory, Azure Databricks, Azure DevOps, Azure Data Lake Storage (Gen1/Gen2), Azure Logic Apps, Azure Functions with Python or Java - Proficient in dealing... (Full time)
- ...are seeking a strong Data Engineer with advanced expertise in Databricks and PySpark. The successful candidate will be a key contributor to... ...from various sources, ensuring scalability and efficiency. - Develop robust data ingestion pipelines to load data into the Databricks...
- ...Load), data warehousing, and data analytics. Work with AWS and Databricks to design, develop, and maintain data pipelines and data platforms. Build... ...Responsibilities: - Work extensively on Databricks and its modules using PySpark for data processing. - Design, develop, and optimize... (Full time, Shift work)
- ...Must-Have Skills: Good experience in PySpark, including DataFrame core functions and Spark SQL. Good experience in SQL DBs - be able... ...DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub. Experience in migrating on-premises...
- ...gaps and disruptions of the future. We are looking to hire PySpark professionals in the following areas: Job Description | Role: Data Engineer. Roles and Responsibilities: Design, develop, and maintain scalable data pipelines using Spark (PySpark or Spark... (Flexible hours)
- ...delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to the organizational guidelines and processes...
- ...Engineer with a strong background in Python, PySpark, and SQL to join our growing data... ...environments. Key Responsibilities: Design, develop, and maintain robust data pipelines using... ...of Azure services, particularly Azure Databricks, along with Data Factory, Blob Storage,...
- Responsibilities: 1. Deploying a Hadoop cluster, maintaining a Hadoop cluster, adding and removing nodes using cluster monitoring tools like Cloudera Manager, configuring NameNode high availability, and keeping track of all the running Hadoop jobs. 2. Implementing...
- ...Primary skills: Big Data - PySpark, Big Data - Python, Big Data - Spark, Technology - Functional Programming - Scala. Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in...
- ...Key Skills & Responsibilities: Strong expertise in PySpark and Apache Spark for batch and real-time data processing. Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation. Proficiency in Python for scripting...
- Job Title: PySpark Developer | Location: Bangalore (Work from Office) | Experience: 5+ Years | Notice... ...Key Responsibilities: - Design, develop, and optimize PySpark-based data pipelines... ...Experience with cloud data platforms - AWS Glue, Databricks, Azure Data Lake, or GCP BigQuery. -... (Work at office, Immediate start)
