Average salary: Rs 1,300,000 per year
- ...Data Engineer: Design, build, and optimize robust and scalable data pipelines for both batch and real-time data ingestion using Apache Spark (Streaming + Batch), Apache NiFi, and Apache Kafka. Data Storage and Management: Manage and maintain data storage solutions on Hadoop... A streaming ingestion sketch follows this list.
- Description: Key Responsibilities: - Design, develop, and deploy end-to-end data engineering solutions using Databricks, Apache Spark, PySpark, Python, and SQL. - Build scalable and efficient ETL/ELT pipelines for data ingestion, transformation, and integration from various sources...
- ...real-time/batch data processing applications using Scala, Java, and Apache Flink. Build and optimize distributed data pipelines on AWS... ...optimization. Preferred Skills: Exposure to Splunk, Apache Spark, Kafka, Docker, Kubernetes (k8s). Familiarity with GitLab CI/...
- Java, Spring, Hibernate, Hadoop, Spark Streaming, Unix Shell Script, MSSQL, Azure, Synapse, Cosmos DB. Description: GSPANN is hiring... ...knowledge of Spring and Hibernate frameworks. ~ Set up and debug Apache Spark jobs for over 4 years, with a solid understanding of data... (Full time, Shift work)
- ...Strong understanding of Data Warehousing, Data Lake, ETL processes, and Big Data technologies (e.g., Hadoop, Snowflake, Databricks, Apache Spark, PySpark, Airflow, Apache Kafka, Java, Open File & Table Formats, Git, CI/CD pipelines, etc.). Experience in developing, debugging...
- ...with cutting-edge technologies in the Hadoop ecosystem, including Spark, Scala, Hive, and Go, and be responsible for designing... ...4+ years of hands-on experience working with the Hadoop ecosystem, Apache Spark, Scala, Hive, Core Java, and the Go programming language ~ Strong... (Immediate start)
- ...Experience with Apache Spark is a must. Strong Java development background. Hands-on experience with Apache Spark, especially with core APIs. Proficient in Spring Boot and microservices architecture. Experience with RESTful API development and integration. Good...
- ...mechanisms to ensure data integrity and system resilience within Spark jobs. Optimize PySpark jobs for performance, including partitioning... ...practices in a big data environment. ~ Proficiency in PySpark, Apache Spark, and related big data technologies for data processing,... A partition-tuning sketch follows this list. (Permanent employment, Full time)
- About The Role: Grade Level (for internal use): 11. The Team: The usage reporting team gathers raw usage data from various products and produces unified datasets across departmental lines within Market Intelligence. We deliver essential intelligence for both public and... (Side job, Worldwide, Flexible hours)
- ...Design and develop highly scalable backend systems using Scala (Spark / Play) and Python. - Build and maintain data processing pipelines... ...Python for backend and data workflows. - Hands-on familiarity with Apache Spark for distributed data processing (deep expertise not mandatory... (Full time)
- Key Responsibilities: - Design, develop, and maintain data processing applications using Scala and Apache Spark. - Collaborate with cross-functional teams to gather requirements and deliver scalable, efficient solutions. - Implement test-driven development practices to improve...
- ...from HDFS/Hive to cloud object storage (e.g., S3, Ceph). - Optimize Spark (and optionally Flink) jobs for performance and scalability in a... ...- Ensure data consistency, schema evolution, and governance with Apache Iceberg or equivalent table formats. - Support migration strategy...
- ...and Develop Data Pipelines: Architect, build, and deploy scalable and efficient data pipelines within our Big Data ecosystem using Apache Spark and Apache Airflow. - Document new and existing pipelines and datasets to ensure clarity and maintainability. - Data Architecture and... An Airflow orchestration sketch follows this list. (Work at office)
- ...pipelines and systems for data processing. - Utilize Data Lakehouse, Spark on Kubernetes, and related technologies to manage large-scale data... ...and availability. - Orchestrate and monitor workflows using Apache Airflow. - Ensure code quality and version control using Git. - Troubleshoot... (Full time)
- ...Looking for Apache Camel SME & Developers. RESPONSIBILITIES: ~ Collaborate with IT and Business teams to ensure the applications are integrated successfully and enterprise services are established, respecting the principles of Service-Oriented Architecture. ~ Lead project...
- ...various audiences. About the job: The Global Business Applied AI - Strategic Programs for Automation, Resolution and Knowledge (GBAI-SPARK) team is a crucial part of our GBO's commitment to innovation and efficiency. GBAI is responsible for architecting and delivering AI-
- ...systems. You will: Build and maintain large-scale, distributed backend systems. Design and optimize Big Data ecosystems including Spark, Hadoop/MR, and Kafka. Leverage cloud-based platforms (GCP, AWS, Azure) for development and deployment. Implement observability... (Work from home, Flexible hours)
- ...Responsibilities: Executes standard software solutions, design, development, and technical troubleshooting. Builds pipelines in Spark, tuning Spark queries. Writes secure and high-quality code using the syntax of at least one programming language with limited guidance...
- ...Capabilities, and Skills: Formal training or certification on software development concepts and 2+ years of applied experience in Java, Spark, AWS, and SQL. Proficiency in one or more large-scale data processing distributions such as Java Spark, along with knowledge of Data Pipeline... (Hybrid work)
- ...Data Processing and Analytics Services: Develop scalable data processing and analytics services utilizing our big data stack, including Spark, Trino, Airflow, and Kafka, to support real-time and batch data workflows. Architect Distributed Systems: Design, develop, and... (Full time)
- ...and we remain a trusted partner. The Global Business Applied AI - Strategic Programs for Automation, Resolution and Knowledge (GBAI-SPARK) team is a crucial part of our Global Business Organization's (GBO) commitment to innovation and efficiency. GBAI is responsible for... (Full time, Worldwide)
- ...Preferred Qualifications, Capabilities, and Skills: Familiarity with modern front-end technologies. Exposure to Databricks and Python. Experience with Spark, Kafka, and cloud platforms (preferably AWS). Knowledge of Databricks for data engineering solutions. ABOUT US...
- ...Roles and Responsibilities: Design, develop, test, and deploy big data solutions using Spark Streaming. Collaborate with cross-functional teams to gather requirements and deliver high-quality results. Develop scalable and efficient algorithms for processing large datasets...
- ...responsible for ingesting data into our data lake and providing frameworks and services for operating on that data, including the use of Spark. Required qualifications, capabilities, and skills: Formal training or certification on software engineering concepts and 3+...
- ...Lambda, Glue, Lake Formation, Redshift, DynamoDB, and RDS for data engineering solutions. - Build and orchestrate analytics workflows using Apache Airflow. - Develop automation scripts and manage infrastructure using Terraform or CloudFormation. - Ensure data quality, security,... (Full time, Contract work)
- Key Responsibilities: - Develop and maintain data processing applications using Spark and Scala. - Collaborate with cross-functional teams to understand data requirements and design efficient solutions. - Implement test-driven development practices to enhance the reliability of...
- ...services and APIs to facilitate secure and efficient data exchange. Key Responsibilities: - Develop data processing applications using Spark and Hadoop - Write MapReduce jobs and data transformation logic - Implement machine learning models and analytics solutions - Code... (Hybrid work, Work from home)
- ...in designing and implementing software solutions using a Big Data, event-driven, and AWS tech stack. Required hands-on experience with Java, Spark, AWS, and Kafka. Hands-on practical experience in system design, application development, testing, and operational stability...
- ...Snowflake for data ingestion and processing. - Understand and apply PySpark best practices and performance tuning techniques. - Experience with Spark architecture and its components (e.g., Spark Core, Spark SQL, DataFrames). ETL & Data Warehousing: - Apply a strong understanding of ETL...
- ...hands-on experience in building data pipelines and big data technologies. ~ Proficiency with Big Data technologies such as Apache Spark, Apache Iceberg, Amazon Redshift, Athena, EMR, and other AWS services (S3, Lambda, EMR). Expertise in at least one programming...
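Several of the listings above ask for real-time ingestion pipelines built with Spark Structured Streaming and Kafka (the first posting, for example). A minimal sketch of such a job is shown below; the broker address, topic name, event schema, and output paths are illustrative assumptions rather than details from any posting, and the job also assumes the spark-sql-kafka connector package is available.

```python
# Minimal sketch of a Kafka -> Spark Structured Streaming ingestion job.
# Broker, topic, schema, and paths are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Assumed event schema for the incoming JSON messages.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

# Subscribe to the Kafka topic as an unbounded streaming source.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "events")
       .load())

# Kafka delivers values as bytes; cast to string and parse the JSON payload.
parsed = (raw
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Land the parsed events in the data lake as Parquet, with checkpointing
# so the stream can restart from where it left off after a failure.
query = (parsed.writeStream
         .format("parquet")
         .option("path", "/data/lake/events")
         .option("checkpointLocation", "/data/checkpoints/events")
         .trigger(processingTime="1 minute")
         .start())

query.awaitTermination()
```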
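Other postings highlight PySpark performance tuning, in particular partitioning. The following sketch illustrates the general idea under assumed table and column names ("orders", "order_date", "amount") and an assumed shuffle partition count; none of these come from the listings.

```python
# Minimal sketch of partition-aware PySpark tuning; the table, columns, and
# partition count are illustrative assumptions, not taken from any listing.
from pyspark.sql import SparkSession
from pyspark.sql.functions import sum as sum_

spark = SparkSession.builder.appName("perf-tuning").getOrCreate()

orders = spark.read.parquet("/data/lake/orders")

# Repartition on the aggregation key so the shuffle stays reasonably balanced,
# then aggregate per day.
daily = (orders
         .repartition(200, "order_date")
         .groupBy("order_date")
         .agg(sum_("amount").alias("total_amount")))

# Write the result partitioned by date so downstream readers can prune
# partitions instead of scanning the whole dataset.
(daily.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("/data/lake/orders_daily"))
```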
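Finally, several roles mention orchestrating such pipelines with Apache Airflow. A minimal DAG sketch follows; the DAG id, schedule, and spark-submit command are assumptions for illustration, using Airflow 2.4+ argument names.

```python
# Minimal sketch of an Airflow DAG that triggers a daily Spark batch job.
# DAG id, schedule, and the spark-submit command are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_spark_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ style; run once per day
    catchup=False,       # do not backfill missed runs
) as dag:
    run_spark_etl = BashOperator(
        task_id="run_spark_etl",
        bash_command="spark-submit --master yarn /opt/jobs/etl_job.py",
    )
```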