Average salary: Rs 1,375,000 per year
- Responsibilities: Design, develop, and maintain robust and scalable data pipelines using Apache Spark and Scala on the Databricks platform. Implement ETL (Extract, Transform, Load) processes for various data sources, ensuring data quality, integrity, and efficiency. Optimize...
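Several of these listings describe the same ETL pattern in PySpark or Scala. A minimal PySpark sketch of that pattern follows; the input path, output path, and column names are illustrative assumptions, not taken from any posting.

```python
# Minimal ETL sketch in PySpark (paths and columns are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-example").getOrCreate()

# Extract: read raw CSV data.
raw = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Transform: basic quality checks and a typed date column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull())
       .withColumn("order_date", F.to_date("order_date"))
)

# Load: write partitioned Parquet for downstream consumers.
clean.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/orders")
```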
- ...solutions. Develop high-performance and low-latency components to run Spark clusters. Interpret functional requirements into design... ...Big Data Technologies: Experience with HDFS, Hive, HBase, Apache Spark, and Kafka. Familiarity with building self-service platform...
- ...their usage. At least 8 years of experience in designing and developing large-scale, distributed data processing pipelines using Apache Spark and related technologies. Expertise in Spark Core, Spark SQL, and Spark Streaming. Experience with Hadoop, HDFS, Hive...
- ...mechanisms to ensure data integrity and system resilience within Spark jobs. Optimize PySpark jobs for performance, including partitioning... ...practices in a big data environment. ~ Proficiency in PySpark, Apache Spark, and related big data technologies for data processing,... (Permanent employment, Full time)
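Partition tuning comes up repeatedly in these requirements. Here is a minimal sketch of what it can look like in PySpark; the partition counts, key column, and paths are assumptions made for illustration.

```python
# Illustrative PySpark partition tuning (all numbers and names are assumptions).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-tuning").getOrCreate()

df = spark.read.parquet("/data/events")  # hypothetical input path

# Repartition by the aggregation key so related rows share a partition,
# cutting shuffle work in the groupBy below.
df = df.repartition(200, "customer_id")

counts = df.groupBy("customer_id").count()

# Coalesce before writing to avoid producing thousands of tiny output files.
counts.coalesce(16).write.mode("overwrite").parquet("/data/agg/customer_counts")
```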
- ...Role: We are looking for a Data Engineer with strong experience in Spark (PySpark), SQL, and data pipeline architecture. You will play a... ...these mentioned skills would be great). Familiarity with orchestration frameworks such as Apache Airflow or Apache NiFi. (ref:hirist.tech)
- ...~ Knowledge of how to develop data-intensive applications using Spark. ~ Knowledge of writing SQL queries to wrangle data from relational... ...Terraform, Data Lake & Lake Formation, open table formats like Apache Iceberg. ~ Experience with EMR. ~ Experience with CI/CD such as... (Full time, Flexible hours)
- Job Description: We are looking for an experienced Senior Spark Developer with a strong background in Scala, Java, and Big Data... ..., and optimize large-scale data processing applications using Apache Spark with Scala and Java. Develop and maintain low-latency...
- ...Join us as a Senior Spark Data Engineer at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation... ..., building, and maintaining data processing systems using Apache Spark and related big data technologies. Ensure data quality,...
- ...involve developing high-performance, low-latency components to run Spark clusters and collaborating with global teams to propose best... ...patterns. Big Data Technologies: Experience with HDFS, Hive, HBase, Apache Spark, and Kafka. Data Processing: Proficient in processing...
- ...Position Overview Job Title: Spark/Python/Pentaho Developer. Location: Pune, India. Role Description: A Spark/Python/Pentaho Developer is needed for a data integration project, mostly batch oriented, using Python/PySpark/Pentaho. What we’ll offer you As part... (Full time, Flexible hours)
- ...: 6+ years of experience as a Big Data Engineer or in a similar role. Strong expertise in big data technologies such as Hadoop, Spark, Hive, HBase, Kafka, and Flume. Proficiency in SQL and at least one programming language (Java, Scala, or Python). Experience with cloud...
- ...services and APIs to facilitate secure and efficient data exchange. Key Responsibilities: Develop data processing applications using Spark and Hadoop. Write MapReduce jobs and data transformation logic. Implement machine learning models and analytics solutions. Code... (Hybrid work, Work from home)
- ...experience, with at least 2 years as a Big Data Architect. Strong understanding of big data technologies, including Hadoop, Spark, NoSQL databases, and cloud-based data services (AWS, Azure, GCP). Experience with open-source ecosystem programming languages, such...
- ...Snowflake for data ingestion and processing. Understand and apply PySpark best practices and performance-tuning techniques. Experience with Spark architecture and its components (e.g., Spark Core, Spark SQL, DataFrames). ETL & Data Warehousing: Apply a strong understanding of ETL...
- ...Roles and Responsibilities: Design, develop, test, and deploy big data solutions using Spark Streaming. Collaborate with cross-functional teams to gather requirements and deliver high-quality results. Develop scalable and efficient algorithms for processing large datasets...
- ...Position Overview Job Title: Data Engineer (ETL, Big Data, Hadoop, Spark, GCP), Assistant Vice President. Location: Pune, India. Role Description: The senior engineer is responsible for developing and delivering elements of engineering solutions to accomplish business... (Flexible hours)
- ...Job Description: Spark expertise: expert proficiency in Spark; ability to design and implement efficient data processing workflows; experience with Spark SQL and DataFrames; good exposure to Big Data architectures and a good understanding of the Big Data eco...
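Spark SQL and the DataFrame API express the same query in two ways, as several of these postings require. A minimal sketch follows; the table and column names are invented for the example.

```python
# The same aggregation via the DataFrame API and via Spark SQL.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sql-vs-dataframe").getOrCreate()

sales = spark.createDataFrame(
    [("IN", 120.0), ("IN", 80.0), ("US", 200.0)],
    ["country", "amount"],
)

# DataFrame API version.
by_country_df = sales.groupBy("country").agg(F.sum("amount").alias("total"))

# Equivalent Spark SQL against a temporary view.
sales.createOrReplaceTempView("sales")
by_country_sql = spark.sql(
    "SELECT country, SUM(amount) AS total FROM sales GROUP BY country"
)

by_country_df.show()
by_country_sql.show()
```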
- ...Position Overview Job Title: Sr. Spark/Python/Pentaho Developer, AVP. Location: Pune, India. Role Description: A Sr. Spark/Python/Pentaho Developer is needed for a data integration project, mostly batch oriented, using Python/PySpark/Pentaho. What we’ll offer... (Flexible hours)
- ...work experience. Proven 5-8 years of experience as a Senior Data Engineer or in a similar role. Experience with big data tools: Hadoop, Spark, Kafka, Ansible, Chef, Terraform, Airflow, Protobuf RPC, etc. Expert-level SQL skills for data manipulation (DML) and validation...
- Spark Rockstar Wanted! Imagine a world where big data is not just a buzzword, but a playground where you get to build innovative solutions that make a real impact. We're on the hunt for a Senior Big Data Engineer who's passionate about Spark and has the skills to prove it. If...
- ...Ability to assess current processes, identify improvement areas, and suggest technology solutions. Knowledge of one or two industry domains. Client-interfacing skills. Project and team management. Technical and Professional Requirements: Spark, Scala, Big Data
- ...(Must-have). 2+ years of experience in Python programming (Must-have). Sound knowledge of distributed systems and data processing with Spark. Knowledge of any tool for scheduling and orchestrating data pipelines or workflows, preferably Airflow (Must-have). 1+ years of experience... (Flexible hours, Shift work)
- ...Solid grasp of software engineering principles and MLOps practices. Preferred Qualifications: Experience with Big Data tools like Apache Spark, Kafka, Kinesis. Familiarity with cloud ML platforms such as AWS SageMaker or GCP ML Engine. Exposure to data visualization tools... (Hybrid work, Immediate start)
- ...business needs. Key Result Areas and Activities: Design and implement Lakehouse architectures using Databricks, Delta Lake, and Apache Spark. Lead the development of data pipelines, ETL/ELT processes, and data integration strategies. Collaborate with business and technical... (Full time)
- ...work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and... ...Linux fundamentals and scripting (Python, Shell) • Experience with Apache NiFi, Airflow, YARN, and ZooKeeper • Proficient in monitoring...
- ...data processing frameworks. Hands-on experience with technologies such as Apache NiFi, PostgreSQL, dbt, and GCP. Experience with GCP Cloud Composer,... ...ingestion and stream-analytic solutions leveraging technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python, and the Hadoop platform...
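The Kafka-plus-Spark streaming-ingestion pattern named here is commonly built with Structured Streaming. A minimal sketch under stated assumptions: the broker address, topic name, and checkpoint path are placeholders, and the spark-sql-kafka connector package must be on the classpath.

```python
# Structured Streaming ingestion from Kafka (broker, topic, and paths are placeholders).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Read a Kafka topic as an unbounded stream.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers raw bytes; cast the message value to string before parsing.
parsed = events.select(F.col("value").cast("string").alias("json"))

# Write to the console sink for demonstration, with a checkpoint for recovery.
query = (
    parsed.writeStream.format("console")
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .start()
)
query.awaitTermination()
```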
- ...experience with RDBMS and NoSQL databases; excellent communication and presentation skills. Preferred Skills: Experience with BDD, Apache Spark, C#, Jira, GitLab, Confluence, Docker, and Kubernetes; understanding of CI/CD processes with GitLab. Experience: Minimum of 3-6...
- ...data pipeline development. Strong expertise in Ab Initio and experience with modern data engineering tools and frameworks (e.g., Apache Spark, Kafka, AWS/GCP/Azure). Proficiency in programming languages such as Python, Java, or Scala. Experience with cloud-based data... (Long term contract, Temporary work)
- ...experience as a Data Engineer / Big Data Infrastructure Engineer. Hands-on experience with data streaming technologies: Apache Kafka, Apache Pulsar, Spark Streaming, Kafka Streams, or Apache Flink. B.Sc. in Computer Science (or equivalent). Experience working on high-scale,...
- ...expertise in Python, PySpark, SQL, AWS Cloud (EMR, Glue, Athena), Apache Airflow, and data warehousing concepts. You will be responsible... ...manage data models supporting analytics and reporting. Optimize Spark jobs for performance, cost efficiency, and scalability. Ensure... (Long term contract, Hybrid work)
