Average salary: Rs 919,998 per year
Rs 5 - 10 lakhs p.a.
...also identifying bottlenecks and devising solutions. Your role will involve developing high-performance, low-latency components to run Spark clusters and collaborating with global teams to propose best practices and standards. Technical Skills: Programming Languages: ...
Rs 2.5 - 5.5 lakhs p.a.
...applications. Identify bottlenecks and bugs, and devise appropriate solutions. Develop high-performance and low-latency components to run Spark clusters. Interpret functional requirements into design approaches suitable for the Big Data platform. Collaborate with global...
Rs 5 - 10 lakhs p.a.
...frameworks. Develop robust error handling and exception management mechanisms to ensure data integrity and system resilience within Spark jobs. Optimize PySpark jobs for performance, including partitioning, caching, and tuning of Spark configurations. Data Analysis...
Rs 20 - 34 lakhs p.a.
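Several of these listings name the same three PySpark performance levers: partitioning, caching, and tuning of Spark configurations. As a rough illustration only (placeholder values, not recommendations from any posting), the configuration side of that tuning typically lives in a `spark-defaults.conf` fragment such as:

```properties
# spark-defaults.conf (illustrative placeholder values only)
# Shuffle partition count: the usual first knob for partition tuning (default is 200)
spark.sql.shuffle.partitions        400
# Executor sizing, matched to the cluster's node shapes
spark.executor.memory               8g
spark.executor.cores                4
# Kryo serialization speeds up caching and shuffling of JVM objects
spark.serializer                    org.apache.spark.serializer.KryoSerializer
```

Caching itself is applied in job code rather than in configuration (e.g. `df.cache()` on a PySpark DataFrame that is reused across actions).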
...Job Title: Scala Developer with Spark & Azure. Experience: 6-10 years. Band: B3. Location: Pune - Kharadi (3 days work from office). Interview Mode: Virtual. Notice Period: maximum 30 days. Job Overview: We are looking for an experienced Scala Developer... (Work at office, Relocation)
- Job Title: Python and Spark Developer. Job Summary: We are seeking a highly skilled Python and Spark Developer to join our dynamic software development team. The ideal candidate will possess a strong background in Python development, particularly in server-side applications, ...
Rs 8 - 10 lakhs p.a.
...production rollout and infrastructure configuration. Demonstrable experience of successfully delivering big data projects using Kafka and Spark. Exposure to NoSQL databases such as Cassandra, HBase, DynamoDB, and Elasticsearch. Experience working with PCI data... (Long term contract)
Rs 2.5 - 4.5 lakhs p.a.
...Roles and Responsibilities: Design, develop, test, and deploy big data solutions using Spark Streaming. Collaborate with cross-functional teams to gather requirements and deliver high-quality results. Develop scalable and efficient algorithms for processing large datasets...
- We are looking for a skilled Spark Engineer to join our data engineering team. The ideal candidate will have strong expertise in big data technologies, particularly Apache Spark, and will be responsible for building scalable data pipelines, processing large datasets, and supporting...
Rs 5 - 10 lakhs p.a.
...and deep understanding of the Hadoop ecosystem, including HDFS, YARN, and MapReduce; tools like Hive and Pig/Flume; data processing frameworks like Spark; cloud platforms; orchestration tools such as Apache NiFi/Airflow and Apache Kafka. Expertise in web applications (Spring Boot, Angular, Java...
Rs 5 - 7 lakhs p.a.
...Job Description. Role: Spark & Scala Developer. Experience: 5+ years. Our expectation is that the candidate has strong DataFrame and programming skills, experience with complex objects and Scala or Datasets, and decent communication to express views...
Rs 3 - 5.5 lakhs p.a.
...a solid understanding of object-oriented programming and design patterns. Experience with big data technologies including Hadoop, Spark, Hive, HBase, and Kafka. Strong knowledge of SQL and NoSQL databases, with experience in Oracle preferred. Familiarity with data...
- ...be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave. Join us as a Spark Java Developer at Barclays, helping us build, maintain and support all first-line-of-controls applications. The...
- ...Join us as a BigData Quality Analyst - Spark, Scala, AWS at Barclays, where you will spearhead the evolution of our infrastructure and deployment pipelines, driving innovation and operational excellence. You will harness cutting-edge technology to build and manage robust, scalable... (Permanent employment, Immediate start)
- Hi Jobseeker, we are hiring a Python Spark Developer for our MNC client. Location: Pune, Hyderabad. Interview Mode: Virtual. Experience: 4 to 9 years. Notice Period: immediate to 15 days only. We are looking for a Data Engineer with experience in Python, Spark... (Immediate start)
Rs 0.5 - 5 lakhs p.a.
Database Engineering (DBE) supports Mastercard's Business Systems and Operations globally. As part of Technology Operations, DBE drives best practices, governance, and standards in database engineering. As a Lead Engineer within the Processing Service Delivery team, you will...
- ...or equivalent. 10+ years of total experience; 4+ years of relevant experience in the design and architecture of Big Data solutions using Spark; 3+ years of experience working with engineering resources for innovation; 4+ years of experience in understanding Big Data events... (Remote job)
- ...processing services (AWS Glue, AWS Catalog, AWS Kinesis, Lake Formation). Extensive experience with distributed processing engines such as Spark, including optimization strategies, cluster-level scaling, and operational maintenance of complex data environments. Practical... (Long term contract, Full time, Temporary work, Flexible hours)
- ...Strong knowledge of ETL/Ab Initio; SQL, Python, and other programming languages. Experience with big data technologies (e.g., Hadoop, Spark) and cloud services (e.g., AWS). Some other highly valued skills include: the ability to analyze complex data sets and develop... (Permanent employment)
- ...demonstrable hands-on experience with middleware technologies (Kafka, API gateways, etc.) and data engineering tools/frameworks like Apache Spark, Airflow, Flink, and the Hadoop ecosystem. Some other highly valued skills include: expertise building ELT pipelines and cloud/... (Long term contract, Permanent employment, Temporary work, Hybrid work, Work at office)
- ...Strong programming skills in Python, Scala, or Java. Hands-on experience with distributed data processing frameworks such as Apache Spark. Experience with data pipeline orchestration tools like Airflow. Strong understanding of data modeling, schema design, and database optimization...
- ...data collection techniques and DBMS principles, tools, and platforms. Hands-on experience with big data technologies (such as Hadoop, Spark, etc.). Ability to create insightful data visualizations for analysis and reporting. Practical understanding of machine learning... (Permanent employment, Full time, Flexible hours)
- ...databases, complex SQL queries, and performance optimization across large, distributed systems. Practical experience with Apache Spark (preferably PySpark). Experience with REST APIs: usage, definition, and implementation. Clear communication skills and the ability... (Hybrid work, Flexible hours, 1 day week)
- ...Azure / GCP). Familiarity with version control systems such as Git. Basic understanding of working with large datasets; exposure to Spark/PySpark is a plus. Desirable Skills: Exposure to deep learning frameworks (TensorFlow or PyTorch). Basic understanding of MLOps concepts... (Full time)
- ...The candidate should come from a strong technological background, with strong working experience in Python, Spark, and GCP technology, leading the design and implementation of the domain's GCP and Big Data applications. The candidate should... (Full time, Flexible hours)
- ...Responsibilities: Architect end-to-end solutions using Microsoft Fabric: OneLake, Lakehouse, Warehouse, Data Pipelines, Notebooks, Spark, and semantic modelling. Design ingestion patterns (batch/near real-time), transformation layers, and consumption strategies (Power... (Worldwide)
- ...You should be confident working with multithreading and have strong knowledge of data structures. Basic-to-good knowledge of Apache Spark is expected. Key Responsibilities: Design, build, and maintain Java/Spring Boot components and services. Write clean, efficient, and...
- ...Google Cloud Platform. You will be responsible for the "heavy lifting" of data: constructing robust ETL pipelines, managing large-scale Spark clusters, and ensuring our BigQuery warehouse is performant and scalable. Beyond core engineering, you will help bridge the gap... (Full time, Local area, Shift work)
- ...of hands-on experience (mandatory). Strong experience as a Data Engineer or Data Warehouse professional. Proficiency in Apache Spark, SQL, and Python/Scala. Experience working with large-scale data systems. Strong understanding of data modeling, performance tuning,... (Long term contract, Full time)
- ...Terraform. Build scalable data and ML environments using AWS services such as S3, ECR, ECS/EKS, Lambda, VPC, IAM, and CloudWatch. Build Spark-based ETL pipelines using AWS Glue, EMR, or Spark on Kubernetes. Ensure compliance with AWS security best practices, including... (Full time, Local area, Remote job, Flexible hours)
- ...with ML frameworks like TensorFlow, PyTorch, and Scikit-learn. Experience with large-scale datasets and distributed data processing (e.g., Spark, Beam, Airflow). Solid foundation in data science: statistical modeling, A/B testing, time series, and experimentation. Proficient... (Full time, Flexible hours)

