Average salary: Rs 934,626 per year

  •  ...We’re Hiring: Spark-Scala Developer (Big Data Engineer) Location: Bangalore (Day 1 Onsite) ⏳ Notice Period: Immediate to 2–3 Weeks Experience: 5+ Years ❗ Note: Candidates with only PySpark experience will not be considered Share your profiles: ****@*****.***.... 
    Immediate start

    startechsinc

    Bangalore
    a month ago
  • Key Responsibilities: - Design, develop, and manage data pipelines using Databricks (Spark, Delta Lake). - Optimize large-scale data processing workflows for performance and reliability. - Collaborate with Data Scientists, Analysts, and Business Stakeholders to gather requirements... 

    Info

    Jaipur
    26 days ago
  • Role Overview: As a Spark Software Engineer, you will be instrumental in designing, developing, and maintaining robust data pipelines and scalable data processing solutions for our clients. You will collaborate closely with data scientists, data engineers, and business stakeholders... 

    Consulting Pandits

    Mumbai
    13 days ago
  •  ...0 PM) Experience: 7-10 years. Key Responsibilities: - Design, develop, and maintain scalable data processing pipelines using Hadoop and Spark. - Implement data integration and ETL processes to ingest and transform large datasets. - Collaborate with data scientists, analysts, business... 
    Full time

    VAMRITECH PRIVATE LIMITED

    Bangalore
    4 days ago
  •  ...Job Description. Job Summary: We are seeking a highly skilled AWS Data Engineer with expertise in AI, Python, Spark, and modern AI productivity tools such as Copilot and Claude. The ideal candidate will design, build, and optimize scalable data pipelines and architectures... 

    JPMorgan Chase & Co.

    Hyderabad
    3 days ago
  •  ...platforms that improve learning outcomes. To support our growing Data & Analytics capability, we are hiring a Senior Data Engineer - Spark, Databricks & AWS who will play a critical role in strengthening McGraw Hill's data platform and enabling high-impact, data-driven decision... 
    Remote job

    Macmillan McGrawHill

    Noida
    6 days ago
  •  ...certification on software engineering concepts and 3+ years of applied experience. Experience as a Data Engineer. Experience with Python, Spark, and AWS. Strong hands-on experience with AWS cloud services: EMR, Terraform, CloudWatch, Redshift. Experience with relational SQL and NoSQL... 

    JP Morgan Chase & Co.

    Bangalore
    8 days ago
  • Rs 20 - 35 lakhs p.a.

     ...understanding of software engineering principles. Excellent analytical and troubleshooting skills. Mandatory Skills (Specific Role): Spark & Scala; HQL/SQL; scheduling tools (Oozie/Control-M); shell scripting/Python. Tools & Technologies: HUE, Hive... 

    Matrix Hr Technologies Private Limited

    Secunderabad
    a month ago
  •  ...About Job: Software Engineer (Scala, Spark, Azure), Chennai. As a Big Data Software Engineer you'll be part of a team of smart, highly skilled technologists who are passionate about learning and prototyping cutting-edge technologies. Ingestion Services is working on unifying... 
    Full time
    Local area
    Flexible hours

    NielsenIQ

    Chennai
    23 days ago
  • Rs 8 - 12 lakhs p.a.

     ...Java and Spark Architect, TTT. The opportunity: As a TTT services professional at Ernst & Young, you will be involved in the development of cutting-edge, industry-leading solutions using the latest tools and technologies for enabling GST compliance. You will be part of a talented... 
    Flexible hours

    Ernst & Young LLP

    Secunderabad
    a month ago
  • Rs 3 - 10 lakhs p.a.

     ...ADF data ingestion and integration with other services. Azure Databricks: - Experience in Big Data components such as Kafka, Spark SQL, DataFrames, Hive DB, etc., implemented using Azure Databricks would be preferred. - Azure Databricks integration with other services... 

    Carbynetech

    Lucknow
    27 days ago
  •  ...experience with Cloud methodologies (IaaS, PaaS, SaaS), microservices, orchestration, etc. Experience in Apache Iceberg, NetApp S3, Apache Spark. Solid understanding of software architecture, design patterns, and best practices. Experience in investment banking is an added... 
    Work at office

    The Wells Fargo Foundation

    Bangalore
    9 days ago
  • Rs 5 - 10 lakhs p.a.

     ...also identifying bottlenecks and devising solutions. Your role will involve developing high-performance, low-latency components to run Spark clusters and collaborating with global teams to propose best practices and standards. Technical Skills: Programming Languages:... 

    Synechron Technologies Private Limited

    Pune
    11 days ago
  • Rs 2.5 - 4.5 lakhs p.a.

     ...Roles and Responsibilities Design, develop, test, and deploy big data solutions using Spark Streaming. Collaborate with cross-functional teams to gather requirements and deliver high-quality results. Develop scalable and efficient algorithms for processing large datasets... 

    Cognizant Technology Solutions India Pvt Ltd

    Bangalore
    5 days ago
  • Rs 8 - 10 lakhs p.a.

     ...production rollout and infrastructure configuration. Demonstrable experience of successfully delivering big data projects using Kafka and Spark. Exposure working on NoSQL databases such as Cassandra, HBase, DynamoDB, and Elasticsearch. Experience working with PCI Data... 
    Long term contract

    Ifintalent Global Private Limited

    Chennai
    a month ago
  • Rs 2.5 - 5.5 lakhs p.a.

     ...applications. Identify bottlenecks and bugs, and devise appropriate solutions. Develop high-performance and low-latency components to run Spark clusters. Interpret functional requirements into design approaches suitable for the Big Data platform. Collaborate with global... 

    Synechron Technologies Private Limited

    Pune
    11 days ago
  • Rs 3 - 8 lakhs p.a.

     ...transactional analytics systems. Proficiency in Core Java, Spring Boot, and J2SE. Experience with Big Data technologies like Hadoop and Spark. Cloud computing experience with AWS or Azure. Database experience with RDBMS (e.g., Oracle) and NoSQL databases (e.g., MongoDB).... 

    JP Morgan

    Mumbai
    28 days ago
  •  ...-making across the organization. Key Responsibilities: - Architect and implement scalable data solutions leveraging Databricks, Apache Spark, and Delta Lake. - Collaborate with cross-functional teams to define data strategies, pipelines, and integration frameworks. - Optimize... 

    MODER SOLUTIONS INDIA PRIVATE LIMITED

    Bangalore
    16 days ago
  •  ...At least 7 years of experience in the ETL development area. Must have working knowledge of Apache Spark and the Kafka framework (kSQL, MirrorMaker, etc.). Strong programming skills in at least one of these programming... 

    Luxoft

    Chennai
    11 days ago
  •  ...design and maintain scalable data pipelines and analytics systems. The ideal candidate will have 2-4 years of experience with Apache Spark, Scala/Python, Trino/Presto, Hadoop, Kafka, and data lake technologies such as Delta Lake. Experience with Elasticsearch, streaming data,... 
    Hybrid work

    Octro Inc

    Noida
    9 days ago
  • Job Title: Python and Spark Developer. Job Summary: We are seeking a highly skilled Python and Spark Developer to join our dynamic software development team. The ideal candidate will possess a strong background in Python development, particularly in server-side applications,... 

    Interface Consultancy Services

    Bangalore
    3 days ago
  • Rs 5 - 10 lakhs p.a.

     ...frameworks. Develop robust error handling and exception management mechanisms to ensure data integrity and system resilience within Spark jobs. Optimize PySpark jobs for performance, including partitioning, caching, and tuning of Spark configurations. Data Analysis... 

    PHOTON

    Secunderabad
    a month ago
  • Rs 3 - 12.5 lakhs p.a.

     ...Java. Experience with common back-end tech stacks. Experience working with coding languages, preferably SQL, Java, Spark-SQL, PySpark, Python. Experience working on web services/REST APIs, GraphQL, event-driven real-time systems, and batch components using... 

    Xoom

    Bangalore
    a month ago
  • Mission: As a Spark Technical Solutions Engineer, you will provide deep-dive technical and consulting solutions for challenging Spark/ML/AI/Delta/Streaming/Lakehouse issues reported by our customers and resolve any challenges involving the Databricks... 
    Weekend work
    Weekday work

    impronics technologies

    Gurgaon
    6 days ago
  •  ...Job Description: The mandatory skills include Apache Spark with Scala, Java, or Python; strong RDBMS development experience using MS SQL Server or equivalent databases such as Oracle, MySQL, or PostgreSQL; Linux with shell scripting; Python; strong analytical and problem... 

    Talent HR Networks Private Limited

    Mumbai
    9 days ago
  • Rs 4.5 - 7 lakhs p.a.

     ...strong knowledge of Java and related frameworks. Expertise in building robust, scalable, and maintainable applications with Java and Spark. Able to write clean, maintainable, and efficient Java code following best practices. Deep understanding of Java core and... 

    Cognizant Technology Solutions India Pvt Ltd

    Secunderabad
    5 days ago
  •  ...building RESTful APIs. Strong experience in Python programming. Hands-on working experience with PySpark and distributed computing (Spark). Solid experience with Dapper, SQL, XUnit, NUnit, RabbitMQ. Strong database skills, preferably with PostgreSQL. Solid... 

    Nagarro

    Gurgaon
    5 days ago
  •  ...stability. Advanced in one or more programming languages, including Java and/or Python. Strong big data and database skills, including Spark, Databricks, and/or Data Lake. Experience with AWS cloud computing using ECS, EKS, EMR, Lambda, etc. Advanced knowledge of software... 
    Full time
    For contractors

    JPMorganChase

    Bangalore
    a month ago
  •  ...foster a culture of growth and inclusion. Job responsibilities: Design, develop, and maintain scalable data pipelines using Python and Spark. Build and optimize ETL workflows in Databricks, leveraging Delta Lake features. Integrate and manage data across AWS services such... 
    Full time

    JPMorganChase

    Bangalore
    a month ago
  •  ...hands-on experience designing, developing, and optimizing big data solutions using the Hadoop ecosystem, with a strong focus on Apache Spark. You will be responsible for building and maintaining scalable data pipelines, processing large datasets, and collaborating with data... 
    Full time

    Citi

    Chennai
    3 days ago