Average salary: Rs 934,626 / year
- ...We’re Hiring: Spark-Scala Developer (Big Data Engineer). Location: Bangalore (Day 1 onsite). Notice period: Immediate to 2–3 weeks. Experience: 5+ years. Note: Candidates with only PySpark experience will not be considered. Share your profiles: ****@*****.***... (Immediate start)
- Key Responsibilities: Design, develop, and manage data pipelines using Databricks (Spark, Delta Lake); optimize large-scale data processing workflows for performance and reliability; collaborate with data scientists, analysts, and business stakeholders to gather requirements...
- Role Overview: As a Spark Software Engineer, you will be instrumental in designing, developing, and maintaining robust data pipelines and scalable data processing solutions for our clients. You will collaborate closely with data scientists, data engineers, and business stakeholders...
- ...0 PM). Experience: 7–10 years. Key Responsibilities: Design, develop, and maintain scalable data processing pipelines using Hadoop and Spark; implement data integration and ETL processes to ingest and transform large datasets; collaborate with data scientists, analysts, business... (Full time)
- ...Job Description / Job Summary: We are seeking a highly skilled AWS Data Engineer with expertise in AI, Python, Spark, and modern AI productivity tools such as Copilot and Claude. The ideal candidate will design, build, and optimize scalable data pipelines and architectures...
- ...platforms that improve learning outcomes. To support our growing Data & Analytics capability, we are hiring a Senior Data Engineer (Spark, Databricks & AWS) who will play a critical role in strengthening McGraw Hill's data platform and enabling high-impact, data-driven decision... (Remote job)
- ...certification on software engineering concepts and 3+ years of applied experience. Experience as a Data Engineer. Experience with Python, Spark, and AWS. Strong hands-on experience with AWS cloud services: EMR, Terraform, CloudWatch, Redshift. Experience with relational SQL and NoSQL...
Rs 20 - 35 lakhs p.a.
- ...understanding of software engineering principles. Excellent analytical and troubleshooting skills. Mandatory Skills (Specific Role): Spark & Scala, HQL/SQL, scheduling tools (Oozie/Control-M), shell scripting/Python. Tools & Technologies: HUE, Hive...
- ...About Job: Software Engineer (Scala, Spark, Azure), Chennai. As a Big Data Software Engineer, you'll be part of a team of smart, highly skilled technologists who are passionate about learning and prototyping cutting-edge technologies. Ingestion Services is working on unifying... (Full time, Local area, Flexible hours)
Rs 8 - 12 lakhs p.a.
- ...Java and Spark Architect – TTT. The opportunity: As a TTT services professional at Ernst & Young, you will be involved in the development of cutting-edge, industry-leading solutions using the latest tools and technologies for enabling GST compliance. You will be part of a talented... (Flexible hours)
Rs 3 - 10 lakhs p.a.
- ...ADF data ingestion and integration with other services. Azure Databricks: Experience in Big Data components such as Kafka, Spark SQL, DataFrames, Hive DB, etc. implemented using Azure Databricks would be preferred. Azure Databricks integration with other services...
- ...experience with cloud methodologies (IaaS, PaaS, SaaS), microservices, orchestration, etc. Experience in Apache Iceberg, NetApp S3, Apache Spark. Solid understanding of software architecture, design patterns, and best practices. Experience in investment banking is an added... (Work at office)
Rs 5 - 10 lakhs p.a.
...also identifying bottlenecks and devising solutions. Your role will involve developing high-performance, low-latency components to run Spark clusters and collaborating with global teams to propose best practices and standards. Technical Skills: Programming Languages: ...
Rs 2.5 - 4.5 lakhs p.a.
...Roles and Responsibilities: Design, develop, test, and deploy big data solutions using Spark Streaming. Collaborate with cross-functional teams to gather requirements and deliver high-quality results. Develop scalable and efficient algorithms for processing large datasets...
Rs 8 - 10 lakhs p.a.
...production rollout and infrastructure configuration. Demonstrable experience of successfully delivering big data projects using Kafka and Spark. Exposure to NoSQL databases such as Cassandra, HBase, DynamoDB, and Elasticsearch. Experience working with PCI data... (Long term contract)
Rs 2.5 - 5.5 lakhs p.a.
...applications. Identify bottlenecks and bugs, and devise appropriate solutions. Develop high-performance, low-latency components to run Spark clusters. Interpret functional requirements into design approaches suitable for the Big Data platform. Collaborate with global...
Rs 3 - 8 lakhs p.a.
...transactional analytics systems. Proficiency in Core Java, Spring Boot, and J2SE. Experience with Big Data technologies like Hadoop and Spark. Cloud computing experience with AWS or Azure. Database experience with RDBMS (e.g., Oracle) and NoSQL databases (e.g., MongoDB)...
- ...-making across the organization. Key Responsibilities: Architect and implement scalable data solutions leveraging Databricks, Apache Spark, and Delta Lake; collaborate with cross-functional teams to define data strategies, pipelines, and integration frameworks; optimize...
- ...At least 7 years of experience in the ETL development area. Must have working knowledge of Apache Spark and the Kafka framework (kSQL, MirrorMaker, etc.). Strong programming skills in at least one of these programming...
- ...design and maintain scalable data pipelines and analytics systems. The ideal candidate will have 2–4 years of experience with Apache Spark, Scala/Python, Trino/Presto, Hadoop, Kafka, and data lake technologies such as Delta Lake. Experience with Elasticsearch, streaming data,... (Hybrid work)
- Job Title: Python and Spark Developer. Job Summary: We are seeking a highly skilled Python and Spark Developer to join our dynamic software development team. The ideal candidate will possess a strong background in Python development, particularly in server-side applications,...
Rs 5 - 10 lakhs p.a.
...frameworks. Develop robust error handling and exception management mechanisms to ensure data integrity and system resilience within Spark jobs. Optimize PySpark jobs for performance, including partitioning, caching, and tuning of Spark configurations. Data analysis...
Rs 3 - 12.5 lakhs p.a.
...Java. Experience with common back-end tech stacks. Experience working with coding languages, preferably SQL, Java, Spark SQL, PySpark, Python. Experience working on web services/REST APIs, GraphQL, event-driven real-time systems, and batch components using...
- Mission: As a Spark Technical Solutions Engineer, you will provide deep-dive technical and consulting solutions for challenging Spark/ML/AI/Delta/Streaming/Lakehouse issues reported by our customers and resolve any challenges involving the Databricks... (Weekend work, Weekday work)
- ...Job Description: The mandatory skills include Apache Spark with Scala, Java, or Python; strong RDBMS development experience using MS SQL Server or equivalent databases such as Oracle, MySQL, or PostgreSQL; Linux with shell scripting; Python; strong analytical and problem...
Rs 4.5 - 7 lakhs p.a.
...strong knowledge of Java and related frameworks. Expertise in building robust, scalable, and maintainable applications with Java and Spark. Able to write clean, maintainable, and efficient Java code following best practices. Deep understanding of Java core and...
- ...building RESTful APIs. Strong experience in Python programming. Hands-on working experience with PySpark and distributed computing (Spark). Solid experience with Dapper, SQL, xUnit, NUnit, RabbitMQ. Strong database skills, preferably with PostgreSQL. Solid...
- ...stability. Advanced in one or more programming languages, including Java and/or Python. Strong big data and database skills, including Spark, Databricks, and/or data lake. Experience with AWS cloud computing using ECS, EKS, EMR, Lambda, etc. Advanced knowledge of software... (Full time, For contractors)
- ...foster a culture of growth and inclusion. Job Responsibilities: Design, develop, and maintain scalable data pipelines using Python and Spark; build and optimize ETL workflows in Databricks leveraging Delta Lake features; integrate and manage data across AWS services such... (Full time)
- ...hands-on experience designing, developing, and optimizing big data solutions using the Hadoop ecosystem, with a strong focus on Apache Spark. You will be responsible for building and maintaining scalable data pipelines, processing large datasets, and collaborating with data... (Full time)

