Average salary: Rs 1,375,000 / year
- ...solutions. Develop high-performance and low-latency components to run Spark clusters. Interpret functional requirements into design... ...Big Data Technologies: Experience with HDFS, Hive, HBase, Apache Spark, and Kafka. Familiarity with building self-service platform...
- ...for learning. Build products in the data analytics space. Roles and Responsibilities: - Develop and manage robust ETL pipelines using Apache Spark (Scala). - Understand Spark concepts, performance optimization techniques, and governance tools. - Develop a highly scalable,...
- ...mechanisms to ensure data integrity and system resilience within Spark jobs. Optimize PySpark jobs for performance, including partitioning... ...practices in a big data environment. ~ Proficiency in PySpark, Apache Spark, and related big data technologies for data processing,... (Permanent employment, Full time)
- ...Join us as a Spark Java Developer at Barclays, helping us build, maintain, and support all First Line of Controls applications... ...Services, SQL. It would be a great advantage if you know Kafka, Apache Ignite, the Cucumber framework, or React. Should be aware of Agile/Scrum... (Permanent employment)
- ...their usage. At least 8 years of experience in designing and developing large-scale, distributed data processing pipelines using Apache Spark and related technologies. Expertise in Spark Core, Spark SQL, and Spark Streaming. Experience with Hadoop, HDFS, Hive...
- ...Role: We are looking for a Data Engineer with strong experience in Spark (PySpark), SQL, and data pipeline architecture. You will play a... ...these mentioned skills would be great). - Familiarity with orchestration frameworks such as Apache Airflow or Apache NiFi. (ref:hirist.tech)
- ...~ Knowledge of how to develop data-intensive applications using Spark. ~ Knowledge of writing SQL queries to wrangle data from relational... ...Terraform, Data Lake & Lake Formation, open table formats like Apache Iceberg. ~ Experience in EMR. ~ Experience with CI/CD such as... (Full time, Flexible hours)
- ...reusable and well-documented code - Deliver big data projects using Spark, Scala, Python, SQL - Maintain and tune existing Spark... ...participate in daily agile/scrum meetings - Take responsibility for Apache Spark development and implementation - Translate complex technical... (Full time)
- ...Responsibilities: - Lead the design and deployment of scalable data processing solutions on AWS using Java and Spark. - Architect and implement big data pipelines with Apache Spark on AWS EMR. - Develop and deploy serverless applications using AWS Lambda and integrate with other... (Hybrid work)
- ...and implement big data pipelines and ETL processes using Hadoop, Spark, Hive, and Kafka. - Develop and maintain data ingestion, processing... ...ecosystem: HDFS, YARN, MapReduce. - Hands-on experience with Apache Spark, Hive, and Kafka. - Proficiency in SQL and at least one programming... (Full time)
- ...involve developing high-performance, low-latency components to run Spark clusters and collaborating with global teams to propose best... ...Big Data Technologies: Experience with HDFS, Hive, HBase, Apache Spark, and Kafka. Data Processing: Proficient in processing...
- ...(Must-have). 2+ years of experience in Python programming (Must-have). Sound knowledge of distributed systems and data processing with Spark. Knowledge of any tool for scheduling and orchestration of data pipelines or workflows, preferably Airflow (Must-have). 1+ years of experience... (Flexible hours, Shift work)
- Mandatory Skills: 7+ years' experience in all of the following: - Java - Spring Boot - Microservices - GCP - Apache Beam - RDBMS, NoSQL - Data formats: flat file, JSON, Avro, XML. Responsibilities: - Bachelor's in Computer Science, Engineering, or equivalent experience - 7+ years of experience...
- ...Chain Analytics - Strong problem-solving and quantitative skills. - Experience working with large datasets and distributed computing tools (e.g., Spark, Hadoop) is a plus. - Familiarity with data visualization tools (e.g., Tableau, Power BI, matplotlib, seaborn). (ref:hirist.tech) (Hybrid work, Immediate start, Shift work)
- ...services and APIs to facilitate secure and efficient data exchange. Key Responsibilities: - Develop data processing applications using Spark, Hadoop - Write MapReduce jobs and data transformation logic - Implement machine learning models and analytics solutions - Code... (Hybrid work, Work from home)
- ...dashboards for senior management · Design and implement optimal processes · Regression testing of the releases · Big Data: Spark, Hive, Databricks · Language: SQL, Java/Python · BI & Analytics: Power BI (DAX), Tableau, Dataiku · Operating System: UNIX ·...
- ...experience, with at least 2 years of experience as a Big Data Architect- Strong understanding of big data technologies, including Hadoop, Spark, NoSQL databases, and cloud-based data services (AWS, Azure, GCP)- Experience with open-source ecosystem programming languages, such...
- ...Position Overview Job Title: Sr. Spark/Python/Pentaho Developer, AVP Location: Pune, India Role Description Sr. Spark/Python/Pentaho Developer to work on a Data Integration project, mostly batch-oriented using Python/PySpark/Pentaho. What we’ll offer... (Full time, Flexible hours)
- ...science or equivalent.- 8+ years of total experience.- 4+ years of relevant experience in design and architecture Big Data solutions using Spark.- 3+ years experience in working with engineering resources for innovation.- 4+ years experience in understanding Big Data events flow...
- ...Snowflake for data ingestion and processing.- Understand and apply PySpark best practices and performance tuning techniques.- Experience with Spark architecture and its components (e.g., Spark Core, Spark SQL, DataFrames).ETL & Data Warehousing : - Apply strong understanding of ETL...
- ...Roles and Responsibilities Design, develop, test, and deploy big data solutions using Spark Streaming. Collaborate with cross-functional teams to gather requirements and deliver high-quality results. Develop scalable and efficient algorithms for processing large datasets...
- ...Position Overview Job Title: Data Engineer (ETL, Big Data, Hadoop, Spark, GCP), Assistant Vice President Location: Pune, India Role Description The senior engineer is responsible for developing and delivering elements of engineering solutions to accomplish business... (Flexible hours)
- ...work experience. Proven 5-8 years of experience as a Senior Data Engineer or similar role. Experience with big data tools: Hadoop, Spark, Kafka, Ansible, Chef, Terraform, Airflow, Protobuf RPC, etc. Expert-level SQL skills for data manipulation (DML) and validation...
- Spark Rockstar Wanted! Imagine a world where big data is not just a buzzword, but a playground where you get to build innovative solutions that make a real impact. We're on the hunt for a Senior Big Data Engineer who's passionate about Spark and has the skills to prove it. If...
- ...Position Overview Job Title: Spark/Python/Pentaho Developer Location: Pune, India Role Description Spark/Python/Pentaho Developer to work on a Data Integration project, mostly batch-oriented using Python/PySpark/Pentaho. What we’ll offer you As part... (Full time, Flexible hours)
- ...: 6+ years of experience as a Big Data Engineer or in a similar role. Strong expertise in big data technologies such as Hadoop, Spark, Hive, HBase, Kafka, Flume. Proficiency in SQL and at least one programming language (Java, Scala, or Python). Experience with cloud...
- ...Ability to assess current processes, identify improvement areas, and suggest technology solutions. Knowledge of one or two industry domains. Client-interfacing skills. Project and team management. Technical and Professional Requirements: Spark, Scala, Big Data
- ...Job Description: Spark Expertise · Expert proficiency in Spark · Ability to design and implement efficient data processing workflows · Experience with Spark SQL and DataFrames · Good exposure to Big Data architectures and a good understanding of the Big Data eco...
- ...Azure Databricks. Key Responsibilities: - Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark. - Perform data pre-processing (cleaning, transformation, deduplication, normalization, encoding, scaling) to ensure high-quality...
- ...and their data services (Redshift, BigQuery, Snowflake, Databricks, etc.). - Hands-on experience with data pipeline frameworks like Apache Spark, Airflow, Kafka, or Flink. - Expertise in relational and NoSQL databases, data lakes, and warehousing solutions. - Knowledge of CI/CD... (Full time)
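Several of the listings above ask for PySpark performance tuning "including partitioning." As a minimal sketch of the idea these roles refer to, the snippet below illustrates hash partitioning in plain Python (no Spark dependency): records are routed to a bucket by the non-negative hash of their key, which is conceptually how Spark's default HashPartitioner co-locates records sharing a key. All function names here are made up for illustration; this is not Spark's actual implementation.

```python
# Illustrative hash-partitioning sketch (plain Python, no Spark needed).
# Spark's HashPartitioner conceptually does: partition = hash(key) mod n,
# so all records with the same key land in the same partition and a later
# groupBy/reduceByKey on that key needs no extra cross-partition shuffle.

def partition_for(key, num_partitions: int) -> int:
    """Pick a partition index for a key (non-negative modulo of its hash)."""
    # In Python, % with a positive divisor always yields a non-negative result.
    return hash(key) % num_partitions

def hash_partition(records, num_partitions: int):
    """Group (key, value) pairs into num_partitions buckets by key hash."""
    buckets = [[] for _ in range(num_partitions)]
    for key, value in records:
        buckets[partition_for(key, num_partitions)].append((key, value))
    return buckets

# Usage: both "user_a" records end up in the same bucket, whichever it is.
data = [("user_a", 1), ("user_b", 2), ("user_a", 3), ("user_c", 4)]
buckets = hash_partition(data, num_partitions=4)
```

A skewed key distribution (one key far more frequent than the rest) fills one bucket disproportionately, which is the "data skew" problem many of these roles ask candidates to mitigate via salting or repartitioning.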