Average salary: Rs 1,176,922 / year
- ...Azure data engineering. Strong expertise in building ETL/ELT pipelines using Azure Data Factory. Proficient in Azure Databricks and Spark programming (Python/Scala). Experience with Delta Lake architecture and optimization. Strong knowledge of Apache Kafka for... Full time | Hybrid work | Immediate start
- ...be expected to demonstrate the Barclays Mindset (Empower, Challenge and Drive), the operating manual for how we behave. Join us as a Spark Java Developer at Barclays, helping us build, maintain and support all first-line-of-controls applications. The...
- ...Join us as a Big Data Quality Analyst (Spark, Scala, AWS) at Barclays, where you will spearhead the evolution of our infrastructure and deployment pipelines, driving innovation and operational excellence. You will harness cutting-edge technology to build and manage robust, scalable... Permanent employment | Immediate start
Rs 8 - 10 lakhs p.a.
...production rollout and infrastructure configuration. Demonstrable experience of successfully delivering big data projects using Kafka and Spark. Exposure to NoSQL databases such as Cassandra, HBase, DynamoDB and Elasticsearch. Experience working with PCI data... Long term contract
- Description: Title: Big Data Scala Spark Developer - Hadoop. Experience: 4 to 8 years. Location: Pune, Hyderabad. Notice period: immediate to 30 days. Job description: 4 years of experience in Scala, Spark, Big Data, Hadoop and Hive. Must have good technical experience and should be able to provide... Immediate start
Rs 3 - 10 lakhs p.a.
...Engineers. Experience: 3 to 8 years. Location: Chennai / Pune / Mumbai / Bangalore / Hyderabad. Mandatory skills: Big Data | Hadoop | Java | Spark | SparkSQL | Hive. Qualification: B.Tech / B.E / MCA / Computer Science background, any specialization...
- ...orchestrating ELT processes targeting data. Hands-on experience with at least one streaming or batch processing framework, such as Flink or Spark. Hands-on experience with containerization platforms such as Docker and container orchestration tools like Kubernetes... Full time | Worldwide
Rs 5 - 10 lakhs p.a.
...frameworks. Develop robust error handling and exception management mechanisms to ensure data integrity and system resilience within Spark jobs. Optimize PySpark jobs for performance, including partitioning, caching, and tuning of Spark configurations. Data analysis...
Rs 20 - 34 lakhs p.a.
...Job Title: Scala Developer with Spark & Azure. Experience: 6 to 10 years. Band: B3. Location: Pune, Kharadi (3 days work from office). Interview mode: virtual. Notice period: maximum 30 days. Job overview: We are looking for an experienced Scala Developer... Work at office | Relocation
- ...Spark/Python/Pentaho Developer, AS. Position overview: Job title: Spark/Python/Pentaho Developer. Location: Pune, India. Role description: Spark/Python/Pentaho Developer, needed to work on a data integration project, mostly batch oriented, using Python/PySpark... Flexible hours
Rs 5 - 7 lakhs p.a.
...Job Description. Role: Spark & Scala Developer. Experience: 5+ years. Our expectation is that the candidate has strong DataFrame and programming skills, experience with complex objects and Scala or Datasets, and decent communication skills to express views...
- Hi Jobseeker, we are hiring a Python Spark Developer for our MNC client. Location: Pune, Hyderabad. Interview mode: virtual. Experience: 4 to 9 years. Notice period: immediate to 15 days only. We are looking for a Data Engineer with experience in Python and Spark... Immediate start
- Description: We are seeking a highly skilled Senior Data Engineer with 3+ years of experience to join our dynamic team. The ideal candidate will have a strong background in data engineering technologies, especially SQL, Databricks, PySpark and Azure services, and client-facing experience...
- ...or equivalent. 10+ years of total experience; 4+ years of relevant experience in the design and architecture of Big Data solutions using Spark; 3+ years of experience working with engineering resources on innovation; 4+ years of experience in understanding Big Data events... Remote job
- ...Hadoop, HDFS, cluster management; 2. Hive, Pig and MapReduce, and the Hadoop ecosystem framework; 3. HBase, Talend, NoSQL databases; 4. Apache Spark or other streaming Big Data processing (preferred); 5. Java or Big Data technologies (a plus). Competitive salary and benefits package... Flexible hours
- ...vector stores, embedding pipelines, and LLM pattern integration into enterprise systems. Experience with streaming frameworks (Flink, Spark Streaming), graph technologies, and real-time analytics. Knowledge of model governance, cloud security best practices, and regulatory... Long term contract | Permanent employment | Temporary work | Hybrid work
- ...India. Experience: 6+ years. About the role: We are seeking a highly skilled Big Data Developer with strong expertise in Spark and Scala to join our dynamic team. The ideal candidate will have hands-on experience with cloud platforms such as AWS, Azure, or GCP... Full time | Flexible hours
- ...processing services (AWS Glue, AWS Catalog, AWS Kinesis, Lake Formation). Extensive experience with distributed processing engines such as Spark, including optimization strategies, cluster-level scaling, and operational maintenance of complex data environments. Practical... Long term contract | Full time | Temporary work | Flexible hours
- ...in building scalable, reliable data pipelines in a Lakehouse/Medallion architecture. The ideal candidate will have deep knowledge of Spark/PySpark, Terraform, and modern data engineering patterns. Key responsibilities: Build and operate data pipelines using PySpark/Spark...
- ...data collection techniques and DBMS principles, tools, and platforms. Hands-on experience with big data technologies (such as Hadoop, Spark, etc.). Ability to create insightful data visualizations for analysis and reporting. Practical understanding of machine learning... Permanent employment | Full time | Flexible hours
- ...Responsibilities: Design and develop end-to-end AI/ML solutions using Azure Databricks. Build and optimize data pipelines using PySpark and Spark SQL. Develop, train, and deploy machine learning models in production. Perform data preprocessing, feature engineering, and model...
- ...Strong knowledge of ETL/Ab Initio, SQL, Python, and other programming languages. Experience with big data technologies (e.g., Hadoop, Spark) and cloud services (e.g., AWS). Some other highly valued skills include: the ability to analyze complex data sets and develop... Permanent employment
- ...working with data warehouses such as Redshift, BigQuery, Snowflake, or similar. Experience with open-source data architectures: Spark, Hive, Trino/Presto or similar. Excellent software engineering and scripting knowledge. Strong communication skills (both in...
- ...Data modeling, SQL query profiling, and data warehousing skills are a must. Experience in distributed data processing engines like Apache Spark, Apache Flink, Dataflow/Apache Beam, etc. Knowledge of workflow orchestrators like Airflow, Dagster, etc. is a plus. Data visualization...
- ...Exposure to MLOps platforms (e.g., SageMaker, Vertex AI, Kubeflow). Familiarity with distributed data processing frameworks (e.g., Spark) and orchestration tools (e.g., Airflow). Contributions to research papers, blogs, or open-source projects in ML/NLP/Generative AI... Full time
- ...Allocation schedule within SAP. Ex-proof integration: incorporate explosion-proof design principles (flameproof enclosures, non-sparking materials) into modularised hoist architectures. Collaboration: work closely with the Order Management and Product Platform teams... Full time
- ...projects. A collaborative environment that values technical excellence and continuous improvement. Interested? If this role sparked your interest, please submit your application by 30-Apr-26 at the latest on our career site. We will contact you after the application... Full time
- ...Service Delivery Team. The possibility to work in a leading crane-building company with leading technology. Interested? If this role sparked your interest, please submit your application as soon as possible on our career site. Please note that the position will be filled when we find a... Full time
- ...and delivering cloud-native, event-driven, microservices-aligned architectures using AWS (Lambda, S3, Glue, Athena, CloudWatch), Hadoop/Spark, Kafka, and related data engineering tooling. Building low-latency data pipelines, real-time analytics engines, and enterprise data...
- ...multiple source databases. Data migration experience from legacy databases or other cloud ecosystems to Microsoft Fabric. Experience in Spark/PySpark, with some hands-on Databricks experience, is highly desired. Nice-to-have skills: provisioning and managing Fabric environment... Full time | Remote job | Work from home
