Average salary: Rs 1,641,358 per year
Rs 3 - 10 lakhs p.a.
...Engineers. Experience: 3 to 8 years. Location: Chennai / Pune / Mumbai / Bangalore / Hyderabad. Mandatory skills: Big Data | Hadoop | Java | Spark | SparkSQL | Hive. Qualification: B.Tech / B.E / MCA, computer science background, any specialisation...
- ...Job Description. Purpose of the role: to design, develop and improve software, utilising various engineering methodologies, that provides... ...and Drive – the operating manual for how we behave. Join us. Spark Java Developer at Barclays, which will be helping us build, maintain...
- ...Position Overview. Job Title: Spark/Python/Pentaho Developer. Location: Pune, India. Role Description: Spark/Python/Pentaho Developer working on a data integration project, mostly batch-oriented using Python/PySpark/Pentaho. What we'll offer you: As part... [Flexible hours]
Rs 5 - 7 lakhs p.a.
...Job Description. Role: Spark & Scala Developer. Experience: 5+ years. We expect the candidate to have: strong DataFrame and programming skills; experience with complex objects and Scala or Datasets; and good communication skills to express views...
- Hi Jobseeker, we are hiring a Python Spark Developer for our MNC client. Location: Pune, Hyderabad. Interview mode: virtual. Experience: 4 to 9 years. Notice period: immediate to 15 days only. We are looking for a Data Engineer with experience in Python, Spark... [Immediate start]
- ...Technical expertise: hands-on experience with SQL, Databricks, PySpark, Python, Azure Cloud, and Power BI (good to have). Design, develop, and optimize PySpark workloads. Ability to write scalable, modular, and reusable code in SQL, Python, and PySpark. Ability to communicate...
- ...transformation, and loading of data from a wide variety of data sources using Spark, EMR, Snowpark, Kafka and other big data technologies. Work... ..., and experience has an opportunity to be hired, belong, and develop at TripleLift. Through our People, Culture, and Community... [Remote job]
- ...build, and maintain robust ETL/ELT processes to ensure smooth and reliable data flow across systems. Data Modeling & Architecture: Develop scalable, efficient data models and architect storage solutions that support analytical and operational needs. Data Integration:... [Full time]
- Description. Role type: Full-time. About UsefulBI: UsefulBI is a leading AI-driven data solutions provider specializing in data engineering, cloud transformations, and AI-powered analytics for Fortune 500 companies. We help businesses turn complex data into actionable insights... [Full time]
- ...ensuring unparalleled customer experiences. As part of a team of developers, you will deliver the technology stack, using strong analytical and... ...HDFS, Hive, Yarn, MapReduce basics. Optional / good to have: Spark (PySpark/Scala), HBase, Kafka streaming interfaces. Strong... [Permanent employment] [Immediate start]
- ...streaming pipelines using Google Cloud Dataflow, Google Cloud Datastream, Airbyte, and orchestration tools (Airflow/Prefect/Dagster). # Develop and optimize ETL/ELT processes across AWS Postgres, Google FHIR Store, and Google BigQuery. # Build and maintain unified data... [Full time]
- ...junior team members, and collaborate with cross-functional teams to deliver high-quality data solutions. Key responsibilities: Design, develop, and maintain scalable and efficient data pipelines and ETL/ELT workflows using GCP services. Architect and implement data warehouse... [Hybrid work]
- ...strategic hubs: Spain, Brazil, the UK, Germany. The Telefónica Tech UK&I hub has an end-to-end portfolio of market leading services and develops integrated technology solutions to accelerate digital transformation through: Cloud, Data & AI (Adatis), Enterprise Applications (... [Full time] [Remote job]
- ...the Role? Full-time job working with Exusia's clients to design, develop and maintain large-scale data engineering solutions. The right... ...IT, MetadataHub, Databricks, and should be fluent with PySpark & Spark SQL. Experience working with multiple databases like Oracle/SQL... [Full time]
- ...data engineers, data scientists and research scientists to design, develop, and maintain data pipelines and infrastructure to support... ...with big data technologies and distributed processing such as Spark, Hadoop ecosystem, Kafka, etc. Experience in designing and maintaining...
- ...to join our high performing team. We aim to attract and further develop the best Data Science & Supply Chain talent. The role and its... ...Preferred Qualifications Big data frameworks: Apache Hadoop, Apache Spark, RapidMiner, Cloudera Experience in supply chain Familiar...
- ...requirements. Apply data modeling expertise: Utilize expertise in data modeling tools and techniques, including ERWin, MDM, and/or ETL, to develop logical and physical data models. Translate business needs: Translate business needs into logical and physical data models,... [Long term contract] [Hybrid work] [Worldwide]
- ...solutions. To be successful as a Senior Data Engineer, you should have experience with the key responsibilities: Design, develop, and optimize end-to-end data pipelines using Ab Initio and SQL. Build scalable batch and/or streaming data processing solutions... [Immediate start]
Rs 5 - 8 lakhs p.a.
...methodologies. Strong analytical and problem-solving skills. Effective communication and teamwork abilities. Responsibilities: Develop and maintain data pipelines and ETL processes to manage large-scale datasets. Collaborate to design and test data architectures to...
- ...Senior Data Engineer with strong expertise in SQL, DBT, Python, and modern cloud-based data ecosystems. The ideal candidate will design, develop, and maintain scalable data pipelines while ensuring data quality, reliability, and accessibility for analytics and business teams.... [Full time] [Hybrid work] [Work at office] [Flexible hours]
- ...Set up and test AS2/SFTP connectivity with the trading partners in Informatica MFT. Qualifications: Should have proficiency in developing Informatica PowerCenter mappings, especially with Unstructured Data Transformation. Should have experience in onboarding partners in Informatica...
- ...Role Description: Our Data Strategy team, part of DWS Global Technology, is looking for an experienced Data Engineer to further develop the data strategy program. This program is a strategic program for DWS, where all of the core data domains of Asset Management are... [Flexible hours]
- ...Contract. Notice period: immediate. Mandatory skills: Perl, Bash on Linux, and SQL. Job Description: Perl scripting: Develop and maintain Perl scripts for data loading, extraction, and archiving. Create tools to generate feeds and graphical reports for regulatory... [Contract work] [Immediate start]
- ...have experience with: hands-on experience in PySpark and strong knowledge of DataFrames, RDDs, and SparkSQL. Hands-on experience in developing, testing, and maintaining applications on AWS Cloud. A strong hold on the AWS data analytics technology stack (Glue, S3, Lambda, Lake...
- ...Location Name: Pune Corporate Office - Mantri. Job Purpose: To effectively design, develop, and manage data solutions using ETL technologies such as Azure Databricks (ADB) and Azure Data Factory (ADF), work with NoSQL databases like Cosmos DB, and apply object-oriented... [Long term contract] [Fixed term contract] [Work at office]
- About the company: Our client is a multinational IT services and consulting company headquartered in the USA, with revenues of 19.7 billion USD, a global workforce of 350,000, and listed on NASDAQ. It is one of the leading IT services firms globally, known for its work in digital... [Contract work] [Hybrid work] [Immediate start] [Remote job]
- ...About the position: A highly skilled data engineering professional with 6 to 8 years of experience in designing, developing, and maintaining robust data solutions. Proven ability to work across the full software development lifecycle, from requirement gathering to deployment... [Full time] [Hybrid work] [Work at office] [Flexible hours]
- ...Pune. Work mode: onsite during the initial training period, transitioning to hybrid post-training. What you will do: Design, develop, and maintain highly scalable data pipelines and applications using Python and PySpark. Build end-to-end data solutions from... [Hybrid work]
Rs 4 - 6 lakhs p.a.
Job responsibilities: Transition legacy rules from Python using the Polars library to SparkSQL. Create new rules using SparkSQL based on written requirements. Must-have skills: Understanding of the Polars library. Understanding of SparkSQL (this is more important...
- Overview: Connecting clients to markets – and talent to opportunity. With 4,300 employees and over 400,000 retail and institutional clients from more than 80 offices spread across five continents, we're a Fortune-100, Nasdaq-listed provider, connecting clients to the global...
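The Polars-to-SparkSQL transition described in the listing above amounts to re-expressing DataFrame-style rules as SQL text. A minimal sketch of what one such migrated rule might look like, assuming a hypothetical filter rule with invented column names and thresholds; sqlite3 stands in for a Spark SQL engine here only so the snippet runs without a Spark installation (simple filter predicates use the same syntax in both):

```python
import sqlite3

# Hypothetical legacy rule, as it might have looked in Polars:
#   df.filter((pl.col("amount") > 10_000) & (pl.col("status") == "OPEN"))
# The same rule re-expressed as SQL text. For a simple filter like this,
# SparkSQL accepts the identical predicate syntax.
RULE_SQL = """
SELECT id, amount, status
FROM transactions
WHERE amount > 10000 AND status = 'OPEN'
"""

def run_rule(rows):
    """Apply the migrated rule to in-memory rows (sqlite3 as a stand-in engine)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE transactions (id INTEGER, amount REAL, status TEXT)")
    con.executemany("INSERT INTO transactions VALUES (?, ?, ?)", rows)
    return con.execute(RULE_SQL).fetchall()

rows = [(1, 25000.0, "OPEN"), (2, 5000.0, "OPEN"), (3, 40000.0, "CLOSED")]
print(run_rule(rows))  # only row 1 satisfies both predicates
```

On Spark itself the same string would be run as `spark.sql(RULE_SQL)` against a registered `transactions` view; the point of the migration is that the rule logic lives in SQL rather than in Polars expression code.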
