Average salary: Rs 1,174,889/year

  •  ...highly skilled  Cloud Engineer  with a specialization in  Apache Spark  and  Databricks  to join our dynamic team. The ideal candidate will...  ...-native tools. Your primary responsibility will be to design, develop, and maintain scalable data pipelines using Spark and Databricks,... 
    Delhi
    20 days ago
  • Hi All, We are hiring for the position of GCP Data Engineer with LTIMindtree. Location: Pan India Experience: 5+ years Notice Period: 30–60 days Required Skills: GCP (Mandatory), BigQuery, SQL Note: This role is specifically for candidates...
    Delhi
    9 days ago
  •  ...Roles and Responsibilities Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Databricks. Develop complex ETL processes using Python scripts and SQL queries to... 
    Delhi
    a month ago
  •  ...SQL: Expertise in writing stored procedures, complex queries, and optimizing performance. Power BI development: Experience in developing complex Power BI reports. DAX: Complex DAX queries to meet business needs. Paginated Reports (Power BI): Experience in designing... 
    Noida
    a month ago
  • Job Description 7–12 years of experience with Big Data, PySpark, and Databricks, including ETL/ELT. Bachelor's/Master's degree in Computer Science and Engineering. Must work independently on analytics engines like Big Data and PySpark. Good experience with...
    Work at office
    Noida
    23 days ago
  • B.E./B.Tech/M.Tech in computer science or a related field, batch of 2019, 2020, 2021, 2022, 2023, or 2024 only. Must be available for an apprenticeship tenure of minimum 1 year. Basic understanding of data modeling concepts. Exposure to SQL and the ability to write simple ...
    Apprenticeship
    Noida
    19 days ago
  •  ...About The Role Grade Level (for internal use): 09 The Role: Full Stack Java Developer. Your mission is to design, create, and maintain the backend applications that power our strategic distribution platform. You possess a profound knowledge of core Java and have... 
    Side job
    Worldwide
    Flexible hours
    Noida
    2 days ago
  • Job Description We are looking for a Data Engineer with strong skills in AWS and data engineering tools to support and monitor our data pipelines and systems. The candidate will be responsible for monitoring, troubleshooting, and ensuring smooth execution of ETL/ELT workflows...
    Noida
    2 days ago
  • Location: NCR Region, New Delhi About Us: At Sauce Labs, we empower the world's top enterprises - like Walmart, Bank of America, and Indeed - to deliver quality web and mobile applications at speed. Our industry-leading platform ensures continuous quality across the SDLC...
    Remote job
    Delhi
    1 day ago
  •  ...Common Skillsets: ~5+ years of experience in analytics, data engineering, and technologies like PySpark, Python, Spark, and SQL. ~ Strong experience in managing and transforming big data sets using PySpark, Spark-Scala, NumPy, and pandas. ~ Involved in presales activities... 
    Delhi
    20 days ago
  •  ...Design, build, and maintain - Implement and enforce data modeling standards - Build data pipelines using a Medallion Architecture - Develop and optimize composable data architectures - Develop and optimize data transformation processes - Write and optimize complex SQL... 
    Full time
    Delhi
    11 days ago
  •  ...Design Neo4j schemas for customer journeys and relationships Develop hybrid search (vector + graph + text) with performance tuning...  ..., BigQuery Processing: Airflow/Prefect, Pandas/Polars, dbt, Spark ML Pipeline: vLLM, MLflow, Sentence Transformers, PyTorch, TensorFlow... 
    Hybrid work
    Flexible hours
    Noida
    3 days ago
  •  ...databases. Collaborate closely with cross-functional stakeholders to understand data requirements and deliver actionable insights. Develop, test, and maintain scalable data models and datasets to streamline analytics workflows. Write high-quality, efficient SQL and... 
    Full time
    Contract work
    Hybrid work
    Delhi
    2 days ago
  •  ...e.g., MLflow, Airflow, Databricks workflows). Technical Leadership: Act as a hands-on subject matter expert in Databricks, Python, Spark, and related technologies—driving adoption of best practices and mentoring other engineers. Optimize Performance: Ensure data pipelines... 
    Full time
    Hybrid work
    Remote job
    Noida
    2 days ago
  •  ...build, and optimize scalable data and ML systems. The role involves developing data pipelines, deploying ML models, and collaborating across...  ...integration. Optimize large-scale data processing systems (Spark, Pandas). Ensure data quality, pipeline reliability, and model... 
    Work at office
    Remote job
    Noida
    1 day ago
  •  ...Type: Full Time Employment What You'll Do: Design and develop ETL/ELT pipelines using Azure Data Factory, Databricks, and other...  ...experience in Microsoft Azure Cloud, Azure Data Factory, Databricks, Spark, Scala/Python, ADO. ~5+ years of experience working with... 
    Full time
    Hybrid work
    Work at office
    Flexible hours
    Noida
    11 days ago
  •  ...for this engagement, ensuring high-quality professionals are aligned with the client’s expectations. Role : Senior Power BI Developer (8–12 Years Experience) Location: PAN India — Delhi / Bangalore / Chennai / Hyderabad (Hybrid) Employment Type: Full-Time / Contract... 
    Full time
    Contract work
    Hybrid work
    Flexible hours
    Delhi
    11 days ago
  •  ...Key Responsibilities: Design, develop, and maintain scalable ETL/ELT data pipelines on Snowflake. Integrate structured and unstructured data from various internal and external sources. Optimize data models and queries for improved performance and reliability. Collaborate... 
    Freelance
    Noida
    20 days ago
  •  ...critical business decisions and technological advancements. Responsibilities Data Engineering Build and Maintain Data Pipelines: Develop and manage scalable data pipelines using Azure Data Factory, Azure Synapse Analytics, or Azure Databricks to process large volumes of... 
    Hourly pay
    Contract work
    Local area
    Immediate start
    Remote job
    Delhi
    11 days ago
  •  ...digital government policies and programs. You will play a key role in developing robust, scalable, and efficient systems to manage large volumes...  ...big data processing frameworks such as Apache Hadoop, Apache Spark, and Apache Flink, as well as with machine learning and data... 
    Delhi
    2 days ago
  •  ...closely with talented colleagues and drive data excellence in a dynamic, growth-focused environment. Key Responsibilities • Design, develop, and maintain efficient and reliable data pipelines and ETL processes using Databricks. • Collaborate with data scientists,... 
    Full time
    Contract work
    Hybrid work
    Remote job
    Delhi
    2 days ago
  •  ...expertise. Immediate joiners with exceptional communication skills are highly encouraged to apply. Key Responsibilities # Design, develop, and optimize scalable data pipelines leveraging GCP services including BigQuery, Cloud Storage, Cloud Functions, and Cloud SQL. #... 
    Full time
    Contract work
    Hybrid work
    Immediate start
    Delhi
    2 days ago
  •  ...and accelerate integration test cycles across the climate data platform, ensuring faster, more reliable production releases. You'll develop and maintain Python-based workflows, robust validation frameworks, and automation pipelines while collaborating with cross-functional... 
    Full time
    Local area
    Remote job
    Delhi
    1 day ago
  •  ...team of Data Engineers, IoT experts, Data Scientists, Front End Developers, and Business Developers. This team leads the development of digital...  ...experience with Data Warehousing on Azure Data Lake Experience with Spark on Databricks and Delta Lake tables Strong experience with... 
    Delhi
    3 days ago
  •  ...build, and optimize our robust data infrastructure. You'll also develop scalable data pipelines, ensure data quality, and collaborate closely...  ...processing platforms and frameworks. Examples include Hadoop, Spark, Hive, Presto, and Trino. Pipeline Orchestration & Messaging:... 
    Work at office
    Flexible hours
    Delhi
    9 days ago
  • Job Description As a Data Engineer, you will serve as a key technical expert in the development and implementation of Nokia's Hardware Services data-lake solutions. You'll be responsible for designing and building cloud-native architectures, integrating big data platforms...
    Delhi
    12 days ago
  • Experience required: 3–6 years Skills: AWS Glue, PostgreSQL, Python, SRE, PySpark, Kafka, and AWS services like Lambda and Step Functions. Willing to work in shifts as required. Willing to learn SRE. Strong problem-solving skills. Good to have: knowledge of AWS infrastructure...
    Shift work
    Delhi
    a month ago
  • Job Title: Data Engineer III Location: Delhi (Hybrid) Department: Data Engineering Reports To: Engineering Manager – Data Platform About the Role The Data Engineer III will lead the design and optimization of Baazi’s large-scale data platform. You’ll architect end...
    Hybrid work
    Delhi
    3 days ago
  •  ...Key Responsibilities: Design, develop, and deploy big data solutions using technologies such as Hadoop, Spark, Hive, HBase, Kafka, and other related tools. Develop and maintain data pipelines for data ingestion, transformation, and loading (ETL/ELT). Perform data... 
    Delhi
    a month ago
  • Why Join Iris Are you ready to do the best work of your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to grow in an award-winning culture that truly values your talent and ambitions? Join Iris Software — one of the fastest-growing IT services...
    Full time
    Noida
    2 days ago