Average salary: Rs 1,174,889 per year
- ...highly skilled Cloud Engineer with a specialization in Apache Spark and Databricks to join our dynamic team. The ideal candidate will... ...-native tools. Your primary responsibility will be to design, develop, and maintain scalable data pipelines using Spark and Databricks,... [Suggested]
- ...Roles and Responsibilities Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Databricks. Develop complex ETL processes using Python scripts and SQL queries to... [Suggested]
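As a rough illustration of the ADF-plus-Databricks pipeline work this listing describes, a minimal PySpark sketch might look like the following. The paths, columns, and table names are purely hypothetical, and Databricks/Delta defaults are assumed:

```python
# Minimal PySpark ETL sketch: extract from a landing zone, transform, load to a table.
# Paths, columns, and table names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adf-landing-to-databricks").getOrCreate()

# Extract: ADF typically drops raw files into a data-lake landing folder.
orders = spark.read.option("header", True).csv("/mnt/landing/orders/")

# Transform: basic typing, deduplication, and derived columns.
clean = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
)

# Load: publish a Delta table that downstream SQL queries can use.
clean.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_clean")
```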
- Our Culture: At LUMIQ, we strive to create a community of passionate data professionals who aim to transcend the usual corporate dynamics. We offer you the freedom to ideate, commit, and navigate your career trajectory at your own pace. Culture of ownership, empowerment... [Suggested]
- ...SQL: Expertise in writing stored procedures, complex queries, and optimizing performance. Power BI development: Experience developing complex Power BI reports. DAX: Complex DAX queries to meet business needs. Paginated Reports (Power BI): Experience in designing... [Suggested]
- B.E/B.Tech/M.Tech in computer science or a related field, batch of 2019, 2020, 2021, 2022, 2023, or 2024 only. Must be available for an apprenticeship tenure of minimum 1 year. Basic understanding of data modeling concepts. Exposure to SQL and the ability to write simple ... [Suggested] [Apprenticeship]
- Job Description: 7-12 years of experience with Big Data, PySpark, and Databricks, including ETL/ELT. Bachelor's/Master's degree in Computer Science and Engineering. Must be able to work independently on analytics engines like Big Data and PySpark. Good experience with... [Suggested] [Work at office]
- Hi All, We are hiring for the position of GCP Data Engineer with LTIMindtree. Location: Pan India Experience: 5+ years Notice Period: 30–60 days Required Skills: GCP (Mandatory), BigQuery, SQL Note: This role is specifically for candidates... [Suggested]
- Location: NCR Region, New Delhi About Us: At Sauce Labs, we empower the world's top enterprises - like Walmart, Bank of America, and Indeed - to deliver quality web and mobile applications at speed. Our industry-leading platform ensures continuous quality across the SDLC... [Suggested] [Remote job]
- ...About The Role Grade Level (for internal use): 09 The Role: Full Stack Java Developer Your mission is to design, create, and maintain our backend applications that power our strategic distribution platform. You possess a profound knowledge of core Java and have... [Suggested] [Side job] [Worldwide] [Flexible hours]
- Job Description We are looking for a Data Engineer with strong skills in AWS and data engineering tools to support and monitor our data pipelines and systems. The candidate will be responsible for monitoring, troubleshooting, and ensuring smooth execution of ETL/ELT workflows... [Suggested]
- Experience required: 3-6 years Skills: AWS Glue, PostgreSQL, Python, SRE, PySpark, Kafka, and AWS services like Lambda and Step Functions. Willing to work in shifts as required. Willing to learn SRE. Strong problem-solving skills. Good to have: knowledge of AWS infrastructure... [Suggested] [Shift work]
- ...build, and optimize our robust data infrastructure. You'll also develop scalable data pipelines, ensure data quality, and collaborate closely... ...processing platforms and frameworks. Examples include Hadoop, Spark, Hive, Presto, and Trino. Pipeline Orchestration & Messaging:... [Suggested] [Work at office] [Flexible hours]
- ...performance # Troubleshoot and resolve technical issues # Support production incident resolution Role responsibilities: # Develop and refactor Python and SQL code # Integrate REST APIs within MDM workflows # Utilize Azure Databricks and ADF for ETL tasks #... [Suggested]
- Job Description As a Data Engineer, you will serve as a key technical expert in the development and implementation of Nokia's Hardware Services data-lake solutions. You'll be responsible for designing and building cloud-native architectures, integrating big data platforms... [Suggested]
- ...Key Responsibilities: Design, develop, and deploy big data solutions using technologies such as Hadoop, Spark, Hive, HBase, Kafka, and other related tools. Develop and maintain data pipelines for data ingestion, transformation, and loading (ETL/ELT). Perform data... [Suggested]
- ...team of Data Engineers, IoT experts, Data Scientists, Front End Developers, Business Developers. This team leads the development of digital... ...experience with Data Warehousing on Azure Data Lake Experience with Spark on Databricks and Delta Lake tables Strong experience with...
- Coforge is hiring: Azure Data Engineer Location: Greater Noida / Pune / Hyderabad Experience/Domain: Insurance domain experience is mandatory Joining: Immediate joiners preferred Mandatory Skills: Azure Databricks (ADB), Python, SQL Good to... [Immediate start]
- ...enterprise-grade data platforms utilizing Databricks, Snowflake, Spark, and Data Lake/Warehouse solutions, supporting advanced analytics... ...quick bug resolution. Data Architecture & Engineering: Design, develop, and optimize data pipelines. Architect and implement data warehousing... [Hybrid work]
- Job Title: Data Engineer III Location: Delhi (Hybrid) Department: Data Engineering Reports To: Engineering Manager – Data Platform About the Role The Data Engineer III will lead the design and optimization of Baazi’s large-scale data platform. You’ll architect end...Hybrid work
- ...environment. Join our team as a Senior Data Engineer. You'll develop and maintain data pipelines for our innovative gaming products.... ...Qualifications: - Experience with data streaming tools like Kafka or Spark Streaming. Exposure to infrastructure-as-code tools like... [Full time] [Worldwide]
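The Kafka/Spark Streaming qualification above typically translates into Spark Structured Streaming jobs along these lines. This is only a sketch; the broker address, topic, event schema, and paths are assumed for illustration:

```python
# Minimal Spark Structured Streaming sketch: consume a Kafka topic and write to Delta.
# Broker, topic, schema, and paths are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("game-events-stream").getOrCreate()

# Illustrative event schema for JSON messages on the topic.
schema = StructType([
    StructField("player_id", StringType()),
    StructField("event_type", StringType()),
    StructField("score", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "game-events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append parsed events to a Delta path, with a checkpoint for exactly-once recovery.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/game-events")
    .outputMode("append")
    .start("/mnt/delta/game_events")
)
```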
- ...your chance to make a significant impact and join an organization that values innovation and excellence. What You'll Do: Build, develop, and maintain batch and streaming data pipelines. Implement scalable data transformations using Python scripts and orchestrate... [Full time]
- ...technologies. ~ Strong proficiency in Scala and experience with functional programming paradigms. ~ Hands-on experience with Databricks and Spark for data processing and analytics. ~ Proficient in Kafka for building real-time data streaming applications. ~ Solid understanding...
- Why Join Iris? Are you ready to do the best work of your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to grow in an award-winning culture that truly values your talent and ambitions? Join Iris Software — one of the fastest-growing IT services... [Full time]
- ...Common Skillsets: ~5+ years of experience in analytics, data engineering, and technologies like PySpark, Python, Spark, and SQL. ~ Strong experience in managing and transforming big data sets using PySpark, Spark-Scala, NumPy, and pandas. ~ Involved in presales activities...
- ...Design, build, and maintain - Implement and enforce data modeling standards - Build data pipelines using a Medallion Architecture - Develop and optimize composable data architectures - Develop and optimize data transformation processes - Write and optimize complex SQL... [Full time]
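The Medallion Architecture mentioned in the listing above is usually implemented as bronze (raw), silver (cleaned), and gold (aggregated) tables. A minimal sketch, with all schemas, columns, and table names hypothetical and the bronze/silver/gold schemas assumed to already exist, could look like this:

```python
# Minimal medallion-architecture sketch on Spark/Delta: bronze -> silver -> gold.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw ingestion, stored as-is with load metadata.
bronze = (
    spark.read.json("/mnt/raw/transactions/")
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.transactions")

# Silver: cleaned, typed, deduplicated records.
silver = (
    spark.table("bronze.transactions")
    .withColumn("amount", F.col("amount").cast("double"))
    .dropna(subset=["transaction_id"])
    .dropDuplicates(["transaction_id"])
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.transactions")

# Gold: business-level aggregate ready for reporting.
gold = (
    spark.table("silver.transactions")
    .groupBy("customer_id")
    .agg(F.sum("amount").alias("total_spend"), F.count("*").alias("txn_count"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.customer_spend")
```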
- ...Design Neo4j schemas for customer journeys and relationships Develop hybrid search (vector + graph + text) with performance tuning... ..., BigQuery Processing: Airflow/Prefect, Pandas/Polars, dbt, Spark ML Pipeline: vLLM, MLflow, Sentence Transformers, PyTorch, TensorFlow... [Hybrid work] [Flexible hours]
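A hybrid search of the kind described above can combine a Cypher text filter over the graph with embedding-based re-ranking. The sketch below is illustrative only; the connection details, node labels, and the assumption that each node stores a precomputed embedding property are all hypothetical:

```python
# Minimal hybrid-search sketch: Cypher text filter in Neo4j, re-ranked by embedding
# similarity. Connection details, labels, and properties are placeholders.
import numpy as np
from neo4j import GraphDatabase
from sentence_transformers import SentenceTransformer

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
model = SentenceTransformer("all-MiniLM-L6-v2")

def hybrid_search(query_text, top_k=5):
    q_vec = model.encode(query_text)
    with driver.session() as session:
        # Text/graph stage: candidate journey steps whose description mentions the query.
        records = session.run(
            "MATCH (c:Customer)-[:PERFORMED]->(s:JourneyStep) "
            "WHERE toLower(s.description) CONTAINS toLower($q) "
            "RETURN s.description AS text, s.embedding AS embedding",
            q=query_text,
        )
        candidates = [(r["text"], np.array(r["embedding"])) for r in records]
    # Vector stage: re-rank candidates by cosine similarity to the query embedding.
    scored = [
        (text, float(np.dot(q_vec, vec) / (np.linalg.norm(q_vec) * np.linalg.norm(vec))))
        for text, vec in candidates
    ]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:top_k]
```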
- ...build, and optimize scalable data and ML systems. The role involves developing data pipelines, deploying ML models, and collaborating across... ...integration. Optimize large-scale data processing systems (Spark, Pandas). Ensure data quality, pipeline reliability, and model... [Work at office] [Remote job]
- ...e.g., MLflow, Airflow, Databricks workflows). Technical Leadership: Act as a hands-on subject matter expert in Databricks, Python, Spark, and related technologies—driving adoption of best practices and mentoring other engineers. Optimize Performance: Ensure data pipelines... [Full time] [Hybrid work] [Remote job]
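Tracking pipeline runs with MLflow, as referenced above, often amounts to logging a few parameters and metrics per run. A minimal sketch follows; the experiment name, table name, and metric values are illustrative:

```python
# Minimal MLflow tracking sketch for a pipeline run.
import mlflow

mlflow.set_experiment("/Shared/pipeline-quality")

with mlflow.start_run(run_name="daily-etl"):
    rows_in, rows_out = 1_000_000, 998_750  # illustrative counts
    mlflow.log_param("source_table", "silver.transactions")
    mlflow.log_metric("rows_in", rows_in)
    mlflow.log_metric("rows_out", rows_out)
    mlflow.log_metric("drop_rate", 1 - rows_out / rows_in)
```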
- ...databases. Collaborate closely with cross-functional stakeholders to understand data requirements and deliver actionable insights. Develop, test, and maintain scalable data models and datasets to streamline analytics workflows. Write high-quality, efficient SQL and... [Full time] [Contract work] [Hybrid work]
- ...Engineering Expert to join our dynamic team. Responsibilities: Develop and optimize Streaming ETL applications to process large-scale... ...data processing ~ Solid understanding of distributed systems, Spark, Databases, System design, and big data processing framework ~...
