Average salary: Rs 803,255 per year
- ...proficiency in Python and Spark programming. • Pandas: Experience with data manipulation and analysis using Pandas. • Implementation... ...applications. • Should lead and guide a team of junior Python developers. • As a Sr Python Developer, responsibilities include coding,...Big Data
- ...unmatched innovation and growth. Founded in 1991, we are cloud and data engineering leaders providing solutions to Fortune 100 enterprises... ...United Airlines, and Verizon. Job Description Skills: Big Data, PySpark, Python, SQL, Data Engineering, and Cloud experience...Big Data
- ...Focus Areas: Build applications and solutions that process and analyze large-scale data. Develop data-driven applications and analytical tools. Implement business logic, algorithms, and backend services. Design and build APIs for secure and efficient data exchange...Big Data
- ...Job Title: Big Data Developer. Experience Level: 5+ years. Key Responsibilities: As a Big Data Developer, you will: Data Ingestion & Automation: Implement and automate data ingestion solutions using Hadoop, Sqoop, Hive, Impala, and Spark. Pipeline Management: Debug...Big Data
- ...Role - Developer - Big Data and Hadoop Ecosystems E2. Experience Range - 6 to 8 years. Location - Bangalore / Chennai / Hyderabad / Mumbai... ...Have: Expert in building distributed and highly parallelized big data processing pipelines that process massive amounts of data (both structured...Big Data
- ...Roles and Responsibilities: ~5+ years of experience working with cloud or on-prem Big Data/MPP analytics platforms (e.g. AWS Glue/EMR, Google BigQuery, Azure Data Warehouse, Databricks, Snowflake, Hortonworks/Cloudera clusters) ~3+ years of experience in writing advanced SQL...Big Data
- ...About the Role: We are looking for a skilled Big Data Engineer to design, build, and optimize scalable data pipelines and big data platforms... ...and real-time data systems. Key Responsibilities: Design, develop, and maintain large-scale data pipelines and ETL workflows...Big Data
- ...Job Description: We are looking for data engineers who have the right attitude, aptitude... ...*Roles & Responsibilities* Design, develop, and manage robust ETL pipelines using... ...Minimum 5+ years of hands-on experience in Big Data / Data Engineering. Strong expertise...Big Data · Immediate start
- ...brainstorming sessions to select, integrate, and maintain ~ Big Data tools and frameworks required to solve Big Data problems at scale... ...products and features we have. ~8+ years of experience in developing highly scalable Big Data pipelines. ~ Hands-on experience in team leading...Big Data
- ...years of relevant work experience as an AWS Developer. Good knowledge of Hadoop Architecture... ...and its ecosystem, and experience in data storage (HDFS) and writing queries in HQL or Spark... ...PySpark. Strong hands-on experience in AWS Big Data tools: EMR, Glue, Athena, MSK/Kinesis...Big Data
- ...Job Description: Develop and maintain ETL processes using Python and PySpark. Design and implement data pipelines and workflows. Optimize and fine-tune data processing... ...PySpark, ETL, Data Pipeline, Big Data, AWS, GCP, Azure, Data Warehousing...Big Data
- ...Key Responsibilities: Data Engineering & Modeling: Implement and validate predictive models and maintain statistical models with a focus on big data. Develop, test, and deploy data pipelines for cleansing, integration, and transformation. Design and implement enterprise...Big Data
- ...Job Title: Data Platform Developer. Key Responsibilities: As a Data Platform Developer, you will: Solution Design & Development: Design, build... ...System, HDFS File Types, and HDFS compression codecs. Big Data Ecosystem Management: Apply in-depth knowledge of various...Big Data · Full time
- ...Key Deliverables: Build and maintain scalable data pipelines using Databricks and PySpark. Enable unified data access and governance... ...unstructured data across platforms. Optimize performance and quality of big data workflows. Collaborate in SAFe Agile teams to deliver...Big Data · Hybrid work
- ...pipelines for large and complex datasets. Design and implement data models and integration solutions across cloud platforms. Ensure... ...data requirements into robust technical architectures. Optimize big data performance using Spark, SQL, and cloud orchestration. Support...Big Data
- ...Key Responsibilities : Design, develop, and optimize data pipelines using Azure Databricks. Ensure data reliability, scalability, and quality... ...Skills Required : Expertise in Azure Databricks, Spark, and big data tools. Strong programming skills (e.g., Python, Scala...Big Data
- We're looking for Data Engineers! Salary: INR 12,00,000 - 15,00,000 / year. Responsibilities: Develop/Convert the database (Hadoop to GCP) of specific objects (tables, views... ...statistical models with a focus on big data, incorporating a variety of statistical...Big Data
- ...TCS is hiring for Data Engineers! Role: Data Engineer. Exp: 4-6 years. Location: Navi Mumbai. Skills: Python, Hadoop HDFS... ...to ensure compliance with regulations. Proven experience in Big Data technologies like HDFS, PySpark, Apache Airflow, Apache Druid...Big Data
- ...Key Responsibilities: Develop and maintain data pipelines for batch and stream processing using Informatica PowerCenter or cloud ETL/ELT tools.... ...statistical models, incorporating machine learning techniques for big data applications. Design and implement enterprise search...Big Data
- ...Role Overview: As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems. Key Responsibilities: Build scalable batch and real-time...Big Data
- ...We are seeking an experienced Data Engineer to design, build, and optimize scalable data pipelines... ...involves working with cloud platforms, big data frameworks, and streaming technologies... ...services and other modern data tools. Develop batch and stream processing workflows with...Big Data
- ...GCP Big Data Engineer. Location - Bangalore & Gurgaon. Leadership role with 5-10 yrs experience. Skill Set (MUST): GCP, SQL, PySpark, ETL knowledge. Mandatory Skills: GCP Storage, GCP BigQuery, GCP Dataproc, GCP Cloud Composer, GCP DMS, Apache Airflow...Big Data
- We are looking for Data Engineers! Salary: INR 10,00,000 - 12,00,000 / year. Responsibilities: You'll contribute to data gathering... ...well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning...Big Data
- ...industry standards, and make evaluative judgements. Recommend and develop security measures in post-implementation analysis of business... ...assigned as required. Backend Resource: Strong expertise in Big Data technologies (Spark, Hadoop, Hive, Impala, Kafka, Scala,...Big Data · Full time
- ...Step Functions etc. Experience in programming in Java and Python. Experience performing data analysis (NOT DATA SCIENCE) on AWS platforms. Nice to have: Experience in Big Data technologies (Teradata, Snowflake, Spark, Redshift, Kafka, etc.). Experience with data...Big Data
- 5–7 years of experience in data engineering and data governance within industrial domains (... ...for data pipeline automation. Design and develop data pipelines using Azure Data Factory,... ..., and data governance standards. Work with Big Data technologies such as Apache Spark and...Big Data
- ...and technical expertise to drive innovation and adoption of new technology. Your role and responsibilities As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines...Big Data
- ...well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning... ...cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results...Big Data
- ...test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge... ...Technical Requirements: ~ Primary skills: Technology - Big Data - Data Processing - Spark; Technology - Functional Programming - Scala...Big Data
- ...Proficiency in PySpark for distributed data processing and transformation. Solid experience... ...Experience with Data Warehousing and Big Data technologies, specifically within AWS.... ...and storage processes. ETL Development: Develop and maintain Extract, Transform, and Load (...Big Data
