Average salary: Rs 2,824,157 / year
- ...techniques to improve decision-making processes. Key Responsibilities: Build and optimize data pipelines using Databricks, SQL, and Apache Spark. Design and implement scalable and efficient data processing systems. Manage, monitor, and optimize data pipelines to ensure...
- ...help our team deliver data products, analytics, and models quickly and independently. The role is cross-functional and responsible for developing resilient data pipelines and infrastructure for evaluating and deploying data science models. The ideal candidate should... (Start today, Remote job)
- ...and support. The ideal candidate would have extensive experience developing and supporting a DW service comprised of multiple Data... ...Expertise in at least two scripting languages (Python, Scala, Spark, Unix or Java) is mandatory. ~ Proficiency in BI/Visualization... (Hybrid work, Work at office, Work from home)
- Job description Key Responsibilities: Data Modelling & Engineering: Design, implement, and maintain data models using Azure data modelling techniques to meet business needs. Create and optimize ETL pipelines, leveraging Azure Data Factory and Azure Databricks... (Remote job)
- ...Key Responsibilities: Design, develop, and maintain robust ETL/ELT pipelines to collect, clean, transform, and load data from diverse... ...S3, Glue) or Azure (Data Factory, Synapse). Familiarity with tools like Apache Spark, Kafka, Informatica, or dbt...
- ...Responsibilities: Design, develop, and maintain data infrastructure, databases, and data pipelines. Develop and implement ETL processes to extract, transform, and load data from various sources. Ensure data accuracy, quality, and accessibility, and resolve data-related...
- ...the Role: We are seeking a skilled and experienced Apache NiFi Developer/Data Engineer to join our team. The ideal candidate will have a... ...Experience with big data technologies (e.g., Hadoop, Spark). Knowledge of containerization and orchestration tools (e.g....
- ...quality, integrating advanced statistical and machine learning models, and driving measurable business outcomes. Architect and develop pipelines with robust validation, quality enforcement, and efficient workflows for model deployment. Partner with data scientists... (Start today, Worldwide)
- ...Experience in developing REST API services using one of the Scala frameworks. Ability to troubleshoot and optimize complex queries on the Spark platform. Expert in building and optimizing big data and ML pipelines, architectures, and data sets. Knowledge in modeling...
- Must be strong with Python for ML pipelines, specifically with PyTorch and scikit-learn. AWS is required, including building pipelines within it. Should have a background in LLMs (LangChain, agents, extensive prompt engineering). The 'strong additional requirements' below are required. ...
- ...Key Responsibilities: Develop and maintain ETL pipelines for multiple source systems, e.g., SAP, Korber WMS, OpSuite ePOS, and internal BI systems. Design and implement SSIS-based data workflows, including staging, data quality rules, and CDC logic for near real-time updates...
- Job description Qualitest India Private Limited is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure...
- ...~ Proficiency in SQL, Python, and modern data modeling practices ~ Hands-on experience with batch and streaming frameworks (e.g., Spark, Kafka, Kinesis, Hadoop) ~ Proven track record of building and maintaining real-time and batch data pipelines at scale ~ Deep understanding... (Long term contract, For contractors, Hybrid work)
- ...documentation, supporting audits, and collaborating with stakeholders to drive compliance initiatives. Key Responsibilities: Develop and maintain comprehensive documentation, including network topology diagrams, configuration reviews, and firewall standards... (Immediate start, Work from home, US shift)
- ...Key Responsibilities: Develop & Optimize ETL Pipelines: Build robust, scalable data pipelines using Azure Data Factory (ADF), Databricks... Big Data Processing: Utilize Azure Databricks, Apache Spark, and Delta Lake for distributed and large-scale data processing...
- ...role in shaping next-gen data systems. What you'll do: Design & develop robust data pipelines (ETL) using the latest Big Data tech. Optimize... ...(Product, Data, Design, ML). Work with modern tools: Apache Spark, Databricks, SQL, Python/Scala/Java. Experience in Scala is mandatory. What... (Permanent employment, Full time, Remote job, Shift work)
- ...Enterprise Data Hub Team. They will work with multiple stakeholders to develop data objects and pipelines on the AWS Cloud Platform. They will work... ...data processing and transformation workflows using Apache Spark and SQL to support analytics and reporting requirements. Build... (Immediate start)
- ...environments. Collaborate with threat researchers and engineers to develop and deploy effective ML solutions. Conduct model evaluations... ...GCP ~ Understanding of distributed computing frameworks such as Ray and Apache Spark ~ SQL (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Elasticsearch... (Remote job, Work at office, Local area, Flexible hours)
- ...candidate will be responsible for providing training on designing, developing, and delivering advanced training programs for professionals and... ...guidance on big data tools and platforms such as Hadoop, Spark, and cloud-based data solutions. Develop training materials,...
- ...Responsibilities: Analyze large datasets to extract actionable insights. Develop and implement predictive models using machine learning algorithms... ~ Knowledge of big data technologies such as Hadoop or Spark is a plus. ~ Excellent problem-solving skills and ability to...
- ...Experience in leading a team of engineers and a good attitude toward learning the domain and implementation. Strong exposure and expertise in Spark (primary), Scala/Java (Scala primary), Airflow orchestration, and AWS. Finalizing the scope of the system and delivering Big Data...
- ...machine-learning techniques and with sensitivity to the limitations of those techniques. Select, acquire, and integrate data for analysis. Develop data hypotheses and methods, train and evaluate analytics models, share insights and findings, and continue to iterate with... (Full time)
- ...community and make an impact supporting the machine learning models of some of the world's largest brands. Key Responsibilities: Develop complex, original question-and-answer pairs based on advanced topics in your area of expertise. Ensure questions involve multi-... (Hourly pay, Contract work, Local area, Remote job, Flexible hours)
- ...Design and implement robust, production-grade pipelines using Python, Spark SQL, and Airflow to process high-volume file-based datasets (CSV,... ...) where needed to support near-real-time data needs. Help develop and champion internal best practices around pipeline development... (Full time)
- ...queries (especially for Amazon Redshift). Experience with Apache Spark (preferably on AWS EMR) for big data processing. Proven... ...code (e.g., Terraform, CloudFormation). Key Responsibilities: Develop, maintain, and optimize complex SQL queries, primarily for Amazon...
- ...skilled Data Engineers to design, build, and manage robust data pipelines that power Agentic AI solutions on AWS. The role focuses on developing efficient ETL/ELT workflows, ensuring data quality, security, and scalability to support AI/ML model training, inference, and...
- ...Partnership analytics: Analyze partners' customer data and derive insights to steer the partnership business. Data modeling: Develop predictive models for insurance underwriting, fraud detection, and other use cases per requirements from business. Model deployment...
- ...Role Overview: We are looking for an experienced Data Architect to design, develop, and optimize enterprise data solutions on Microsoft Azure. Key Responsibilities: Lead architecture, design, and implementation of Data Warehouse and Data Lake solutions. Work across...
- ...leadership and people management. This includes contributing to architectural discussions, decisions, and execution, as well as managing and developing a team of Data Engineers (of different experience levels). What you can expect to do: Own the strategic direction... (Remote job, Long term contract, Full time, Local area)