Average salary: Rs 2,824,157 / year

  • Must be strong with Python for ML pipelines, specifically with PyTorch and scikit-learn. AWS is required, including building pipelines within it. Should have a background in LLMs (LangChain, agents, extensive prompt engineering). The 'strong additional requirements' below are required. ...
    work from home
    a month ago
  •  ...and support. The ideal candidate would have extensive experience developing and supporting a DW service composed of multiple Data...  ...Expertise in at least two scripting languages (Python, Scala, Spark, Unix, or Java) is mandatory. ~ Proficiency in BI/Visualization... 
    Hybrid work
    Work at office
    Work from home
    5 days ago
  •  ...help our team deliver data products, analytics, and models quickly and independently. The role is cross-functional and responsible for developing resilient data pipelines and infrastructure for evaluating and deploying data science models. The ideal candidate should... 
    Start today
    Remote job
    work from home
    2 days ago
  •  ...large data sets using distributed processing tools like Akka and Spark. Understanding and critically reviewing existing data...  ...multiple products and features we have. 7+ years of experience in developing highly scalable Big Data pipelines. In-depth understanding of... 
    work from home
    8 days ago
  •  ...fundamental portfolio managers (PMs), as well as enterprise teams including Ops, Risk, Trading, and Compliance. The role involves developing internal data products and analytics while optimizing data pipelines. Key Responsibilities: Web Scraping & Data Acquisition... 
    work from home
    7 days ago
  •  ...Experience in developing REST API services using one of the Scala frameworks. Ability to troubleshoot and optimize complex queries on the Spark platform. Expert in building and optimizing big data, data, and ML pipelines, architectures, and data sets. Knowledge in modeling... 
    work from home
    more than 2 months ago
  • We are looking for a Senior Data Engineer with expertise in SQL, Python, AWS, and containerization to build and maintain a scalable data platform. The role involves working with web scraping, data pipelines, and DevOps practices while collaborating with cross-functional...
    work from home
    7 days ago
  •  ...Responsibilities: Design, develop, and maintain data infrastructure, databases, and data pipelines Develop and implement ETL processes to extract, transform, and load data from various sources Ensure data accuracy, quality, and accessibility, and resolve data-related... 
    work from home
    more than 2 months ago
  •  ...Develop scalable data collection, storage, and distribution platform to house data from vendors, research providers, exchanges, PBs, and web-scraping. Make data available to systematic & fundamental PMs, and enterprise functions: Ops, Risk, Trading, and Compliance. Develop... 
    work from home
    7 days ago
  • Job description: Qualitest India Private Limited is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. # Liaising with coworkers and clients to elucidate the requirements for each task. # Conceptualizing and generating infrastructure...
    work from home
    a month ago
  •  ...Responsibilities: Data Pipeline Architecture: Design, develop, and optimize end-to-end data pipelines to extract, transform, and load (ETL) data from various sources into our data warehouse. Ensure data quality, reliability, and performance throughout the pipeline. Data... 
    work from home
    9 days ago
  • Job Title: Data Engineer. Experience: 7-15 years in SQL & C# (candidates with fewer than 7 years of experience, please refrain from applying). Location: Permanent remote. Notice period: Immediate joiners preferred (max 15 days). Work timings: Mon-Fri, 2:00 pm to 11:00 pm IST ...
    Permanent employment
    Immediate start
    Remote job
    Working Monday to Friday
    work from home
    18 days ago
  •  ...Job Responsibilities: Develop and maintain data pipelines for large-scale data processing. Work with streaming data technologies, including...  ...experience in real-time/streaming data pipeline development using Apache Spark, StreamSets, Apache NiFi, or similar frameworks.... 
    work from home
    7 days ago
  •  ...the Role: We are seeking a skilled and experienced Apache NiFi Developer/Data Engineer to join our team. The ideal candidate will have a...  ...Experience with big data technologies (e.g., Hadoop, Spark). Knowledge of containerization and orchestration tools (e.g.... 
    work from home
    a month ago
  •  ...quality, integrating advanced statistical and machine learning models, and driving measurable business outcomes. Architect and develop pipelines with robust validation, quality enforcement, and efficient workflows for model deployment. Partner with data scientists... 
    Start today
    Worldwide
    work from home
    1 day ago
  • Description: Senior Data Engineer (Spark & Lakehouse). Location: Remote, India (Preferred: Bangalore/Pune). Experience: 6+ Years. Domain: Data Engineering / Big Data. About the Role: We are seeking a Senior Data Engineer to drive the development of our next-generation Data Lakehouse... 
    Remote job

    Zorba Consulting India Pvt. Ltd.

    work from home
    3 hours ago
  •  ...documentation, supporting audits, and collaborating with stakeholders to drive compliance initiatives. Key Responsibilities: Develop and maintain comprehensive documentation, including network topology diagrams, configuration reviews, and firewall standards... 
    Immediate start
    Work from home
    US shift
    a month ago
  •  ...~ Proficiency in SQL, Python, and modern data modeling practices ~ Hands-on experience with batch and streaming frameworks (e.g., Spark, Kafka, Kinesis, Hadoop) ~ Proven track record of building and maintaining real-time and batch data pipelines at scale ~ Deep understanding... 
    Long term contract
    For contractors
    Hybrid work
    work from home
    1 day ago
  •  ...office applications, including regulatory reporting, settlements, and reconciliation. We are seeking an experienced ETL/Java Developer to join a mixed Luxoft/client team, focusing on new functionality development. This role provides an excellent opportunity to... 
    Remote job
    Work from home
    Flexible hours
    7 days ago
  •  ...Requirement: Architect, develop, and maintain scalable and secure data pipelines to process structured and unstructured data from diverse sources. Collaborate with data scientists, BI analysts and business stakeholders to understand data requirements. Optimize data... 
    work from home
    16 days ago
  •  ...our data lakehouse environment. You will leverage your 10+ years of expertise to develop complex data pipelines, ensure data quality, and drive innovation using Databricks and Apache Spark. Responsibilities: ~ Design and implement scalable and robust data pipelines... 
    work from home
    7 days ago
  •  ...environments Collaborate with threat researchers and engineers to develop and deploy effective ML solutions Conduct model evaluations...  ...GCP ~ Understanding of distributed computing frameworks like Ray and Apache Spark ~ SQL (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Elasticsearch... 
    Remote job
    Work at office
    Local area
    Flexible hours
    work from home
    5 days ago
  •  ...candidate will be responsible for providing training on designing, developing, and delivering advanced training programs for professionals and...  ...guidance on big data tools and platforms such as Hadoop, Spark, and cloud-based data solutions. Develop training materials,... 
    work from home
    a month ago
  •  ...Responsibilities Analyze large datasets to extract actionable insights. Develop and implement predictive models using machine learning algorithms...  .... ~ Knowledge of big data technologies such as Hadoop or Spark is a plus. ~ Excellent problem-solving skills and ability to... 
    work from home
    a month ago
  •  ...machine-learning techniques and with sensitivity to the limitations of the techniques. Select, acquire, and integrate data for analysis. Develop data hypotheses and methods, train and evaluate analytics models, share insights and findings, and continue to iterate with additional... 
    Full time
    Relocation package
    work from home
    1 day ago
  •  ...Experience in leading a team of engineers and a good attitude toward learning the domain and implementation. Strong exposure and expertise in Spark (primary), Scala/Java (Scala primary), Airflow orchestration, and AWS. Finalizing the scope of the system and delivering Big Data... 

    NucleusTeq Consulting Pvt. Ltd.

    work from home
    21 hours ago
  •  ...skilled Data Engineers to design, build, and manage robust data pipelines that power Agentic AI solutions on AWS. The role focuses on developing efficient ETL/ELT workflows, ensuring data quality, security, and scalability to support AI/ML model training, inference, and... 
    work from home
    a month ago
  •  ...Description We are seeking a skilled Snowflake Developer / Data Engineer to join our team in India. The ideal candidate will be responsible for designing and implementing robust data solutions using Snowflake, ensuring high performance and reliability of our data infrastructure... 
    work from home
    1 day ago
  •  ...Snowflake Data Engineer to join our dynamic team in Mumbai. The ideal candidate will have 10+ years of hands-on experience in designing, developing, and implementing robust data solutions on the Snowflake platform. This role demands a deep understanding of data warehousing... 
    Immediate start
    work from home
    7 days ago
  • Description: - Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue...  ...data processing and transformation workflows using Apache Spark and SQL to support analytics and reporting requirements. - Build... 

    Cyan360

    work from home
    12 days ago