Average salary: Rs 2,824,157 per year
- Must be strong with Python for ML pipelines, specifically with PyTorch and scikit-learn. AWS is required, including building pipelines within it. Should have a background in LLMs (LangChain, agents, extensive prompt engineering). The 'strong additional requirements' below are required. ...
- ...and support. The ideal candidate would have extensive experience developing and supporting a DW service comprising multiple Data... ...Expertise in at least two scripting languages (Python, Scala, Spark, Unix, or Java) is mandatory. ~ Proficiency in BI/Visualization...
- ...help our team deliver data products, analytics, and models quickly and independently. The role is cross-functional and responsible for developing resilient data pipelines and infrastructure for evaluating and deploying data science models. The ideal candidate should...
- ...large data sets using distributed processing tools like Akka and Spark. Understanding and critically reviewing existing data... ...multiple products and features we have. 7+ years of experience in developing highly scalable Big Data pipelines. In-depth understanding of...
- ...fundamental portfolio managers (PMs), as well as enterprise teams including Ops, Risk, Trading, and Compliance. The role involves developing internal data products and analytics while optimizing data pipelines. Key Responsibilities: Web Scraping & Data Acquisition...
- ...Experience in developing REST API services using one of the Scala frameworks. Ability to troubleshoot and optimize complex queries on the Spark platform. Expert in building and optimizing big data and ML pipelines, architectures, and data sets. Knowledge in modeling...
- We are looking for a Senior Data Engineer with expertise in SQL, Python, AWS, and containerization to build and maintain a scalable data platform. The role involves working with web scraping, data pipelines, and DevOps practices while collaborating with cross-functional...
- ...Responsibilities: Design, develop, and maintain data infrastructure, databases, and data pipelines. Develop and implement ETL processes to extract, transform, and load data from various sources. Ensure data accuracy, quality, and accessibility, and resolve data-related...
- ...Develop scalable data collection, storage, and distribution platform to house data from vendors, research providers, exchanges, PBs, and web-scraping. Make data available to systematic & fundamental PMs, and enterprise functions: Ops, Risk, Trading, and Compliance. Develop...
- Job description Qualitest India Private Limited is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. # Liaising with coworkers and clients to elucidate the requirements for each task. # Conceptualizing and generating infrastructure...
- ...Responsibilities: Data Pipeline Architecture: Design, develop, and optimize end-to-end data pipelines to extract, transform, and load (ETL) data from various sources into our data warehouse. Ensure data quality, reliability, and performance throughout the pipeline. Data...
- Job Title: Data Engineer. Experience: min 7-15 years in SQL & C# (candidates with less than 7 years of experience, please refrain from applying). Location: permanent remote. Notice period: immediate joiners preferred (max 15 days). Work timings: Mon-Fri, 2:00 pm to 11:00 pm IST ...
- ...Job Responsibilities: Develop and maintain data pipelines for large-scale data processing. Work with streaming data technologies, including... ...experience in real-time/streaming data pipeline development using Apache Spark, StreamSets, Apache NiFi, or similar frameworks....
- ...the Role: We are seeking a skilled and experienced Apache NiFi Developer/Data Engineer to join our team. The ideal candidate will have a... ...: Experience with big data technologies (e.g., Hadoop, Spark). Knowledge of containerization and orchestration tools (e.g....
- ...quality, integrating advanced statistical and machine learning models, and driving measurable business outcomes. Architect and develop pipelines with robust validation, quality enforcement, and efficient workflows for model deployment. Partner with data scientists...
- Description: Senior Data Engineer (Spark & Lakehouse). Location: Remote, India (preferred: Bangalore/Pune). Experience: 6+ years. Domain: Data Engineering / Big Data. About the Role: We are seeking a Senior Data Engineer to drive the development of our next-generation Data Lakehouse...
- ...documentation, supporting audits, and collaborating with stakeholders to drive compliance initiatives. Key Responsibilities: Develop and maintain comprehensive documentation, including network topology diagrams, configuration reviews, and firewall standards...
- ...~ Proficiency in SQL, Python, and modern data modeling practices ~ Hands-on experience with batch and streaming frameworks (e.g., Spark, Kafka, Kinesis, Hadoop) ~ Proven track record of building and maintaining real-time and batch data pipelines at scale ~ Deep understanding...
- ...office applications, including regulatory reporting, settlements, and reconciliation. We are seeking an experienced ETL/Java Developer to join a mixed Luxoft/client team, focusing on new functionality development. This role provides an excellent opportunity to...
- ...Requirement: Architect, develop, and maintain scalable and secure data pipelines to process structured and unstructured data from diverse sources. Collaborate with data scientists, BI analysts and business stakeholders to understand data requirements. Optimize data...
- ...our data lakehouse environment. You will leverage your 10+ years of expertise to develop complex data pipelines, ensure data quality, and drive innovation using Databricks and Apache Spark. Responsibilities: ~ Design and implement scalable and robust data pipelines...
- ...environments. Collaborate with threat researchers and engineers to develop and deploy effective ML solutions. Conduct model evaluations... ...GCP ~ Understanding of distributed computing frameworks like Ray and Apache Spark ~ SQL (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Elasticsearch...
- ...candidate will be responsible for providing training on designing, developing, and delivering advanced training programs for professionals and... ...guidance on big data tools and platforms such as Hadoop, Spark, and cloud-based data solutions. Develop training materials,...
- ...Responsibilities: Analyze large datasets to extract actionable insights. Develop and implement predictive models using machine learning algorithms... ...~ Knowledge of big data technologies such as Hadoop or Spark is a plus. ~ Excellent problem-solving skills and ability to...
- ...machine-learning techniques and with sensitivity to the limitations of the techniques. Select, acquire, and integrate data for analysis. Develop data hypotheses and methods, train and evaluate analytics models, share insights and findings, and continue to iterate with additional...
- ...Experience in leading a team of engineers and a good attitude toward learning the domain and implementation. Strong exposure and expertise in Spark (primary), Scala/Java (Scala primary), Airflow orchestration, and AWS. Finalizing the scope of the system and delivering Big Data...
- ...skilled Data Engineers to design, build, and manage robust data pipelines that power Agentic AI solutions on AWS. The role focuses on developing efficient ETL/ELT workflows, ensuring data quality, security, and scalability to support AI/ML model training, inference, and...
- ...Description We are seeking a skilled Snowflake Developer / Data Engineer to join our team in India. The ideal candidate will be responsible for designing and implementing robust data solutions using Snowflake, ensuring high performance and reliability of our data infrastructure...
- ...Snowflake Data Engineer to join our dynamic team in Mumbai. The ideal candidate will have 10+ years of hands-on experience in designing, developing, and implementing robust data solutions on the Snowflake platform. This role demands a deep understanding of data warehousing...
- Description: Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue... ...data processing and transformation workflows using Apache Spark and SQL to support analytics and reporting requirements. Build...
