Average salary: Rs 2,824,157 / year

  •  ...extraction, transformation, and loading (ETL) processes using Python and SQL. Feature Engineering: Collaborate with data scientists to develop and optimize features that enhance model performance and drive business insights. AWS Management: Utilize AWS services (such as S3... 
    Suggested
    Remote job

    FairMoney

    work from home
    more than 2 months ago
  •  ...Job Title: Principal Clinical Data Science Programmer [Elluminate] Location: Bangalore/Remote What you will be doing: Develop, implement and maintain data review and data cleaning capabilities for sponsor-led Phase I-IV clinical trials using sponsor technologies... 
    Suggested
    Remote job
    Permanent employment
    Flexible hours

    ICON

    work from home
    more than 2 months ago
  • $ 500 p.a.

     ...replace existing third-party solutions. Pipeline Development: Develop and optimize data pipelines for event data ingestion and...  ...Evaluate and integrate open-source technologies like Apache Druid, Spark, or similar tools based on project requirements and performance needs... 
    Suggested
    Remote job
    Full time
    Work from home
    Home office
    Flexible hours

    Zingtree

    work from home
    more than 2 months ago
  •  ...database technologies (e.g., PostgreSQL, SQL Server, MySQL). ~ Hands-on experience with data processing frameworks (e.g., Apache Spark, Apache Kafka). ~ Familiarity with cloud services (e.g., AWS, Azure, Google Cloud) and data warehousing solutions. ~ Strong programming... 
    Suggested
    Remote job
    Full time

    Infystrat

    work from home
    more than 2 months ago
  • The Role We're looking for a Junior Data Engineer to support client data integrations and master data management. You'll unify data across systems like Stripe, QuickBooks, HubSpot, and Gusto using Syncari — an MDM (Master Data Management) platform — ensuring clean, consistent...
    Suggested
    Remote job
    Full time
    Flexible hours

    FinStrat Management

    work from home
    5 days ago
  • Data Center Engineer. Start date: immediately. CTC (annual): competitive salary. Experience: ...
    Suggested
    Remote job
    Immediate start

    Tech People 247

    work from home
    a month ago
  •  ...will work closely with data scientists, analysts, and application developers to leverage the power of graph databases for complex data...  ...Platforms: AWS, Azure, or GCP Big Data Technologies: Hadoop, Spark, Kafka Strong understanding of data modelling, ETL pipelines,... 
    Suggested
    Remote job
    Immediate start

    IDFC FIRST Bank

    work from home
    a month ago
  •  ...Data Engineer with 4-5 years of hands-on experience in Big Data to develop and maintain scalable data processing solutions on the Hadoop...  ...Develop and optimize large-scale data processing jobs using Apache Spark.      Manage and process structured data in HDFS, Hive.      Ensure... 
    Suggested
    Remote job
    Contract work
    Immediate start

    Trigyn Technologies

    work from home
    more than 2 months ago
  •  ...get your foot in the door with one of the most prominent players in the AI/LLM space today. We're primarily seeking JavaScript/React developers with 3+ years of experience to train large AI language models, helping cutting-edge generative AI models write better frontend code.... 
    Suggested
    Remote job
    Hourly pay
    Weekly pay
    40 hours per week
    Long term contract
    Full time
    Contract work
    Flexible hours

    G2i Inc

    work from home
    more than 2 months ago
  • Who we are: Motive empowers the people who run physical operations with tools to make their work safer, more productive, and more profitable. For the first time ever, safety, operations and finance teams can manage their drivers, vehicles, equipment, and fleet related ...
    Suggested
    Remote job
    Full time

    Motive

    work from home
    more than 2 months ago
  •  ..., Microsoft Azure, or Amazon Web Services is preferred, with GCP experience being a strong advantage. Key Responsibilities: - Design, develop, and maintain scalable data pipelines and ETL workflows. - Write optimized and efficient SQL queries for data transformation and analysis... 
    Suggested

    Ixceed Solutions

    work from home
    23 days ago
  •  ...platform into a production-grade DataOps ecosystem. Key Responsibilities: - Design, build, optimise, and maintain scalable data pipelines - Develop and manage ELT pipelines, orchestration, and automation using Python - Capture and onboard metadata into enterprise data catalogues... 
    Suggested
    Immediate start

    Evnek

    work from home
    9 days ago
  •  ...Work with PostgreSQL/PostGIS for spatial data storage and querying Use QGIS / ArcGIS for spatial analysis and visualization Develop and maintain workflows using FME (Feature Manipulation Engine) Collaborate with cross-functional teams to ensure data accuracy and... 
    Suggested
    Remote job
    Immediate start

    MyData Insights

    work from home
    17 days ago
  •  ...to the Data Engineering landscape Implement the Data Quality Framework in the two pilot ETLs Skills: Strong Data Engineering (Spark, Databricks, Synapse, etc.) Intermediate / Advanced Databricks or similar (Azure Synapse, etc.) experience Databricks experience... 
    Suggested
    Hybrid work
    Work at office
    Remote job
    Flexible hours

    NTT Data

    work from home
    22 days ago
  • Rs 12 - 18 lakhs p.a.

    Responsibilities Create and manage ETL workflows using Apache Airflow Write efficient SQL queries for data processing Build automated data pipelines using Python (Pandas, PySpark) Deploy and maintain pipelines on cloud platforms Monitor, fix, and improve pipeline...
    Suggested

    Antara sarkar (Proprietor of Amit consultants)

    work from home
    a month ago
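    Several listings above, including this Airflow/Pandas role, describe extract-transform-load (ETL) responsibilities. A minimal pure-Python sketch of that pattern follows; the function names and sample records are illustrative assumptions, not any employer's actual stack:

    ```python
    # Minimal sketch of the extract-transform-load pattern named in the
    # listing above. All names and sample data here are illustrative
    # assumptions, not part of any employer's real pipeline.
    import json

    def extract(raw_rows):
        """Parse raw JSON strings into dicts (the 'extract' step)."""
        return [json.loads(r) for r in raw_rows]

    def transform(rows):
        """Normalize salaries to integers and drop incomplete records."""
        return [
            {"name": row["name"], "salary": int(row["salary"])}
            for row in rows
            if row.get("name") and row.get("salary") is not None
        ]

    def load(rows, sink):
        """Append cleaned rows to an in-memory sink (a stand-in for a warehouse table)."""
        sink.extend(rows)
        return len(rows)

    raw = ['{"name": "a", "salary": "100"}', '{"name": "", "salary": "5"}']
    warehouse = []
    loaded = load(transform(extract(raw)), warehouse)
    ```

    In a production setting each step would typically be a task in an orchestrator such as Apache Airflow, with the sink replaced by a real warehouse write.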
  •  ...landing through cleansed, conformed staging tables, including deduplication, standardization, code mapping, and entity resolution. # Develop Automated Ingestion Pipelines: You will use Snowpipe, Matillion, or custom solutions with reliability, observability, and minimal... 
    Remote job
    Full time

    Precision Medicine Group

    work from home
    a month ago
  •  ...platform (SKP) technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and manage data migration solutions using the Syniti Knowledge Platform. Configure and optimize Syniti Data Replication (SDR) and... 
    Remote job
    Full time
    Local area

    Emedgene, an Illumina Company

    work from home
    more than 2 months ago
  •  ...the data flow activities on cloud data warehouse environments. Develop data pipeline code using Python, Java, AWS Lambda and/or Azure...  ...PowerBI, Snowflake data warehouse, MS SQL data warehouse, Apache Spark or Hadoop, SparkR, Linux/PowerShell scripting, AWS Lambda... 
    Remote job
    Contract work

    TWO95 International, Inc

    work from home
    more than 2 months ago
  •  ...understand scraping task requirements and report issues. Prepare and share periodic reports on scraping activities with stakeholders. Develop necessary pipelines to ingest data into the Datalake and perform required transformations. Requirements What you will bring... 
    Remote job
    Full time

    Cxg

    work from home
    a month ago
  •  ...CLI. - Implementation experience with container orchestration solutions (Kubernetes/OpenShift). - Knowledge of Big Data (Hadoop/Hive/Spark) and Cloud technologies (AWS, Azure, GCP). - Understanding of distributed system architecture, high availability, scalability, and fault... 
    Weekend work
    Afternoon shift

    QC HR Services

    work from home
    27 days ago
  •  ...network KPIs, usage, billing) - Design and enforce data modeling standards using DBT, including fact/dimension models and telecom KPIs - Develop executive-level dashboards and operational reports in Tableau and Amazon QuickSight - Partner with network, billing, product, and... 

    Veltris

    work from home
    23 days ago
  • Rs 8.5 - 25 lakhs p.a.

     ...Strong SQL, Azure Cloud, SAP BW / HANA (good to have). Responsibilities: Stabilize and optimize Snowflake data pipelines. Develop/enhance Power BI dashboards. Troubleshoot and resolve data/reporting issues. Support data integration and short-term project needs... 
    Contract work
    Temporary work
    Hybrid work
    Remote job
    Shift work

    Diverse Lynx India Private Limited

    work from home
    11 days ago
  • Rs 3 - 12 lakhs p.a.

     ...RESPONSIBILITIES: Design, develop, and maintain data infrastructure, databases, and data pipelines Develop and implement ETL processes to extract, transform, and load data from various sources Ensure data accuracy, quality, and accessibility, and resolve data-related... 

    Coders Brain

    work from home
    20 days ago
  •  ...AI/ML techniques and big data processing frameworks like Apache Spark and PySpark. Responsibilities Adhere to coding and...  ...Work closely with Business Analysts and Senior Data Developers to consistently achieve sprint goals Assist in estimation... 
    Start today

    Numerator

    work from home
    6 days ago
  •  ...concepts and data pipelines.   ~ Exposure to cloud platforms (AWS, Azure, or GCP) is a plus.   ~ Familiarity with big data frameworks (Hadoop, Spark, Kafka) is an advantage.   ~ Good problem-solving skills and ability to work independently in a remote setup.... 
    Internship
    Remote job
    work from home
    3 hours ago (new)
  •  ...and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of...  ...big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, performance tuning on... 
    Remote job
    Full time

    Emedgene, an Illumina Company

    work from home
    more than 2 months ago
  •  ...the world. In this vital role you will be responsible to design, develop, and optimize data pipelines, data integration frameworks, and metadata...  ...technologies such as Databricks, PySpark, SparkSQL Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies Proficiency... 
    Remote job
    Full time
    Hybrid work
    Local area

    Emedgene, an Illumina Company

    work from home
    more than 2 months ago
  •  ...processes, and data modeling. This position will concentrate on developing and refining data pipelines, ensuring data fidelity, and facilitating...  ...streaming solutions for real-time data processing. Improve Spark job performance by addressing memory management, partitioning... 
    Remote job
    Flexible hours

    Mindera

    work from home
    more than 2 months ago
  • Senior Software Engineer (Big Data, GenAI). Experience: 5 to 12 years. Location: Bangalore, India (Remote). Are you energized by the idea of innovating with Generative AI? Do you want to create global impact while tackling challenges at the forefront of Artificial...
    Remote job
    Full time
    Worldwide
    Shift work

    Extreme Networks

    work from home
    more than 2 months ago
  •  ...Responsibilities: Technical Leadership: - Lead the design, architecture, and implementation of end-to-end data pipelines using Python, Databricks, Spark, and Delta Lake. - Provide technical direction on data modeling, ETL/ELT frameworks, and best practices. - Mentor and guide junior and... 
    Long term contract

    EdgeVerve

    work from home
    20 days ago