Average salary: Rs 2,824,157 per year
- ...extraction, transformation, and loading (ETL) processes using Python and SQL. Feature Engineering: Collaborate with data scientists to develop and optimize features that enhance model performance and drive business insights. AWS Management: Utilize AWS services (such as S3)... (Remote job)
- ...Job Title: Principal Clinical Data Science Programmer [Elluminate]. Location: Bangalore/Remote. What you will be doing: Develop, implement and maintain data review and data cleaning capabilities for sponsor-led Phase I-IV clinical trials using sponsor technologies... (Remote job, Permanent employment, Flexible hours)
$500 p.a.
- ...replace existing third-party solutions. Pipeline Development: Develop and optimize data pipelines for event data ingestion and... ...Evaluate and integrate open-source technologies like Apache Druid, Spark, or similar tools based on project requirements and performance needs... (Remote job, Full time, Work from home, Home office, Flexible hours)
- ...database technologies (e.g., PostgreSQL, SQL Server, MySQL). Hands-on experience with data processing frameworks (e.g., Apache Spark, Apache Kafka). Familiarity with cloud services (e.g., AWS, Azure, Google Cloud) and data warehousing solutions. Strong programming... (Remote job, Full time)
- The Role: We're looking for a Junior Data Engineer to support client data integrations and master data management. You'll unify data across systems like Stripe, QuickBooks, HubSpot, and Gusto using Syncari, an MDM (Master Data Management) platform, ensuring clean, consistent... (Remote job, Full time, Flexible hours)
- Data Center Engineer. Start Date: Starts immediately. CTC (annual): Competitive salary. Experience: ... (Remote job, Immediate start)
- ...will work closely with data scientists, analysts, and application developers to leverage the power of graph databases for complex data... ...Platforms: AWS, Azure, or GCP. Big Data Technologies: Hadoop, Spark, Kafka. Strong understanding of data modelling, ETL pipelines,... (Remote job, Immediate start)
- ...Data Engineer with 4-5 years of hands-on experience in Big Data to develop and maintain scalable data processing solutions on the Hadoop... ...Develop and optimize large-scale data processing jobs using Apache Spark. Manage and process structured data in HDFS and Hive. Ensure... (Remote job, Contract work, Immediate start)
- ...get your foot in the door with one of the most prominent players in the AI/LLM space today. We're primarily seeking JavaScript/React developers with 3+ years of experience to train large AI language models, helping cutting-edge generative AI models write better frontend code... (Remote job, Hourly pay, Weekly pay, 40 hours per week, Long term contract, Full time, Contract work, Flexible hours)
- Who we are: Motive empowers the people who run physical operations with tools to make their work safer, more productive, and more profitable. For the first time ever, safety, operations and finance teams can manage their drivers, vehicles, equipment, and fleet-related... (Remote job, Full time)
- ...Microsoft Azure, or Amazon Web Services is preferred, with GCP experience being a strong advantage. Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL workflows. Write optimized and efficient SQL queries for data transformation and analysis...
- ...platform into a production-grade DataOps ecosystem. Key Responsibilities: Design, build, optimise, and maintain scalable data pipelines. Develop and manage ELT pipelines, orchestration, and automation using Python. Capture and onboard metadata into enterprise data catalogues... (Immediate start)
- ...Work with PostgreSQL/PostGIS for spatial data storage and querying. Use QGIS/ArcGIS for spatial analysis and visualization. Develop and maintain workflows using FME (Feature Manipulation Engine). Collaborate with cross-functional teams to ensure data accuracy and... (Remote job, Immediate start)
- ...to the Data Engineering landscape. Implement the Data Quality Framework in the two pilot ETLs. Skills: Strong Data Engineering (Spark, Databricks, Synapse, etc.). Intermediate/Advanced Databricks or similar (Azure Synapse, etc.) experience. Databricks experience... (Hybrid work, Work at office, Remote job, Flexible hours)
Rs 12 - 18 lakhs p.a.
- Responsibilities: Create and manage ETL workflows using Apache Airflow. Write efficient SQL queries for data processing. Build automated data pipelines using Python (Pandas, PySpark). Deploy and maintain pipelines on cloud platforms. Monitor, fix, and improve pipeline... (a rough sketch of this kind of workflow appears after the listings below)
- ...landing through cleansed, conformed staging tables, including deduplication, standardization, code mapping, and entity resolution. Develop Automated Ingestion Pipelines: You will use Snowpipe, Matillion, or custom solutions with reliability, observability, and minimal... (Remote job, Full time)
- ...platform (SKP) technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and manage data migration solutions using the Syniti Knowledge Platform. Configure and optimize Syniti Data Replication (SDR) and... (Remote job, Full time, Local area)
- ...the data flow activities on cloud data warehouse environments. Develop data pipeline code using Python, Java, AWS Lambda and/or Azure... ...Power BI, Snowflake data warehouse, MS SQL data warehouse, Apache Spark or Hadoop, SparkR, Linux/PowerShell scripting, AWS Lambda... (Remote job, Contract work)
- ...understand scraping task requirements and report issues. Prepare and share periodic reports on scraping activities with stakeholders. Develop necessary pipelines to ingest data into the data lake and perform required transformations. Requirements: What you will bring... (Remote job, Full time)
- ...CLI. - Implementation experience with container orchestration solutions (Kubernetes/OpenShift). - Knowledge of Big Data (Hadoop/Hive/Spark) and Cloud technologies (AWS, Azure, GCP). - Understanding of distributed system architecture, high availability, scalability, and fault... (Weekend work, Afternoon shift)
- ...network KPIs, usage, billing). Design and enforce data modeling standards using DBT, including fact/dimension models and telecom KPIs. Develop executive-level dashboards and operational reports in Tableau and Amazon QuickSight. Partner with network, billing, product, and...
Rs 8.5 - 25 lakhs p.a.
- ...Strong SQL, Azure Cloud, SAP BW/HANA (good to have). Responsibilities: Stabilize and optimize Snowflake data pipelines. Develop/enhance Power BI dashboards. Troubleshoot and resolve data/reporting issues. Support data integration and short-term project needs... (Contract work, Temporary work, Hybrid work, Remote job, Shift work)
Rs 3 - 12 lakhs p.a.
- ...RESPONSIBILITIES: Design, develop, and maintain data infrastructure, databases, and data pipelines. Develop and implement ETL processes to extract, transform, and load data from various sources. Ensure data accuracy, quality, and accessibility, and resolve data-related...
- ...AI/ML techniques and big data processing frameworks like Apache Spark and PySpark. Responsibilities: Adhere to coding and... ...Work closely with Business Analysts and Senior Data Developers to consistently achieve sprint goals. Assist in estimation... (Start today)
- ...concepts and data pipelines. Exposure to cloud platforms (AWS, Azure, or GCP) is a plus. Familiarity with big data frameworks (Hadoop, Spark, Kafka) is an advantage. Good problem-solving skills and ability to work independently in a remote setup... (Internship, Remote job)
- ...and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of... ...big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, performance tuning on... (Remote job, Full time)
- ...the world. In this vital role you will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata... ...technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies. Proficiency... (Remote job, Full time, Hybrid work, Local area)
- ...processes, and data modeling. This position will concentrate on developing and refining data pipelines, ensuring data fidelity, and facilitating... ...streaming solutions for real-time data processing. Improve Spark job performance by addressing memory management, partitioning... (Remote job, Flexible hours)
- Senior Software Engineer (Big Data, GenAI). Experience: 5 to 12 years. Location: Bangalore, India (Remote). Are you energized by the idea of innovating with Generative AI? Do you want to create global impact while tackling challenges at the forefront of Artificial... (Remote job, Full time, Worldwide, Shift work)
- ...Responsibilities: Technical Leadership: Lead the design, architecture, and implementation of end-to-end data pipelines using Python, Databricks, Spark, and Delta Lake. Provide technical direction on data modeling, ETL/ELT frameworks, and best practices. Mentor and guide junior and... (Long term contract)
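Several of the listings above describe the same core task: orchestrate extract-transform-load steps with Apache Airflow and implement the transformations in Python with Pandas or PySpark. The following is only a minimal sketch of that pattern, written against the Airflow 2.x API; it is not taken from any employer, and the DAG name, file paths, and sample data are hypothetical placeholders.

```python
# Minimal illustrative Airflow DAG: extract a small dataset, clean it with Pandas,
# and "load" the result. All names and paths below are placeholders.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

RAW_PATH = "/tmp/raw_events.csv"      # hypothetical source file
CLEAN_PATH = "/tmp/clean_events.csv"  # hypothetical output file


def extract() -> None:
    # A real pipeline would pull from an API, database, or object store instead.
    df = pd.DataFrame({"user_id": [1, 2, 2], "amount": [10.0, None, 5.0]})
    df.to_csv(RAW_PATH, index=False)


def transform() -> None:
    # Basic cleaning: drop rows with missing values and remove duplicates.
    df = pd.read_csv(RAW_PATH)
    df = df.dropna().drop_duplicates()
    df.to_csv(CLEAN_PATH, index=False)


def load() -> None:
    # A real job would write to a warehouse (e.g. via a SQL connection).
    print(pd.read_csv(CLEAN_PATH).to_string(index=False))


with DAG(
    dag_id="example_etl_pipeline",   # placeholder DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ scheduling argument
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```

In practice the extract and load steps would talk to real sources and a warehouse, and heavier transformations would typically move from Pandas to PySpark or a managed platform such as Databricks, as the listings above suggest.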
