Average salary: Rs 1,448,602 per year
Rs 3.5 - 12 lakhs p.a.
...Job Summary: The Spark Developer is responsible for designing and developing big data applications using Apache Spark and related technologies. This role involves working on large-scale data processing, real-time analytics, and building efficient data pipelines for various...
- Python & Spark developers with Snowflake experience, preferably with Airflow. Location: Hyderabad. Client: TechM. End client: Creditone. Experience: 4-5 years.
Rs 4 - 7 lakhs p.a.
...The developer must have sound knowledge of Apache Spark and Python programming, and deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations. Create...
Rs 5 - 7 lakhs p.a.
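The read, merge, enrich, load flow this listing describes can be sketched as follows. A real PySpark job would use `spark.read` and `DataFrame.join`; plain Python dicts stand in here so the flow is visible without a Spark cluster, and all field names and the enrichment rule are hypothetical:

```python
# Illustrative sketch of the read -> merge -> enrich -> load pattern.
# In PySpark these steps map to spark.read, DataFrame.join, withColumn,
# and df.write; plain dicts are used here for a self-contained example.

def merge(records, lookup, key):
    """Left-join each record with a lookup table on `key`."""
    return [{**r, **lookup.get(r[key], {})} for r in records]

def enrich(records):
    """Add a derived field (hypothetical rule: flag high-value orders)."""
    return [{**r, "high_value": r.get("amount", 0) > 1000} for r in records]

# "Read" from an external source (stand-in for spark.read.csv or .jdbc).
orders = [
    {"order_id": 1, "customer_id": "c1", "amount": 1500},
    {"order_id": 2, "customer_id": "c2", "amount": 300},
]
customers = {"c1": {"region": "south"}, "c2": {"region": "north"}}

# Merge and enrich, then "load" to the target (stand-in for df.write).
result = enrich(merge(orders, customers, "customer_id"))
```

The same three-stage shape carries over directly once the dicts become DataFrames and the helpers become joins and column expressions.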
...Job Description Role: Spark & Scala Developer. Experience: 5+ years. Our expectation is that the candidate has strong DataFrame and programming skills, experience with complex objects and Scala datasets, and good communication to express views...
- ...development, and technical troubleshooting. Building pipelines in Spark, tuning Spark queries. Writes secure and high-quality code... ...least one programming language with limited guidance. Designs, develops, codes, and troubleshoots with consideration of upstream and downstream...
- ...requirements gathering, to testing for quality and performance. CECM develops intelligent, data-driven tools which enable strategic... ...massive volumes of operational data. The technology stack includes Spark, Hive, Presto, Airflow, Docker, Postgres, and microservices with...
- ...you are well experienced with Azure Synapse/Databricks and Apache Spark. Without these skills, please do not apply for this position. We... ...Key Responsibilities Data Pipeline Development: Design, develop, and maintain scalable data pipelines using Azure Synapse, Databricks... Immediate start
- ...software code development. Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service... ...of ability to analyze data to drive solutions. Exposure to Spark and Big Data-related technologies. Exposure to cloud technologies (...
- ...the Cloud technologies like OpenShift/PCF etc. Proficiency in developing web-based applications; should have experience in UI and Server... ...: BigQuery, Iceberg, Cloud Storage, Kubernetes Engine, Apache Spark, Airflow. Strong programming skills in Python, Java, and... Contract work
- ...unstructured data. Design and build complex data pipelines using Apache Spark (Scala & PySpark), Kafka Streams (Java), and cloud-native... ...for high performance, scalability, and cost-effectiveness. Develop and optimize real-time data streaming applications using Kafka Streams...
Rs 7 - 12 lakhs p.a.
...in batch and real-time pipelines. Support and maintain data pipelines involving Yellowbrick/Netezza or other RDBMS platforms. Develop and enhance Unix shell scripts to automate operational and support tasks. Manage job scheduling, dependencies, and batch cycles using... Long term contract
- ...data pipelines and infrastructure that power our organization's data-driven decisions and AI capabilities. This role is critical in developing and maintaining our enterprise-scale data processing systems that handle high-volume transactions while ensuring data security, privacy...
- ...to data stories for internal and external consumption. Serve as the subject matter expert for the Enterprise Data Model. Design, develop, and extend dbt code to enhance the Enterprise Dimensional Model. Create and maintain architecture and systems documentation. Work at office
- ...technologies and methodologies in data engineering and machine learning. Develops and maintains scalable data pipelines to support machine learning... ...(SQL and NoSQL), distributed data processing (e.g., Hadoop, Spark), and cloud platforms (AWS, GCP, Azure). ~ Experience working... Permanent employment
- ...initiatives for DLP platforms. Provide direct support for DLP products. Coordinate with other organizations and manufacturer support. Develop and implement automation and monitoring to improve system availability and supportability. Education and/or Work Experience... Flexible hours
Rs 6 - 8 lakhs p.a.
...scalable and efficient data pipelines using Databricks for data ingestion, transformation, and processing. Programming & Scripting: Develop robust data solutions and automation scripts primarily using Python. Cloud Data Warehousing: Work with and optimize data...
- ...Engineer – Neo4j. The ideal candidate is a hands-on technology developer with experience in developing scalable applications and platforms... ...and experience with RDBMSs. Project experience in Python, Spark, PySpark, Scala, NiFi, Hive, NoSQL DBs. Deep understanding...
- ...Data Collection and Ingestion: Work with continuously running AWS APIs to capture client/user events or keyword-based logs. Develop AWS CLI scripts and automation logic to reliably collect and store logs. Data Preprocessing: Clean, sanitize, and preprocess raw...
- ...and ideas matter. A team where everyone makes play happen. Electronic Arts Inc. is a global leader in interactive entertainment. We develop games, content and online services across platforms. We have a broad portfolio of brands that span the most popular genres. We... Work at office, Local area
Rs 3 - 10 lakhs p.a.
# No of years experience: 5+ Detailed job description - Skill Set: # Bigdata Testing - Hadoop, HDFS, Hive, Kafka, Spark, SQL, UNIX Mandatory Skills # Bigdata Testing - Hadoop, HDFS, Hive, Kafka, Spark, SQL, UNIX Good to Have Skills # Bigdata Testing - Hadoop, HDFS,...
- Responsibilities: Analyze and optimize slow or inefficient SQL and NoSQL queries across legacy systems. Profile and tune MySQL and Cassandra clusters for high throughput and low latency. Work closely with application teams to modernize schema designs and improve access... US shift
- ...helping to change millions of lives. Ready? As Principal Data Management Programmer within our Hyderabad Hub, you'll be responsible for developing programs for data validation, data review, and protocol deviation deliverables for assigned projects, providing timely support to...
- ...Business Intelligence Platforms to enable users to easily toggle between reporting and analysis tasks, promoting efficient workflows. Develop Reports: Develop reports that can be distributed over the web, via email, or through portals or file servers, ensuring flexibility in... Long term contract, Hybrid work, Worldwide
- ...a versatile Data Engineer with 2+ years of experience to build and scale the data infrastructure powering our organization. You will develop robust pipelines and optimize architectures that bridge the gap between traditional analytics and next-generation AI. In this role, you... Full time, Worldwide
- ...reporting platform that powers data-driven decision-making. This is an opportunity to work with BigQuery, Spark, Airflow, and Python, designing and developing robust ETL pipelines and APIs that ensure high data quality, reliability, and performance. Qualifications...
- ...Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy... Full time, Work at office, Immediate start
- ...that are changing the world and having fun doing it! Your Role: Develop and maintain data pipelines while achieving high reliability and... ...databases. Deep understanding and prior experience with Spark/PySpark... Fixed term contract, Casual work, Work at office
Rs 2 - 5 lakhs p.a.
...Job Description: Develop and maintain data pipelines, ELT processes, and workflow orchestration using Apache Airflow, Python and PySpark... ..., and data modeling. ~ Proficiency in using Apache Airflow and Spark for data transformation, data integration, and data management... Permanent employment, Full time
- ...data techniques. We are looking for a passionate data engineer to develop a robust, scalable data model and optimize the consumption of... ...Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with any ETL tool like Informatica, ODI, SSIS... Immediate start
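The Airflow + PySpark orchestration these listings ask for typically amounts to a small DAG file. This is a minimal sketch assuming Airflow 2.4+; the `dag_id`, schedule, and task bodies are hypothetical, and a real deployment might use `SparkSubmitOperator` rather than `PythonOperator` to launch the PySpark job:

```python
# Minimal Airflow DAG sketch (assumes Airflow >= 2.4). Hypothetical names;
# the task callables are placeholders for real ELT logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_load():
    ...  # e.g. pull from source systems and stage raw data


def transform():
    ...  # e.g. spark-submit a PySpark transformation job


with DAG(
    dag_id="elt_pipeline",           # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    el = PythonOperator(task_id="extract_load", python_callable=extract_load)
    tf = PythonOperator(task_id="transform", python_callable=transform)
    el >> tf                         # transform runs only after extract_load
```

The `>>` operator expresses task dependencies; Airflow's scheduler then runs the DAG once per day and retries or alerts according to the DAG's (here default) settings.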
Rs 5 - 10 lakhs p.a.
...Engineer to support mission-critical projects leveraging Palantir Foundry. In this role, you will work closely with stakeholders to develop scalable data pipelines, build robust applications, and integrate various data sources to drive impactful decision-making. You'll contribute...Full time
