Average salary: Rs 1,454,325 per year
Rs 3 - 10 lakhs p.a.
- ...Engineers. Experience: 3 to 8 years. Location: Chennai / Pune / Mumbai / Bangalore / Hyderabad. Mandatory skills: Big Data, Hadoop, Java, Spark, SparkSQL, Hive. Qualification: B.Tech / B.E / MCA, Computer Science background, any specialisation...
- ...industry standards, and make evaluative judgements. Recommend and develop security measures in post-implementation analysis of business... ...applications. 5+ years of building, debugging and fine-tuning Apache Spark based applications. Strong fundamentals of Data Engineering... (Full time)
- ...Engineering, or equivalent. Proficiency in Python; working proficiency in Scala. Strong experience with Big Data processing (Spark, Databricks) and event streaming (Kafka). Experience with orchestration and platform tooling such as Airflow; ability to... (Full time, Local area, Flexible hours)
- ...is able to communicate issues to managers and the team. Leads and develops internal standards for data administration and data system documentation... ...Exposure to real-time/streaming platforms (Kafka, Spark Streaming, Flink). Familiarity with CI/CD and version control (Git...
- ...candidate is an experienced data wrangler who will support our software developers, database architects and data analysts on business initiatives... ...Skills needed: strong experience in Data Lake technologies: Spark, distributed file systems, YARN, cloud services (preferably GCP /... (Full time, Work at office)
- Job Title: Senior Data Engineer (Database Design & Optimization Expert). Location: Chennai. Experience: 10+ years. Employment type: Full-time. Work model: In-office. About Linarc: Linarc is revolutionizing the construction industry. As the emerging leader in... (Full time, For contractors, Work at office, Flexible hours)
- ...Design, build, and optimize scalable data pipelines using Azure Data Engineering services and Databricks. Big Data processing: develop and maintain large-scale data transformations using PySpark, ensuring performance, reliability, and efficiency. Data... (Hybrid work, Immediate start)
Rs 6 - 10 lakhs p.a.
- RARR Technologies is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be... (Permanent employment, Full time)
- ...organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live. About... (Full time, Hybrid work)
Rs 2 - 5 lakhs p.a.
Seeking a strong data analyst who can work independently. Prior experience in the banking domain and an understanding of various finance data transactions is a plus. Should be strong in the following: Python, PySpark, SQL, Airflow, Trino, Hive. Offshore...
Rs 2.5 - 6 lakhs p.a.
...Accountabilities: Develop, implement, and maintain data pipelines using technologies like Snowflake, dbt, and Fivetran. Automate and orchestrate workflows and data processes using Airflow. Develop scalable data infrastructure using AWS services (such as S3, RDS, and...
Rs 3 - 7 lakhs p.a.
...fine-tuning of the knowledge base and user data, ensuring data quality, scalability, and efficiency. Data processing & transformation: develop data transformation processes to prepare data for Natural Language Processing (NLP) models, facilitating personalized and accurate... (Permanent employment, Full time)
Rs 3 - 5.5 lakhs p.a.
...Responsibilities: Should be able to write code for the given scenario. Should have knowledge of Spark-related queries. Must have Flink coding/troubleshooting knowledge at the pipeline level. Core Python (for example: apply validation rules to a CSV file, string comparison, collections...
Rs 7 - 10 lakhs p.a.
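The listing above names a concrete core-Python task: applying validation rules to a CSV file, using string comparison and the collections module. A minimal sketch of that kind of task, assuming hypothetical column names and rules (the listing does not specify them):

```python
# Sketch: validate CSV rows with core Python only.
# The columns (id, name, age) and the two rules are illustrative assumptions.
import csv
import io
from collections import Counter

ROWS = """id,name,age
1,Alice,34
2,,41
3,Bob,-5
"""

def validate(row):
    """Return a list of rule violations for one CSV row."""
    errors = []
    if not row["name"].strip():            # string rule: name must be non-empty
        errors.append("missing name")
    if not row["age"].lstrip("-").isdigit() or int(row["age"]) < 0:
        errors.append("invalid age")       # numeric rule: age must be a non-negative integer
    return errors

counts = Counter()                         # collections: tally violations by rule
for row in csv.DictReader(io.StringIO(ROWS)):
    for err in validate(row):
        counts[err] += 1

print(dict(counts))                        # {'missing name': 1, 'invalid age': 1}
```

In an interview setting, the same shape (row-level rule function plus a `Counter` summary) extends naturally to files read with `open(path, newline="")` instead of the inline string used here.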
...pipelines, and fixing of defects. Constantly looking for opportunities to optimize data pipelines to improve performance. Must have: coding skills in Spark/PySpark, Python and SQL; knowledge of AWS tools: Glue, Athena, Step Functions and S3...
- ...dedicated to offering our customers the highest quality products and services, and are looking for people with the inspiration and talent to develop with us as we pursue our ambitious growth strategy. We are a leader in developing state-of-the-art technology and it is this... (Full time, No agency)
Rs 3 - 6 lakhs p.a.
...organize and store that data in the project applications. Participates in meetings with Project Managers to determine client needs; develops customized solutions within the technology platform; creates estimates, timelines and development goals; and designs, codes, and...
- ...the Hadoop ecosystem and Big Data technologies. Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr). System-level understanding: data structures, algorithms, distributed storage & compute. Exposure and /... (Hybrid work, Work at office, Remote job, Flexible hours)
- Role: Data Engineer. Type: Freelance/Contract. Duration: 6+ months. Experience: 6-8 years. Requirements: expertise in Data Vault 2.0 and dimensional modeling principles; expertise in modular ELT pipelines using dbt for data transformation and version control... (Contract work, Freelance)
- ..., Lambda, RDS) and Azure Databricks. Build and optimize end-to-end ETL/ELT workflows for structured and semi-structured data. Develop Spark transformations using PySpark and SQL with strong performance and cost practices. Translate business needs into data models, curated... (Contract work, Shift work)
Rs 2.5 - 11 lakhs p.a.
...project we are working on the bleeding edge of Big Data technology to develop a high-performance data analytics platform, which handles petabytes... ...Scala, Java, or Python. In-depth knowledge of Hadoop and Spark; experience with data mining and stream processing technologies (...
Rs 5 - 10 lakhs p.a.
...Key responsibilities: Data engineering development: design, develop, and test PySpark-based applications to process, transform, and analyze... ...mechanisms to ensure data integrity and system resilience within Spark jobs. Optimize PySpark jobs for performance, including... (Permanent employment, Full time)
- ...to join our diverse and dynamic team. As a Lead Clinical Data Science Programmer at ICON, you will play a key role in designing and developing clinical data solutions, ensuring the efficient handling, analysis, and reporting of complex clinical trial data. You will collaborate... (Flexible hours)
Rs 1 - 3 lakhs p.a.
...technical discipline with a minimum of 4 years' professional experience in software development. Design, develop, and maintain large-scale data pipelines using Hadoop, Spark, and SQL. Collaborate with cross-functional teams to gather requirements and deliver high-quality...
- ...modelling, analysis, reporting and BI. Key accountabilities: develop and maintain ETL/ELT pipelines to ingest data from ERP, CRM,... ...through partitioning, clustering and tuning of MS Fabric, SQL and Spark workloads; monitor pipeline health and costs. Collaborate with... (For contractors, Local area, Worldwide)
Rs 3 - 6 lakhs p.a.
...Excellent communication and collaboration skills within an international team environment. Key responsibilities: design, develop, and maintain high-performance data solutions within SAP BW/4HANA, ensuring data quality and integrity. Develop efficient ETL processes...
Rs 3 - 6 lakhs p.a.
...Roles & responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing using Scala. Create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. Collaborate with...
Rs 3.5 - 6 lakhs p.a.
...platform you operate in allows knowledge sharing and distribution. Required skills: 5+ years of experience in distributed computing (Spark) and software development; 3+ years of experience in Spark-Scala; 5+ years of experience in Data Engineering; 5+ years of...
- Role: Senior Data Engineer (GCP, SQL, Architect, Data Modeller). Work mode: Onsite. Location: Chennai. Years of experience: 7-9. Notice period: Immediate. Primary skills: GCP, SQL, Architect, Data Modeller. Job description: 1. Technical role... (Immediate start)
- Hiring: Data Engineer. Experience: 3-5 years. Work mode: Hybrid. Location: Chennai. Notice period: Immediate to 20 days. Mandatory skills: Python, SQL, PySpark, strong Data Engineering fundamentals (ETL/ELT, data pipelines). Good to have: LLM engineering... (Hybrid work, Immediate start)
Rs 2.5 - 5 lakhs p.a.
...optimize scalable ETL pipelines using Apache Airflow or similar frameworks to process and transform large datasets efficiently. Utilize Spark (PySpark), Kafka, Flink, or similar tools to enable distributed data processing and real-time streaming solutions. Deploy, manage,...
