Average salary: Rs 431,666 per year
Hadoop/Omnia Dev Engineer: To develop and deliver code for the work assigned in accordance with time, quality, and cost standards. Job responsibilities... ..., configuring and supporting Hadoop.- Transform data using Spark & Scala- Translate complex functional and technical requirements...
...coder with good experience in programming languages like Java, Python, or Scala.- Hands-on experience with the Big Data stack, including Hadoop, MapReduce, Spark, HBase, and Elasticsearch.- Good understanding of programming principles and development practices like check-in policy, unit...
...Platform, including BigQuery, Cloud Storage, Cloud Composer, Dataproc, Dataflow, Pub/Sub.- Experience with Big Data tools such as Hadoop and Apache Spark (PySpark)- Experience developing DAGs in Apache Airflow 1.10.x or 2.x- Good problem-solving skills- Detail-oriented- Strong...
...Services- Founding Year: 1991- No. of Employees: 5001-10000- Role: Spark Developer- Experience: 6+ years- Location: Bangalore,... ...Experience with data storage and processing technologies, such as Hadoop, Hive, or HBase.- Familiarity with data warehousing and ETL processes...
...understanding of the mechanisms necessary to successfully implement a change.Good to have:- Experience with data management and data lineage tools like Collibra, Alteryx, and Solidatus- Experience in Data Vault modeling.- Knowledge of Hadoop and Google BigQuery is a plus. (ref:hirist.tech)
...with 5+ years on PySpark/NoSQL is mandatory)1. Person should be strong in PySpark2. Should have hands-on experience with the MWAA (Airflow) / AWS EMR (Hadoop, Hive) framework3. Hands-on, working knowledge of Python4. Knowledge of AWS services like EMR, S3, Lambda, Step Functions, Aurora -...
...training or certification in software engineering concepts and Java/Spark, and 3+ years of applied experience
Hands-on practical experience... ...extraction, transformation, and loading (ETL) of data using Java/Spark on the Hadoop platform
Experience in developing, debugging, and maintaining...
Primary Skills: Databricks with PySpark/Spark/Python and SQLSecondary Skills: ADFJob Overview:- 7+ years of experience with detailed knowledge of data warehouse technical architectures, ETL/ELT, and reporting/analytic tools.- 4+ years of work experience with very large...
...tuning, and deploying the apps to the production environment.Should have good working experience with: - Hadoop ecosystem (HDFS, Hive, YARN, file formats like Avro/Parquet) - Spark - Batch processing - Setting up ETL pipelines - Python or Java programming language is mandatory. -...
Key Skills:- 4-7 years of experience in data engineering development.- Hadoop ecosystem (HDFS, Hive, YARN, file formats like Avro/Parquet)- Python programming language is mandatory.- PySpark- Excellent with SQL- Excellent with Airflow is a plus.Good to Have:- Airflow- Good aptitude, strong...
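Several of the listings above ask for hands-on experience setting up ETL pipelines with PySpark and SQL. As a minimal, hypothetical sketch of the extract-transform-load pattern (using only the Python standard library in place of Spark, with invented sample data), one might write:

```python
import csv
import io
import sqlite3

# Hypothetical raw input; a real pipeline would read Avro/Parquet files from HDFS or S3.
RAW = """user_id,amount
1,120.50
2,75.00
1,30.25
"""

def extract(text):
    # Parse CSV rows into dicts.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Cast types and aggregate total amount per user (what a Spark groupBy would do).
    totals = {}
    for r in rows:
        uid = int(r["user_id"])
        totals[uid] = totals.get(uid, 0.0) + float(r["amount"])
    return totals

def load(totals, conn):
    # Persist aggregated results to a SQL table.
    conn.execute("CREATE TABLE IF NOT EXISTS user_totals (user_id INTEGER PRIMARY KEY, total REAL)")
    conn.executemany("INSERT INTO user_totals VALUES (?, ?)", sorted(totals.items()))
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
result = dict(conn.execute("SELECT user_id, total FROM user_totals").fetchall())
```

A production pipeline would swap the CSV parsing for Spark DataFrame reads, but the extract/transform/load separation stays the same.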
Job roles and responsibilities :- 3+ years served as Data Engineer or DataOps role.- 3+ years of experience working with SQL and Spark.- 2+ years of experience with Microsoft Azure Data Lake, Azure Data Factory, Azure Synapse and SQL Database products or equivalent products...
...coder with good experience in programming languages like Java or Python.- Hands-on experience with the Big Data stack, including PySpark, HBase, Hadoop, MapReduce, and Elasticsearch.- Good understanding of programming principles and development practices like check-in policy, unit...
...with common business use cases of analytics.Nice-to-Have:- Experience with data engineering and big data tools (e.g., Google BigQuery, Hadoop/AWS EMR, Kafka).- Programming in NoSQL environments.- Experience with data visualization and presentation using tools like Tableau,...
...Job Description:
We are seeking a skilled and experienced Hadoop Developer with expertise in Java and Kafka to join our team. The ideal... ...: Leverage Hadoop technologies such as HDFS, MapReduce, Spark, Hive, and HBase for batch and real-time data processing.
Data...
...equivalent designing large, data-heavy distributed systems and/or high-traffic web apps.- Must have 4+ years of experience using big data tools such as Spark/PySpark- Must have experience in Airflow pipeline orchestration.- At least 4 years of hands-on implementation experience working...
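The Airflow orchestration requirement above is, at its core, about expressing tasks as a directed acyclic graph and running them in dependency order. A small sketch of that idea using Python's standard-library `graphlib` (with a made-up five-task pipeline, not real Airflow API calls) could look like:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# mirroring how an Airflow DAG wires extract >> validate >> transform >> load.
dag = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
    "report": {"load"},
}

# static_order() yields tasks so every dependency runs before its dependents.
order = list(TopologicalSorter(dag).static_order())
```

In real Airflow 2.x the same dependencies would be declared with operators and the `>>` bitshift syntax inside a `DAG` context; the scheduler performs this topological ordering for you.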
...Experienced Apache Spark & Java developer responsible for supporting and enhancing an enterprise-wide data platform. In this role,... ...performance streaming frameworks
• Should have experience working with Hadoop ecosystem components – Hive, Pig, MapR, Sqoop
• Utilizing...
...We are looking for an exceptional Spark/Scala engineer with 5+ years' experience who will be responsible for:
Experience- 5+ Yrs
Location- Hyderabad & Indore
Responsibilities:
Implementing large-scale Spark applications and fine-tuning them at runtime
Design and implement...
Location: Chennai, Pune, Noida, Kochi, Hyderabad, Trivandrum.Job Description:- Proficient in SQL, Spark, Scala, and AWS, with a strong command of these technologies.- Minimum 6-7 years of relevant experience in Spark and SQL, plus 2-3 years of hands-on practice in AWS.-...
...Designing and implementing data pipelines using Azure Data Factory (ADF).- Developing and maintaining data processing scripts using Python, Spark, and Scala.- Building and optimizing data storage solutions using Azure Data Lake Storage (ADLS), Blob Storage, and Synapse.-...
...implementing, and maintaining data processing systems. You will work with large-scale data sets using technologies such as Apache Spark, Scala, Hadoop, Jenkins CI/CD, and Microservices. This is an excellent opportunity to contribute to the development of cutting-edge data...
...Primary Skill – Scala Spark
Total Exp – 5.2 to 13 Years
Notice Period – ONLY 0 to 60 Days Joiners
Job Location – Pune, Mumbai... ...to develop analytics models
• Continuous delivering on Hadoop and other Big Data Platforms
• Automating processes where possible...
Job Description:Primary & mandatory: PySparkSecondary: GCP- At least 5 years of experience in Big Data, Hadoop Data Platform (HDP), and ETL, and capable of configuring data pipelines.- Possess the following technical skills - SQL, Python, PySpark, Hive, Unix, ETL, Control-M...
...Consultant-Databricks/Spark Developer - ITO073879
With a startup spirit and 115,000+ curious and courageous minds, we have the expertise to go deep with the world’s biggest brands—and we have fun doing it. We dream in digital, dare in reality, and reinvent the ways companies...
...Drive in Hyderabad, Telangana on 4th May (Saturday), 2024, and we believe your skills in Databricks, Data Factory, SQL, and PySpark or Spark align perfectly with what we are seeking.
Experience Level: 3 years to 25 years
Details of the Walk-in Drive:
Date: 4th...
...Knowledge of data science toolkits such as R, NumPy, and MATLAB
Experience with big data analytics technologies such as Spark/Databricks and Hadoop
Experience with data visualization tools such as Power BI and Tableau
Expertise in data mining and machine learning...
...Experience with data technologies like Azure Data Explorer (Kusto), Databricks, Azure HDInsight, Azure Data Lake, Data Factories, Hadoop, or Spark
• Familiarity with NoSQL document stores (e.g. MongoDB, Azure Cosmos DB) and/or graph DBs
• Front-end experience with Angular...
...processing pipelines using AWS services like AWS Lambda, Apache Spark, and Python.
Design and implement data security measures, including... ....
Familiarity with data processing tools like Apache Spark, Hadoop, or Pig.
Strong understanding of data modeling, ETL processes,...
...Science Studio) environment would be a plus
~ Strong experience with Spark using Scala/Python/Java
~ Strong proficiency in building/... ...libraries (pandas, numpy, scikit-learn, etc.)
Experience with Apache Hadoop / Spark (or equivalent cloud-computing/map-reduce framework...
...software systems.
Proficient in Python/PySpark
Proficient in SQL (Spark SQL preferred)
Experience working with large data sets on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred)
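The distributed-computing point above (Spark/Hive/Hadoop) boils down to the map-reduce model: compute partial results per data partition, then merge them. A toy, Spark-free illustration in plain Python follows (the partition contents here are invented):

```python
from collections import Counter
from functools import reduce

# Toy "partitions" standing in for data blocks spread across a cluster.
partitions = [
    ["spark", "hive", "spark"],
    ["hadoop", "spark", "hive"],
]

# Map step: each partition is counted independently (per-executor in Spark).
partial_counts = [Counter(p) for p in partitions]

# Reduce step: merge the partial counts, analogous to Spark's reduceByKey.
word_counts = reduce(lambda a, b: a + b, partial_counts)
```

In actual Spark the map step runs in parallel across executors and the reduce step shuffles partial results by key, but the two-phase structure is the same.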
Experience with Scrum/Agile development methodologies...
Rs 50000 - Rs 100000 per month
...methodologies.
Desirable but not necessary:
Proficiency with Python is preferable.
Understanding of big data technologies like Spark and Hadoop is a plus
Personal Attributes:
Passionate about data problems and projects. Customer obsession and project execution,...