Average salary: Rs 437,999 per year
Search Results: 300 vacancies
...Technologies should be Hive, Pig, Sqoop, ZooKeeper, Cloudera (these are Hadoop ecosystem tools). 2. Candidate should have worked in at least one... ...understanding or theoretical knowledge. 3. Should have basic knowledge of Spark and Kafka, with at least good POC experience. At least basic...
...Build processes supporting data transformation, data structures, metadata, dependency and workload management. - Big data tools like Hadoop, Spark, Kafka, etc. - Message queuing, stream processing, and highly scalable "big data" data stores. - Building and optimizing "big data"...
Rs 12 - 16 lakhs p.a.
...process and application requirements. Must Have Skills: Apache Spark. Good to Have Skills: Microsoft SQL Server, Unix to Linux Migration... ...actions, Spark configuration and tuning techniques. B: Knowledge of Hadoop architecture: execution engines, frameworks, application tools. C...
Rs 7 - 11 lakhs p.a.
...process and application requirements. Must Have Skills: Apache Spark. Good to Have Skills: Job Requirements: Key Responsibilities: A: Strong... ...query tuning and performance optimization. B: Good understanding of Hadoop architecture and distributed systems, e.g. CAP theorem, partitioning...
Rs 12 - 16 lakhs p.a.
...Design, build and configure applications to meet business process and application requirements. Must Have Skills: Apache Spark. Good to Have Skills: Hadoop. Job Requirements: Key Responsibilities: A: Create Scala/Spark jobs for data transformation and aggregation. B: Write...
...understanding of the mechanism necessary to successfully implement a change. Good to have: - Experience in data management and data lineage tools like Collibra, Alteryx and Solidatus. - Experience in Data Vault modeling. - Knowledge of Hadoop and Google BigQuery is a plus. (ref:hirist.tech)
...- 2 years of experience working with cloud platforms (e.g., Microsoft Azure, AWS, GCP). - 2 years of experience in Python, Databricks/Hadoop. - 2 years of experience in MLOps. Good to have: - Curiosity, consultative mindset, and eagerness to explore new technologies. - Ownership...
...position include: - Provides technical leadership in the Big Data space (Hadoop stack like MapReduce, HDFS, Pig, Hive, HBase, Flume, Sqoop, NoSQL... ...Engineer: Desired skills for lead data engineer include: - Python - Spark - Java - Hive - SQL - Hadoop architecture - Large-scale search...
Job Description: - 4-7 years of experience in Data Engineering development. - Hadoop ecosystem (HDFS, Hive, YARN, file formats like Avro/Parquet). - Python programming language is mandatory. - PySpark. - Excellent with SQL. - Experience with Airflow is a plus. - Good aptitude...
We are seeking a skilled AWS Hadoop Administrator to join our team and manage our Hadoop infrastructure on the AWS platform. In this role... ...ecosystem on AWS, such as EMR, S3, EC2, IAM policies, MWAA/Airflow, Hadoop, YARN, Spark, MLflow. Experience with Linux environments and Bash...
Rs 7 - 15 lakhs p.a.
...Description: Design, build and configure applications to meet business process and application requirements. Must Have Skills: Apache Spark. Good to Have Skills: Python Programming Language. Job Requirements: Key Responsibilities: a) Lead the team and contribute to the...
...tuning, and deploying the apps to the production environment. Should have good working experience with: - Hadoop ecosystem (HDFS, Hive, YARN, file formats like Avro/Parquet) - Spark batch processing - Setting up ETL pipelines - Python or Java programming language is mandatory. - Worked...
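The batch extract-transform-load pattern these listings keep asking for can be sketched in plain Python (Spark-free, purely for illustration; all function names, column names, and the sample data here are hypothetical, with SQLite standing in for the real data store):

```python
import csv
import io
import sqlite3

def extract(csv_text):
    """Extract: parse raw CSV rows (a stand-in for reading HDFS/Avro/Parquet)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: cast types and silently drop malformed records."""
    out = []
    for r in rows:
        try:
            out.append({"id": int(r["id"]), "amount": float(r["amount"])})
        except (KeyError, ValueError):
            continue  # bad row, skip it
    return out

def load(rows, conn):
    """Load: write the cleaned rows into a SQL table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (:id, :amount)", rows)

raw = "id,amount\n1,10.5\n2,oops\n3,4.0\n"
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
print(total)  # the malformed 'oops' row is dropped before loading
```

In a real Spark pipeline the same three stages map to reading a DataFrame, applying transformations, and writing to a sink, with an orchestrator such as Airflow scheduling the run.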
Rs 20 - 21 lakhs p.a.
...Responsibilities
Roles And Responsibilities Of Cloudera & Spark Developer
Secure data management and portable cloud-native data analytics... ...Enterprise includes CDH, the world’s most popular open source Hadoop-based platform, as well as advanced system management and data...
...Must have Skills: Apache Spark
Good to Have Skills: Data Warehouse ETL Testing
Key Responsibilities:
A: The resource will write and review complex SQL statements
B: The resource will work on ETL, preferably on OWB (Oracle Warehouse Builder)
C: The resource will work on...
...Mumbai, Pune, Nagpur, Indore, Delhi/NCR, Ahmedabad. Job Description: - 6+ years of overall Data Analytics and BI experience. - Experience in Spark, Hive, Scala. - Build data pipelines for ETL that fetch data from a variety of sources such as flat files, relational databases, and APIs. -...
Rs 12 - 20 lakhs p.a.
...Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must Have Skills: Apache Spark. Good to Have Skills: No Technology Specialization. Job Requirements: Key Responsibilities: 1. Set up and configure Databricks for an...
...Skills: Python, PySpark, Azure Databricks, Data Factory, Data Lake, SQL. - Deep knowledge and experience working with Python/Scala and Spark. - Experienced in Azure Data Factory, Azure Databricks, Azure Data Lake, Blob Storage, Delta Lake, Airflow. - Experience working with...
Key Responsibilities:
- Develop, implement, and maintain Spark applications for data processing and analytics.
- Design and optimize Spark jobs for performance and scalability.
- Implement Delta Lake solutions for efficient data storage and management.
- Build streaming solutions for...
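The Delta Lake responsibility above centres on transactional upserts (what Delta Lake's `MERGE INTO` provides: update rows that match on a key, insert the rest). A toy Spark-free sketch of that merge semantics, with hypothetical table data and a hypothetical helper name:

```python
def merge_upsert(target, updates, key="id"):
    """Toy stand-in for Delta Lake MERGE: rows in `updates` overwrite
    matching `target` rows on `key`; unmatched rows are inserted."""
    by_key = {row[key]: dict(row) for row in target}
    for row in updates:
        # merge the update into the existing row, or insert it fresh
        by_key[row[key]] = {**by_key.get(row[key], {}), **row}
    return sorted(by_key.values(), key=lambda r: r[key])

target = [{"id": 1, "qty": 5}, {"id": 2, "qty": 7}]
updates = [{"id": 2, "qty": 9}, {"id": 3, "qty": 1}]
merged = merge_upsert(target, updates)
print(merged)  # id 2 updated, id 3 inserted, id 1 untouched
```

Real Delta Lake does this atomically against Parquet files with a transaction log; the dictionary merge here only illustrates the matched-update/unmatched-insert rule.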
Job Description: Technical Expertise: - Should have experience working with Microsoft Azure tools like Spark, Databricks, Synapse (knowledge of these will be an added advantage). - Should be very strong on BI and EDWH concepts. - Must have good experience working on Microsoft...
...Proven real-time exposure and use of contemporary data mining, cloud computing, and data management ecosystems like Google Cloud, Hadoop, HDFS, and Spark. - Proficient in Data Modelling that can represent complex data structures while ensuring accuracy, consistency, and...