Average salary: Rs 1,795,863 per year
Search Results: 200 vacancies
...Professional Services Role - Big Data Infrastructure Implementation and Support
Experience: 8 to 12 years of experience in Big Data, Apache Hadoop System Implementation, and administration.
Education: BE (IT/Computers), B.Tech, M.Tech, MCA
Work Location: Riyadh, Saudi Arabia...
...experience with RedHat Linux and Cloudera is mandatory.
- Experience installing, configuring, upgrading, managing, and administering Cloudera Hadoop
- Responsible for deployments and for monitoring capacity, performance, and troubleshooting issues
- Work closely with data scientists...
Job Description:
1. Technologies: Hive, Pig, Sqoop, ZooKeeper, Cloudera - these are Hadoop ecosystem tools
2. Candidate should have worked on at least one production project; understanding or theoretical knowledge alone is not enough
3. Should have basics of Spark and Kafka -- at...
...datasets
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management
- Big data tools like Hadoop, Spark, Kafka, etc.
- Message queuing, stream processing, and highly scalable big data stores
- Building and optimizing big...
Job Description:
- 4-7 years of experience in DE development
- Hadoop ecosystem (HDFS, Hive, YARN, file formats like Avro/Parquet)
- Python programming language is mandatory
- PySpark
- Excellent with SQL
- Airflow is a plus
- Good aptitude...
...Role - Hadoop and Big Data Administrator Location - Indore, MP/Noida
Years of Experience required - 2-5 Years
Job Description-
You will work in a multi-functional role with a combination of expertise in System and Hadoop administration. You will work in a team that...
...Solutions Architect (Data & BI) with around 10-15 years of experience in the following areas.
Mandatory Skills:
- Tableau, Big Data, Hadoop, Data Warehousing, Legacy BI Migrations, Cloud technologies
- Experienced in BI implementation and BI migration involving Tableau
Preferred...
...volumes of structured and unstructured data.- Implement data processing solutions using distributed computing frameworks such as Apache Hadoop, Apache Spark, or Apache Flink.- Develop and maintain ETL (Extract, Transform, Load) processes to ensure data quality and consistency...
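The ETL responsibilities named in the listing above can be sketched in miniature. This is a toy illustration only: the CSV sample, column names, and quality rule are invented, and SQLite stands in for a real warehouse or distributed store.

```python
import csv
import io
import sqlite3

# Hypothetical raw extract; in practice this would come from files on
# HDFS, object storage, or an upstream system.
raw = "id,city,temp_c\n1,Pune, 31 \n2,Indore,\n3,Noida,28\n"

# Extract: parse the delimited input into records.
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: trim whitespace and enforce a simple consistency rule
# (temperature must be present), dropping records that fail it.
clean = []
for r in rows:
    t = r["temp_c"].strip()
    if t:
        clean.append((int(r["id"]), r["city"].strip(), float(t)))

# Load: write the validated records to the target store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER, city TEXT, temp_c REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", clean)
```

In a Spark or Flink pipeline the same extract/transform/load shape applies; only the engine and the I/O connectors change.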
...or another quantitative field. They should also have experience using the following software/tools:- Experience with big data tools: Hadoop, Spark, Kafka, etc.- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.- Experience with data pipeline...
...candidate should have a minimum of 3 years of experience in big data engineering and possess strong expertise in Python, SQL, PySpark, Hadoop, Hive, Delta Lake, and Airflow.
Responsibilities:
- Design, develop, and maintain large-scale data processing systems
- Implement data...
...and testing using AWS DynamoDB, EKS, Kafka, and Kinesis/Spark/Streaming/Python to enable seamless data ingestion and processing on the Hadoop platform.
Data Governance and Data Discovery on Cloud Platform
Data processing framework using Spark, Glue, PySpark, Kinesis...
...necessary.
# Data Wrangling - Should be comfortable fetching data with SQL and Python (NumPy, pandas, Matplotlib, PySpark, etc.) from Hadoop and SQL Server. Able to work with large datasets efficiently. Should have a knack for taking care of the quality of...
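The wrangling skills in the listing above amount to fetch, clean, and aggregate. A minimal pandas sketch, with an invented sample standing in for data actually fetched from Hadoop or SQL Server:

```python
import numpy as np
import pandas as pd

# Hypothetical records; a real job would read these from Hive, HDFS,
# or SQL Server rather than constructing them inline.
df = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3],
    "amount": [10.0, np.nan, 5.0, 7.5, 2.5],
})

# Quality step: drop rows with missing amounts before aggregating.
clean = df.dropna(subset=["amount"])

# Aggregate per user.
totals = clean.groupby("user_id")["amount"].sum()
```

The same pattern scales up via PySpark (`dropna`, `groupBy().sum()`) when the dataset no longer fits on one machine.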
...for analytic systems.
Job Description
. Develop high-quality, secure, and scalable data pipelines using Spark and Scala/Python on Hadoop or object storage.
. Leverage new technologies and approaches to innovate with increasingly large data sets.
. Drive automation...
...Capabilities
•Experience with Artifactory and Jenkins for deployment of Production code
•Knowledge of and exposure to Big Data technologies in the Hadoop stack, such as HDFS, Hive, Impala, Spark, etc., and cloud Big Data warehouses - Redshift, Databricks.
•Exposure to AWS technologies...
...on Databricks and related services/functionalities and how to utilize them across the DE & Analytics spectrum
~ Strong knowledge of Hadoop, Hive, Databricks, and RDBMSs like Oracle, Teradata, SQL Server, etc.
~ Expectation is to write SQL to query metadata and tables from different...
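Querying metadata as well as tables, as the listing above expects, looks like this in a minimal sketch. SQLite is used here purely as a stand-in: its `sqlite_master` catalog plays the role that dictionaries such as `information_schema` (SQL Server) or `ALL_TABLES` (Oracle) play in the engines actually named; the table and values are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders (total) VALUES (19.99), (5.00)")

# Metadata query: list the tables the engine knows about.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

# Ordinary table query against one of those tables.
(count, grand_total), = conn.execute(
    "SELECT COUNT(*), SUM(total) FROM orders")
```

The useful habit is the same across engines: interrogate the catalog first, then write the query, rather than assuming table shapes.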
...deploying, administrating, tuning, monitoring, and maintaining database technologies including but not limited to MongoDB, Oracle NoSQL, and Hadoop (Cloudera/Hortonworks). Solid experience in database tuning, design, security, backup, recovery, and archival concepts. A record of...
...regression, deep learning, NLP, etc.)
Experience with a ML/data-centric programming language (such as Python, Scala, or R) and ML libraries (pandas, numpy, scikit-learn, etc.)
Experience with Apache Hadoop / Spark (or equivalent cloud-computing/map-reduce framework...
...on AWS/Azure cloud data stores and its DB/DW-related service offerings.
have knowledge and experience of Big Data technologies (Hadoop ecosystem) and NoSQL databases.
have technical expertise and working experience in at least 2 Reporting tools among Power BI, Tableau...
...connectors, etc.
Strong experience with Apache Spark, Spark Streaming, and Spark SQL.
Solid understanding of distributed systems, Databases, System design, and big data processing frameworks.
Familiarity with Hadoop ecosystem components (HDFS, Hive, HBase) is a plus....
Rs 15 - 18 lakhs p.a.
...functional team. Excellent communication and collaboration skills
Bonus points for: Experience working with big data platforms (Spark, Hadoop, etc.)
Experience with cloud platforms (AWS, GCP, Azure, etc.) Experience with A/B testing and experimentation methodologies...