Search Results: 97 vacancies
Job Description: Hadoop/Omnia Dev Engineers. To develop and deliver code for the work assigned in accordance with time, quality and cost standards. Job responsibilities: - Interact with business stakeholders and designers to understand and implement business requirements. - Hadoop...
...with 5+ Years on PySpark/NoSQL is Mandatory) 1. Person should be strong in PySpark 2. Should have hands-on experience in the MWAA (Airflow) / AWS EMR (Hadoop, Hive) framework 3. Hands-on, working knowledge of Python 4. Knowledge of AWS services like EMR, S3, Lambda, Step Functions, Aurora -...
Skills: Hadoop, Python, Spark, PySpark, ETL (Extract, Transform, Load)
Roles & Responsibilities:
- Data Ingestion: Develop and maintain data pipelines for ingesting raw data from various sources into the Hadoop ecosystem.
- Data Processing: Utilize Python and Spark to process...
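The ingest-process-load pattern this posting names can be sketched in plain Python (a real deployment would run each stage as a PySpark job writing to HDFS; all function and field names here are hypothetical):

```python
# Minimal sketch of the data-pipeline pattern from the posting: ingest raw
# records, process them, load them into a sink. Pure Python stands in for
# PySpark/HDFS; every name below is illustrative only.

def extract(raw_records):
    """Ingest: split raw CSV-style lines from a landing zone into fields."""
    return [line.strip().split(",") for line in raw_records if line.strip()]

def transform(rows):
    """Process: keep valid rows and normalize the amount field to float."""
    out = []
    for user, amount in rows:
        try:
            out.append({"user": user, "amount": float(amount)})
        except ValueError:
            pass  # drop malformed records rather than failing the pipeline
    return out

def load(records, sink):
    """Load: append processed records to the target store (a list here)."""
    sink.extend(records)
    return len(records)

sink = []
raw = ["alice,10.5", "bob,oops", "carol,3"]
loaded = load(transform(extract(raw)), sink)  # bad row "bob,oops" is dropped
```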
Rs 15 - 20 lakhs p.a.
...Role Description : Design, build and configure applications to meet business process and application requirements. Must have Skills: Hadoop. Good to Have Skills: Banking Strategy. Job Requirements: 1: Responsibilities: A: Understanding the requirements of input to output transformations...
Job Description
Company Description
Jobs for Humanity is collaborating with FIS Global to build an inclusive and just employment ecosystem. We support individuals coming from all walks of life.
Company Name: FIS Global
Job Description
Position Type: Full...
...software systems and building them in a way that is scalable, maintainable, and robust
Experience in designing application solutions in the Hadoop ecosystem
Deep understanding of the concepts in Hive, HDFS, YARN, Spark, Spark SQL, Scala and PySpark
HDFS file formats and...
...Responsibilities
Design, develop, and deploy scalable Big Data applications using Hadoop ecosystem technologies (Hadoop, HDFS, Hive, etc.).
Collaborate with data scientists and business analysts to understand requirements and translate them into technical solutions...
..., and Spark Streaming.
~ Proficiency in working with large-scale distributed systems and big data technologies.
~ Experience with Hadoop ecosystem components such as HDFS, YARN, and Hive is a plus.
~ Knowledge of software development methodologies such as Agile or Scrum...
...Airflow for orchestrating and scheduling ETL workflows.
Big Data Technologies: Familiarity with big data technologies like Spark, Hadoop, and related frameworks.
Data security best practices and compliance standards.
Data Modelling: Understanding of data...
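The Airflow requirement above comes down to scheduling ETL tasks in dependency order over a DAG. A minimal stdlib sketch of that ordering (the task names and dependencies are hypothetical; Airflow itself is not imported):

```python
from graphlib import TopologicalSorter

# Hypothetical ETL DAG: each task maps to the set of tasks it depends on,
# mirroring how Airflow derives run order from declared task dependencies.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"extract"},
    "load": {"transform", "validate"},
}

# A valid execution order: every task appears after all of its upstreams.
order = list(TopologicalSorter(dag).static_order())
```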
Rs 15 lakh p.a.
...order with respect to data structure, availability, scalability and accessibility.
Implementing real-time analytics use-cases on the Hadoop ecosystem.
Providing expertise on the various components & features of the Hadoop ecosystem (such as Spark, Map/Reduce, YARN, Hive,...
...region applications, especially using load balancing, API gateways, using distributed processing platforms (like AWS EMR, Kafka, Spark, or Hadoop) and Kubernetes or serverless container fabrics (like AWS Fargate)
Application Programming in at least two of the following: C#/.NET...
...techniques, with practical experience in applying them to real-world problems.
Experience working with big data technologies (e.g., Hadoop, Spark) and cloud computing platforms (e.g., AWS, Azure, GCP).
Excellent problem-solving skills and the ability to think creatively...
...process and tools.
● Clear understanding and experience with Python and PySpark, or Spark and Scala, along with Hive, Airflow, Impala, Hadoop, and RDBMS architecture.
● Experience in writing Python programs and SQL queries.
● Experience in SQL Query tuning.
● Experienced...
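The SQL query tuning experience asked for above often starts with one concrete step: adding an index so a filtered lookup stops scanning the whole table. A small SQLite illustration of the before/after query plan (table and column names are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, f"cust{i % 100}") for i in range(1000)])

query = "SELECT id FROM orders WHERE customer = 'cust7'"

# Before tuning: the plan shows a full table scan for the filter.
before = con.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

# Tuning step: an index on the filtered column lets SQLite seek instead.
con.execute("CREATE INDEX idx_orders_customer ON orders(customer)")
after = con.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
```

The same scan-vs-seek reasoning carries over to Hive/Spark tuning, where partitioning and predicate pushdown play the role the index plays here.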
...Experience in troubleshooting issues related to data and infrastructure.
- Experience with big data technologies such as: Hadoop, Hive, Spark, EMR
- Experience with any ETL tool, like Informatica, ODI, SSIS, BODI, DataStage, etc.
- Knowledge of distributed...
...and retrieval of data in a data lake environment. Integrate web applications with data lake solutions using technologies such as Apache Hadoop, Apache Spark, or Amazon S3.
Data Pipeline Development: Develop and maintain data pipelines to ingest, process, and analyze large...
...Code, Unit test, Hive, HDFS, Kafka and Spark (Scala/PySpark).
· Build libraries, user-defined functions, and frameworks around Hadoop/Spark.
· Exposure to cloud platform such as AWS or equivalent is desired.
· Develop user-defined functions to provide custom Hive...
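A custom Hive/Spark UDF like the one this posting mentions is, at its core, a plain per-row function. A sketch of the Python side, with the PySpark registration shown only as a comment since Spark is assumed to be present (the function and column names are hypothetical):

```python
def mask_account(acct: str) -> str:
    """Hypothetical UDF body: mask all but the last 4 characters."""
    if not acct:
        return ""
    return "*" * max(len(acct) - 4, 0) + acct[-4:]

# With PySpark, this plain function would typically be registered as a UDF:
#   from pyspark.sql.functions import udf
#   from pyspark.sql.types import StringType
#   mask_udf = udf(mask_account, StringType())
#   df = df.withColumn("acct_masked", mask_udf(df["acct"]))

masked = [mask_account(a) for a in ["1234567890", "42"]]
```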
...platforms - AWS/GCP/Azure. - Experience with Modern Data stack tools like Airflow, Terraform, dbt, Glue, Dataproc etc. - Exposure to Hadoop & Shell scripting a plus. Responsibilities: - Design, implementation, and improvement of processes & automation of Data infrastructure -...
...project architecture, parallelism, meta-programming, vectors, performance tuning, debugging, SMP/MPP architectures, platform integrations (Hadoop, Java, C, etc.).
• Ability to work in collaborative teams located across the globe and cross-functional technologies.
•...
Rs 7 - 11 lakhs p.a.
...data processing pipeline. Technical Experience: A) Experience in Spark query tuning and performance optimization. B) Good understanding of Hadoop architecture and distributed systems (e.g., CAP theorem, partitioning, replication, consistency and consensus). Professional Experience: A) Ability...
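The partitioning named alongside the CAP theorem above can be illustrated with the rule Spark's HashPartitioner applies: a record's key hashes to a partition index, so equal keys always land together. A pure-Python sketch (key values and partition count are hypothetical):

```python
def partition_for(key, num_partitions: int) -> int:
    """Hash partitioning as in Spark's HashPartitioner: hash(key) mod the
    partition count. Python's % with a positive modulus is non-negative,
    so every key maps to a valid partition index."""
    return hash(key) % num_partitions

keys = ["user1", "user2", "user1"]
placement = {k: partition_for(k, 4) for k in keys}

# Equal keys are co-located: the property shuffles and joins rely on.
same_key_same_partition = placement["user1"] == partition_for("user1", 4)
```

Note that Python salts `hash()` for strings per process, so partition numbers differ between runs; within one run (one job), the mapping is stable, which is all the co-location property requires.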