Average salary: Rs 1,556,998 per year
Position Overview
Job Title: Data Engineer (ETL, Big Data, Hadoop, Spark, GCP)
Corporate Title: Assistant Vice President
Location: Bangalore, India
Role Description
The Senior Engineer is responsible for developing and delivering elements of engineering solutions...
8+ years of proven experience with Kafka, PySpark, and the Hadoop platform.
3+ years of hands-on experience with the Hadoop ecosystem (Spark, Scala, Hive, Pig)
Exposure to automation and performance tuning in the Hadoop ecosystem.
Exposure to data ingestion into a data lake on Big Data...
Primary skills: Scala, Spark, Hive, HBase, SQL
Secondary skill: Teradata
Notice period: maximum 30 days
We need candidates with experience in application development and data analytics.
Candidates whose major experience is in data importing, data loading, ...
...Proven leadership skills with experience leading a team of data engineers.
Technical Skills:
Extensive experience in big data tools: Hadoop, Hive, and Spark.
Proficiency in Scala, Python, SQL, and PySpark.
Experience with Unix/Linux systems and shell scripting...
...Big Data (HDFS, Hive, Kafka) testing. Good knowledge of and hands-on experience with SQL and database concepts.
Worked on batch testing, API testing, integration testing, data preparation, and automation across Big Data (Hadoop, HDFS, Hive) as well as SQL and Unix
Open to a contractual role...
...experience with GCP, Python/Java, Spark, SQL, and Big Data is mandatory. Experience: 5 to 12 years
Location: Gurgaon & Bangalore... ...engineering or computer science, or equivalent.
o Deep understanding of Hadoop and Spark architecture and their working principles.
o Deep...
...Hands-on, in-depth knowledge of Big Data ETL testing
Experience with Hadoop Hive is a must
ETL test automation experience
Cloudera experience is good to have
Python/Spark experience is an added advantage
Plan and execute activities in relation to...
...stakeholders across the organization and take complete ownership of deliverables.
~ Experience using big data technologies like Hadoop, Spark, Airflow, NiFi, Kafka, Hive, Neo4j, and Elasticsearch
~ Solid understanding of different file formats like Delta Lake, Avro, and Parquet...
...thinking and problem-solving skills, along with an ability to collaborate.
Technical and Professional Requirements:
Primary skills: Big Data - HDFS, Big Data - Hadoop, Big Data - Hive, Big Data - PySpark, Big Data - Python, Big Data - Scala, Big Data - Spark, Open Source - Apache Kafka
...of the fast-paced, entrepreneurial team that enables batch/real-time analytical solutions leveraging transformational technologies (Hadoop/Big Data; Data Science: R, Python, Scala, Perl; UNIX; shell scripting) to deliver innovative solutions across multiple lines of business...
...distributed computing principles
~ In-depth knowledge of data mining, machine learning, and information retrieval
~ Expertise in Hadoop, Spark, and similar frameworks
~ Knowledge of Lambda architecture, its benefits, and shortcomings
~ Expertise in multiple scripting...
...on in managing the engineering team and owning architecture best practices.
Excellent coding experience with Big Data technologies like Hadoop, Kafka, Hive, Spark, Flink, Storm, etc.
Experience with any two of the cloud services: GCP/AWS/Azure.
Good understanding of cloud...
Rs 6 - 15 lakhs p.a.
...Big Data, Hive, Scala, Spark, HDFS, GCP with Java knowledge. The Big Data developer provides care and feeding of our Big Data environments and their interfaces, built upon technologies in the Hadoop ecosystem including Hive, HBase, Spark, and Kafka.
Collaborate with like-minded...
...jobs, and data integration solutions. You will be working in a dynamic and collaborative environment, leveraging your expertise in Hive, Hadoop, and PySpark to unlock valuable insights from our data.
Key Responsibilities:
Data Ingestion and Integration:
Develop and...
...technical position
- Requires a minimum of 2+ years of experience with Hadoop/ML/large-dataset handling, plus DB/DW experience
- Advanced experience... ...computing environments
- Lead technical discussions on Big Data systems architecture and design
- Strong analysis and troubleshooting...
Job Description: Big Data
Location: Pan India
Responsibilities :
A day in the life of an Infoscion
As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance and issue resolution, and to ensure high customer satisfaction.
You will understand requirements, create...
...Professional Services Role: Big Data Infrastructure Implementation and Support
Experience: 8 to 12 years of experience in Big Data, Apache Hadoop System Implementation, and administration.
Education: BE (IT/Computers), B.Tech, M.Tech, MCA
Work Location: Riyadh, Saudi Arabia...
...Required Qualifications:
~ 3-5 years of experience with Hadoop (required)
~ 5 years of Unix/Linux admin activities related to the Hadoop platform (required)
~ 3-5 years of Spark and Hive (required)
~ Healthcare Industry experience
~ Internal client-facing experience (required...
...identify improvement areas and suggest technology solutions
Knowledge of one or two industry domains
Client Interfacing skills
Project and team management
Technical and Professional Requirements:
Primary skills: Technology - Big Data - Hadoop - Hadoop Administration
...Degree (e.g. Masters, MBA, JD, MD) or 2 years of work experience with a PhD
Design and coding skills with Big Data technologies like Hadoop, Spark, Kafka, Scala, Java, Hive, data APIs, and streaming.
Minimum of 5+ years of experience in building large-scale applications...
...Job Description
Overall Responsibilities:
Lead a team of Big Data (Hadoop) developers and architects to deliver high-quality projects for clients
Manage project scope, timeline, budget, and resources
Act as a technical advisor to clients, providing guidance on...
Please find below the JD:
Primary skills: PySpark, Scala Spark
Should have good technical exposure to the above primary skills/technologies. Good client-interfacing skills; should have been involved in client interactions.
Should be willing to work on implementation & support projects. Above...
...bring your talent and ambition to make a difference. We will create a world of opportunities for you.
Job Details
Job Title : Hadoop Administrator
Location: Bangalore / Pune / Hyderabad / Noida / Kolkata
Quick joiners needed
Minimum Requirement :
Must...
...Hands-on development experience with Hadoop, PySpark, Scala Spark, and distributed computing. 4-6 years of application development and implementation experience.
4 to 6 years' experience designing and developing in Python.
4 to 6 years' experience in Hadoop Platform...
EIC Tech SYS is looking for Hadoop Developer to join our dynamic team and embark on a rewarding career journey.
A Hadoop Developer is responsible for designing, developing, and maintaining big data solutions using Apache Hadoop. Key responsibilities include:
Designing...
...Type of Hire: Experienced (relevant combination of work and education)
Education Desired: Bachelor's Degree
Travel Percentage: 0%
Hadoop Administrator - Bangalore/Pune/Chennai/Gurgaon - 6-12 Years
Are you curious, motivated, and forward-thinking? At FIS you’ll have...
The Team:
The Content Externalization team at S&P Ratings is responsible for data distribution to both external and internal clients through various pipelines.
Each of our employees plays a vital role: uncovering the essential intelligence that our clients rely on day...
We are looking for a Software Engineer to join our IMS Team in Bangalore. This is an amazing opportunity to work on the Big Data technologies involved in content ingestion. The team consists of 10-12 engineers and reports to the Sr. Manager. We have a great skill set in ...
Job Description:
- Must have working experience designing, building, installing, configuring, and supporting Hadoop.
- Good to have: Teradata, Cloud & Snowflake knowledge.
- Must have working experience with IntelliJ IDEA, AutoSys (Control-M), WinSCP, PuTTY & GitHub.
- Translate...