Search Results: 441 vacancies
Skills: Hadoop, Python, Spark, PySpark, ETL (Extract, Transform, Load)
Roles & Responsibilities:
- Data Ingestion: Develop and maintain data pipelines for ingesting raw data from various sources into the Hadoop ecosystem.
- Data Processing: Utilize Python and Spark to process...
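The ingestion-and-processing duties above follow the classic extract-transform-load shape. A minimal plain-Python sketch of that pattern (no Hadoop/Spark dependency; the field names and the in-memory "warehouse" sink are hypothetical illustrations, not part of any listing):

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Parse raw CSV text into row dicts (the 'extract' step)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Normalize types and drop incomplete rows (the 'transform' step)."""
    cleaned = []
    for row in rows:
        if not row.get("user_id") or not row.get("amount"):
            continue  # skip rows missing required fields
        cleaned.append({"user_id": row["user_id"], "amount": float(row["amount"])})
    return cleaned

def load(rows: list[dict], sink: list) -> None:
    """Append processed rows to a sink (stand-in for HDFS/Hive)."""
    sink.extend(rows)

raw = "user_id,amount\nu1,10.5\nu2,\nu3,4.0\n"
warehouse: list[dict] = []
load(transform(extract(raw)), warehouse)
# warehouse now holds the two complete rows (u1 and u3)
```

In a real pipeline each step would read from and write to distributed storage; the three-function shape is what stays the same.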
...engineering, with a strong focus on large-scale data platforms and data products.
- Strong experience with big data technologies such as Hadoop, Spark, and Hive
- Proficiency in either the Scala or Java programming language
- Experience leading and managing teams of data engineers...
...Have a good day!
We have an immediate opportunity for a Hadoop Data Engineer (4 to 8 years of experience).
Job Role: Hadoop Data Engineer... ...data tech stack
Should have 6+ years of experience in PySpark and Scala + Spark; Scala should be the primary coding language.
Proficient...
Rs 12 - 16 lakhs p.a.
...process and application requirements.
Must-have Skills: Apache Spark
Good-to-have Skills: Microsoft SQL Server, Unix-to-Linux Migration...
...actions, Spark configuration and tuning techniques
B: Knowledge of Hadoop architecture: execution engines, frameworks, application tools
C...
Rs 12 - 16 lakhs p.a.
...Design, build and configure applications to meet business process and application requirements.
Must-have Skills: Apache Spark
Good-to-have Skills: Hadoop
Job Requirements / Key Responsibilities:
A: Create Scala/Spark jobs for data transformation and aggregation
B: Write...
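Responsibility A above (Scala/Spark jobs for data transformation and aggregation) boils down to a group-by-and-aggregate. A dependency-free Python sketch of that logic (in Spark this would be roughly `df.groupBy("key").agg(sum("value"))`; the sales data here is hypothetical):

```python
from collections import defaultdict

def aggregate_by_key(records: list[tuple[str, float]]) -> dict[str, float]:
    """Sum values per key -- the shape of a Spark groupBy/agg job."""
    totals: dict[str, float] = defaultdict(float)
    for key, value in records:
        totals[key] += value
    return dict(totals)

sales = [("north", 100.0), ("south", 50.0), ("north", 25.0)]
print(aggregate_by_key(sales))  # {'north': 125.0, 'south': 50.0}
```

Spark distributes exactly this computation across partitions (partial sums per executor, then a merge), which is why per-key aggregation scales well.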
Mandatory Skills:
- 4+ years of hands-on experience with Hadoop and system administration, with sound knowledge of Unix-based operating system internals... ...in setting up services like YARN, HDFS, ZooKeeper, Hive, Spark, HBase, etc.
- Willing to work in 24x7 rotating shifts, including weekends...
...Greetings From Maneva!
Job Description
Job Title: Cloudera Hadoop Architect
Location: Chennai / Pune / GNDC
Experience: 8-15 Years
Relevant Experience: 10 Years
Job Requirements:
Strong knowledge and hands-on experience...
...Position Overview
Job Title: BigData-Hadoop/Scala Engineer
Corporate Title: Associate
Location: Pune – Business Bay
Role Description... ...with the MapReduce framework.
Experience with Hive, Pig, Impala, Spark SQL, etc.
Experience with Apache Spark and Scala (preferably...
...scale distributed data processing systems with one or more technologies such as MS SQL Server, PostgreSQL, MongoDB, Cassandra, Redis, Hadoop, Spark, Hive, Teradata, or MicroStrategy- A high-level understanding of automation in a cloud environment: AWS experience preferred.-...
...Open Source, data-intensive, distributed environments, with a minimum of 5 years of experience in big data technologies like Spark, Hive, HBase, Hadoop, etc.
Programming background (mandatory): Scala, Spark, and Java/Python
- Experience in the following technologies -...
...maintaining large-scale data processing pipelines using Python and Spark/PySpark.
- Your expertise in distributed computing frameworks and... ...tasks (if applicable).
- Manage and maintain data pipelines on Hadoop infrastructure.
- Develop and integrate RESTful APIs for data access...
...Platform, including BigQuery, Cloud Storage, Cloud Composer, DataProc, Dataflow, Pub/Sub.
- Experience with Big Data tools such as Hadoop and Apache Spark (PySpark)
- Experience developing DAGs in Apache Airflow 1.10.x or 2.x
- Good problem-solving skills
- Detail-oriented
- Strong...
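Several listings, like the one above, ask for experience developing DAGs in Apache Airflow. The core idea is that tasks run in dependency (topological) order; a stdlib-only sketch of that ordering (task names are hypothetical, and a real Airflow DAG would declare `airflow.DAG` objects with operators instead):

```python
from graphlib import TopologicalSorter

# Map each task to the set of tasks it depends on (its upstream tasks),
# mirroring Airflow's `upstream >> downstream` wiring.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load", "transform"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load', 'report']
```

An Airflow scheduler does the same resolution, plus retries, backfills, and cron-style scheduling on top of it.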
Rs 12 - 20 lakhs p.a.
...Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have Skills: Apache Spark
Good-to-have Skills: No Technology Specialization
Job Requirements / Key Responsibilities:
1. Set up and configure Databricks for an...
Job Description:
- 4-7 years of experience in data engineering (DE) development
- Hadoop ecosystem (HDFS, Hive, YARN, file formats like Avro/Parquet)
- Python programming language is mandatory
- PySpark
- Excellent with SQL
- Experience with Airflow is a plus
- Good aptitude...
...understanding of the mechanism necessary to successfully implement a change.
Good to have:
- Experience with data management and data lineage tools like Collibra, Alteryx and Solidatus
- Experience in Data Vault modeling
- Knowledge of Hadoop and Google BigQuery is a plus (ref:hirist.tech)
...developing, and maintaining our data infrastructure and analytics solutions. The ideal candidate will have strong expertise in Hadoop, Scala, and Spark, along with a proven track record of building scalable and efficient data pipelines.
Key Responsibilities:
- Design, develop,...
...designs.
- Develop real-time data ingestion and stream-analytics solutions leveraging technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python and the Hadoop platform.
- Utilize multiple development languages/tools such as Python, Spark, Hive, Presto, and Java to build...
...Location
Offer in hand if any:
PAN Card No:
Notice period/how soon you can join:
- Should have 4-8 years of experience in Java and Spark SQL, with a Unix/Linux background.
- Should have good knowledge of Oracle Database SQL queries and views.
- Experienced to...
...Experience working with distributed technology tools, including Spark, Presto, Scala, Python, Databricks, Airflow.
- Developed PySpark... ...jobs for EMR. Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR (MapR distribution).
- Developed Python and...
...tuning, and deploying the apps to the production environment.
Should have good working experience with:
- Hadoop ecosystem (HDFS, Hive, YARN, file formats like Avro/Parquet)
- Spark batch processing
- Setting up ETL pipelines
- Python or Java programming language is mandatory -...