Search Results: 126 vacancies
...Experience: 7-10 Years
Location: Anywhere in India
Education: BE, B.Tech, Any Tech Graduate
Must-Have Technical Skills: including 3+ years Spark or Scala; 2+ years of Hadoop/Big Data using tools like Hive, Spark, PySpark, Scala, and RDBMS/SQL
Strongly Preferred: GCP, including...
Job Description: Senior Data Engineer
Location: Hyderabad, Chennai, Bangalore
Experience: 4 to 10 Years
Primary Skills:
- Hadoop
- Spark (Mandatory)
- Scala
Secondary Skills:
- Python
- SQL
Role Summary/Purpose: The Senior Data Engineer will join our dynamic scrum teams to perform functional...
Rs 7 - 11 lakhs p.a.
...Description: Design, build and configure applications to meet business process and application requirements.
Must Have Skills: Apache Spark
Good To Have Skills:
Job Requirements:
Key Responsibilities: Strong experience in creating Scala Spark jobs for data transformation...
Rs 12 - 16 lakhs p.a.
...Description: Design, build and configure applications to meet business process and application requirements.
Must Have Skills: Apache Spark
Good to Have Skills: Microsoft SQL Server, Unix to Linux Migration, Data Warehouse ETL Testing, Amazon Web Services
Job Requirements:...
Skills:
- Hadoop
- Python
- Spark
- PySpark
- ETL (Extract, Transform, Load)
Roles & Responsibilities:
- Data Ingestion: Develop and maintain data pipelines for ingesting raw data from various sources into the Hadoop ecosystem (see the sketch after this listing).
- Data Processing: Utilize Python and Spark to process...
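As a rough illustration of the ingestion-and-processing pattern this listing describes, here is a minimal PySpark sketch; the paths, column names, and job name are hypothetical, not taken from the posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical ingestion job: pick up raw CSV files landed by upstream
    # sources and write cleaned Parquet into the Hadoop ecosystem (HDFS).
    spark = (SparkSession.builder
             .appName("raw-ingest")  # illustrative job name
             .getOrCreate())

    raw = (spark.read
           .option("header", "true")
           .option("inferSchema", "true")
           .csv("hdfs:///data/raw/events/"))  # assumed landing path

    cleaned = (raw.dropDuplicates()
                  .withColumn("load_date", F.current_date()))

    # Partition by load date so downstream Hive/Spark jobs can prune reads.
    (cleaned.write
            .mode("append")
            .partitionBy("load_date")
            .parquet("hdfs:///data/processed/events/"))  # assumed output path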
...Design, develop, and maintain data processing pipelines using Apache Spark and Scala.
Optimize Spark jobs for performance, scalability, and reliability (a tuning sketch follows this listing).
Work closely with data engineers and data scientists to implement data-driven solutions.
Develop and maintain...
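This listing asks for Scala, but the tuning ideas behind "optimize Spark jobs" (broadcast joins, shuffle control, caching reused results, output coalescing) are language-neutral; here is a sketch in PySpark for illustration, with hypothetical table paths and a made-up key column.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

    facts = spark.read.parquet("hdfs:///warehouse/facts")  # large table (assumed)
    dims = spark.read.parquet("hdfs:///warehouse/dims")    # small lookup (assumed)

    # Broadcast the small side so the join avoids a full shuffle.
    joined = facts.join(F.broadcast(dims), "dim_id")

    # Repartition on the grouping key for balanced shuffles, and cache
    # because the intermediate result feeds two aggregations.
    base = joined.repartition(200, "dim_id").cache()

    daily = base.groupBy("dim_id", "event_date").count()
    totals = base.groupBy("dim_id").count()

    # Coalesce small outputs to avoid writing thousands of tiny files.
    daily.coalesce(16).write.mode("overwrite").parquet("hdfs:///out/daily")
    totals.coalesce(1).write.mode("overwrite").parquet("hdfs:///out/totals")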
...Responsibilities
Design, develop, and deploy scalable Big Data applications using Apache Spark.
Collaborate with data scientists and business analysts to understand requirements and translate them into technical solutions.
Write efficient and optimized code...
...on the Azure platform. Collaborate with the data engineering team to ensure data quality and performance.
Good to have: knowledge of Spark concepts.
Knowledge of AWS services like S3, Lambda, and CloudWatch.
Any relational database (3 to 6 years) and SQL experience...
...Experience with cloud computing platforms such as AWS, Azure, or Google Cloud.
~ Familiarity with big data technologies such as Hadoop, Spark, etc.
~ Contributions to open-source projects related to AI/NLP
Bachelor's degree in Computer Science, Engineering, or related field...
...role is for you. We collect billions of events a day, manage petabyte-scale data on Redshift and S3, and develop data pipelines using Spark/Scala on EMR, SQL-based ETL, Airflow, and Java services (a DAG sketch follows this listing).
We are looking for a talented, enthusiastic, and detail-oriented Data Engineer,...
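A minimal sketch of the Airflow-orchestrated Spark ETL this team describes, assuming Airflow 2.x; the DAG id, JAR path, main class, and SQL step are invented for illustration.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Hypothetical daily pipeline: a Spark job on EMR followed by SQL-based ETL.
    with DAG(
        dag_id="events_etl",  # illustrative name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        spark_transform = BashOperator(
            task_id="spark_transform",
            # JAR location and main class are assumptions, not from the listing.
            bash_command=(
                "spark-submit --deploy-mode cluster "
                "--class com.example.EventsJob "
                "s3://example-bucket/jobs/events.jar {{ ds }}"  # run date passed in
            ),
        )
        load_warehouse = BashOperator(
            task_id="load_warehouse",
            # Placeholder for the SQL-based ETL step the listing mentions.
            bash_command="psql -f load_events.sql",
        )
        spark_transform >> load_warehouse

BashOperator keeps the sketch dependency-free; real deployments often use provider-specific EMR or Spark-submit operators instead.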
...Job description
Role: Azure Data Engineer
Experience: 3-5 Years
Skill set:
PySpark / Scala Spark, Python, Azure Databricks and Azure Data Factory
Roles and Responsibilities:
Should have programming skills with the ability to write optimized and...
...Please refer to the JD below:
# Lead candidate with a minimum of 8 years of experience.
# Strong knowledge of Scala Spark or PySpark (Python Spark)
# Knowledge of writing PySpark or equivalent code to process files using loops (see the sketch after this listing).
# Working knowledge of Databricks...
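For the "process files using loops" requirement, a common PySpark pattern is a plain Python loop over input paths; a minimal sketch with made-up paths (on Databricks the same loop would typically run in a notebook cell).

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("file-loop").getOrCreate()

    # Hypothetical daily drop files; paths are illustrative only.
    input_paths = [
        "/mnt/raw/2024-01-01.csv",
        "/mnt/raw/2024-01-02.csv",
        "/mnt/raw/2024-01-03.csv",
    ]

    for path in input_paths:
        df = spark.read.option("header", "true").csv(path)
        # Per-file transformation would go here; this just drops empty rows.
        out = path.replace("/raw/", "/clean/").removesuffix(".csv")
        df.dropna(how="all").write.mode("overwrite").parquet(out)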
...Scalable Data Infrastructure: Conceptualize, design, and implement resilient data pipelines and scalable solutions utilizing Apache Spark and cloud-native services such as Azure Databricks, ensuring streamlined data processing and real-time analytics functionality at scale...
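One common way to deliver the "real-time analytics at scale" piece on Spark/Databricks is Structured Streaming; a sketch with an assumed JSON source, schema, and output paths (none of these come from the listing).

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("stream-demo").getOrCreate()

    # Hypothetical streaming source: JSON events landing in cloud storage.
    events = (spark.readStream
              .format("json")
              .schema("user_id STRING, amount DOUBLE, ts TIMESTAMP")  # assumed schema
              .load("/mnt/landing/events/"))

    # Per-minute aggregates; the watermark bounds state kept for late data.
    per_minute = (events
                  .withWatermark("ts", "10 minutes")
                  .groupBy(F.window("ts", "1 minute"), "user_id")
                  .agg(F.sum("amount").alias("amount")))

    # Write incrementally; checkpointing makes the pipeline restartable.
    query = (per_minute.writeStream
             .outputMode("append")
             .format("parquet")
             .option("path", "/mnt/gold/per_minute/")
             .option("checkpointLocation", "/mnt/chk/per_minute/")
             .start())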
...consumption by the data science team.
Skilled in ETL processes and tools.
Clear understanding of and experience with Python and PySpark, or Spark and Scala, along with Hive, Airflow, Impala, Hadoop, and RDBMS architecture.
Experience in writing Python programs and SQL queries....
...o HDP/CDP Cluster Installation (DEV, TEST, STAGING, PROD)
o Installation & configuration of the components: ZooKeeper, HDFS, Spark, Hive, Kerberos, Sentry, Anaconda, Hue, Kafka…
o HDP/CDP cluster maintenance, patching & upgrades
o Monitoring & Administration...
...preferably on AWS.
DevOps with Jenkins, shell scripting.
Must have experience / knowledge of cluster management frameworks (e.g., Spark), Kafka,
Elasticsearch, Docker, and databases; build-and-test (preferred).
Demonstrate skills in problem-solving and decision-...
...Git versioning, Docker).
~ Familiarity or experience with working on large data sets and distributed computing (e.g. Hive, Hadoop, Spark)
~ Working knowledge of Cloud platforms (e.g. AWS, Azure, GCP).
~ Excitement to collaborate with diverse stakeholders across the organisation...
...learn/Keras/Theano/TensorFlow and SQL
Strong programming skills
Prior experience in working with big data platforms like Hadoop, Spark, Hive, Pig
Good knowledge of statistical techniques
Knowledge of NLP techniques and image processing
Knowledge of AWS ML...
...responsibilities include extracting data, troubleshooting and maintaining the data lakehouse
Tech Stack
Python, SQL and NoSQL databases, Spark SQL, ADF, Databricks
Experience with Azure: ADLS, Databricks, SQL DW, Azure Functions, Serverless Architecture, ARM Templates,...
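A small sketch of the Spark-SQL-on-lakehouse pattern this stack implies; the table path, view name, and column names are assumptions.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("lakehouse-sql").getOrCreate()

    # Expose a lakehouse table to SQL; the path and columns are invented.
    spark.read.parquet("/mnt/lakehouse/sales").createOrReplaceTempView("sales")

    # Aggregate the last 30 days with plain Spark SQL.
    top_regions = spark.sql("""
        SELECT region, SUM(net_amount) AS revenue
        FROM sales
        WHERE order_date >= date_sub(current_date(), 30)
        GROUP BY region
        ORDER BY revenue DESC
        LIMIT 10
    """)
    top_regions.show()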
...pipeline creation and CI/CD maintenance.
- Migrate DataStage jobs to Snowflake and optimize performance.
- Work with HDFS, Hive, Kafka, and basic Spark.
- Develop Python scripts for data parsing, quality checks, and visualization (see the sketch below).
- Conduct unit testing and web application testing.
- ...
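For the "Python scripts for data parsing and quality checks" bullet, a minimal pandas-style sketch; the file name, columns, and checks are illustrative only.

    import pandas as pd

    # Hypothetical quality-check script for a parsed extract.
    df = pd.read_csv("extract.csv")

    checks = {
        "no_empty_ids": df["id"].notna().all(),
        "unique_ids": df["id"].is_unique,
        "amount_numeric": pd.to_numeric(df["amount"], errors="coerce").notna().all(),
        "row_count_sane": len(df) > 0,
    }

    # Fail loudly so a CI/CD pipeline step can catch bad extracts.
    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        raise SystemExit(f"Quality checks failed: {', '.join(failed)}")
    print(f"All {len(checks)} checks passed on {len(df)} rows.")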