Average salary: Rs 141,000/year
Search Results: 23 vacancies
...We are looking for an exceptional Spark/Scala engineer with 5+ years of experience who will be responsible for:
Experience: 5+ years
Location: Hyderabad & Indore
Responsibilities:
Implementing large-scale Spark applications and fine-tuning them at runtime
Design and implement...
...Data Engineer, we are looking for candidates who possess expertise in the following:
Databricks
Data Factory
SQL
Pyspark/Spark
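The SQL skill these listings ask for mostly means aggregation-style queries over tabular data. A minimal sketch, run here against an in-memory SQLite database; the `orders` table and its columns are invented for illustration:

```python
# Illustrative only: the kind of SQL aggregation data-engineering roles
# call for, run against an in-memory SQLite database (hypothetical schema).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("south", 10.0), ("south", 5.0), ("north", 7.5)],
)

# Aggregate revenue per region, largest total first
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM orders GROUP BY region ORDER BY total DESC"
).fetchall()
conn.close()
```

The same `GROUP BY`/`SUM` pattern carries over directly to Spark SQL or Databricks SQL, just over distributed tables instead of a local file.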
Roles and Responsibilities: As a part of our dynamic team, you will be responsible for:
Designing, implementing, and maintaining...
...uplift modeling etc.
• Experience with visualization tools such as Tableau, Power BI, Qlik, and BO
• Advanced experience with Azure, Spark, and Git, as well as a basic understanding of web application frameworks (Django, Flask, HTML, JavaScript, CSS, Ajax, jQuery, etc.)
•...
...engineering trucks, shuttle vans, electric carts) including checking oil, fluid levels, tire pressure/wear, charging batteries, and replacing spark plugs. Perform preventative maintenance on tools and equipment, including cleaning and lubrication. Maintain proper maintenance...
...processing software like Kafka
● Algorithms and software design optimized for large scale distributed software systems
● Experience with Spark/Pandas
● Experience with Google Cloud Platform/AWS
● Knowledge of other languages such as C++/Java
● Strong algorithmic...
...Classical Machine Learning, Deep Learning, NLP and computer vision.
• Experience with large-scale/Big Data technologies such as Hadoop, Spark, Hive, Impala, PrestoDB.
• Hands-on capability developing ML models using open-source frameworks in Python and R and applying them...
...on at least one of the three major cloud platforms: AWS, Azure, GCP
Hands-on expertise with data engineering tools across:
open source (Spark, Python),
cloud (EMR, Glue, Azure Data Factory),
commercial ETL tools (Informatica, Ab Initio, Fivetran)
Experience working with...
...operationalizing large-scale enterprise data solutions and applications using one or more of the AWS data and analytics services: Databricks, Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue, Athena. Leading and designing production data pipelines/ETL jobs from ingestion to...
...programming language experience in Java/Scala or Python.
Knowledge of the Hadoop ecosystem and strong hands-on experience with Hive and Spark.
Understanding of workflow orchestration tools like Oozie and Apache Airflow.
Experience working with monitoring and logging...
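What an orchestrator like Oozie or Airflow fundamentally provides is dependency-ordered execution of tasks. A toy stdlib sketch of that idea, with an invented four-task pipeline (real orchestrators add scheduling, retries, and monitoring on top):

```python
# Toy illustration (not Oozie/Airflow) of what a workflow orchestrator
# does: run tasks only after their declared upstream dependencies finish.
# The task names and dependency graph are made up for this example.
from graphlib import TopologicalSorter

# task -> set of upstream tasks it depends on
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

# static_order() yields tasks so every task follows its dependencies
order = list(TopologicalSorter(dag).static_order())
```

In Airflow the same graph would be declared with operators and `>>` dependencies inside a DAG definition file.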
...detailed design conforms to user expectations.
Design, build, and test data processing pipelines in a GCP environment using Python, Spark/PySpark, and Scala.
Provide support with application testing, UAT, and application migration in GCP.
Areas of expertise we are...
...DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, DataZone, etc.
~ Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, HBase, and other comparable tools and technologies
~ Understanding of designing analytical solutions leveraging AWS...
...defining and implementing large cloud-based solutions.
Big data engineering experience setting up data lakes (Hadoop, Hive, HDFS, Spark, APIs, Collibra, etc.)
Applicants must have working experience with AWS IaaS, PaaS, storage, networking, and databases; analyzing and identifying...
...techniques.
Experience in Cloud Computing.
Technical Skills -
Programming language - Java and Scala
Good understanding of Spark Internals
Good understanding of Unix Internals
Should have experience working on trading, telecom, gaming, or risk engines in...
...Profound programming language experience in Python.
Knowledge of the Hadoop ecosystem and strong hands-on experience with SQL, Hadoop, Hive, and Spark.
Must have an understanding of messaging services and their integrations.
Good to have experience in streaming technologies such as...
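The core idea behind streaming technologies like Spark Streaming or Kafka Streams is aggregating an unbounded event stream over time windows. A toy stdlib sketch of tumbling (fixed, non-overlapping) windows; the events and window size are made up, and real systems add event-time handling, state, and fault tolerance:

```python
# Toy illustration of tumbling-window aggregation, the core idea behind
# stream processors like Spark Streaming, Kafka Streams, or Flink.
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Count events per key within fixed, non-overlapping time windows."""
    windows = defaultdict(lambda: defaultdict(int))
    for timestamp, key in events:
        # Bucket each event into the window containing its timestamp
        window_start = (timestamp // window_secs) * window_secs
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in windows.items()}

# Hypothetical (timestamp, event-type) stream
events = [(0, "click"), (3, "click"), (5, "view"), (12, "click")]
counts = tumbling_window_counts(events, window_secs=10)
```

In Spark Structured Streaming the equivalent is a `groupBy` over a `window()` column on an unbounded DataFrame.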
...includes necessary features for implementation.
Drive adherence to coding standards, CI/CD processes, and continuous improvement efforts while staying up to date on the latest technologies.
If this position sparks your interest, please reach out to me at [email protected] ....
...# Data Processing: Design, develop, and maintain scalable and efficient data processing pipelines using technologies such as Apache Spark, Hive, and Hadoop.
# Programming Languages: Proficient in Python, Scala, SQL, and Shell Scripting for data processing, transformation...
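The processing model that Hadoop and Spark implement at scale is map/shuffle/reduce. A plain-Python sketch of that model on a classic word count; the input lines are invented for illustration:

```python
# Plain-Python sketch of the map/shuffle/reduce model that Hadoop and
# Spark implement at cluster scale; input text is invented.
from collections import defaultdict

def map_phase(lines):
    # map: emit a (word, 1) pair for every word
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # shuffle + reduce: group pairs by key and sum their counts
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

lines = ["spark and hadoop", "spark at scale"]
word_counts = reduce_phase(map_phase(lines))
```

In Spark the same computation is a one-liner over an RDD (`flatMap` then `reduceByKey`), with the shuffle handled by the framework across the cluster.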
...Experience with exploratory data analysis using tools like IPython Notebook, Pandas, matplotlib, etc.;
Familiarity with Hadoop pipelines using Spark/Kafka;
Familiarity with Git;
Familiar with Adobe Analytics (Omniture) or Google Analytics;
Digital marketing strategy including...
...Experience Required: Above 12 Years
Job Description:
Tech Stack: Java 1.6+, Spring Boot, Spring MVC, Hadoop, Hive, HDFS,
MapReduce, Spark Batch & Spark Streaming, Scala, Kafka
~ Experience developing enterprise-grade data integration solution
~ Good knowledge of Java...
...~ Experience in Azure PaaS services, e.g. Data Factory, Synapse, Fabric, Databricks, etc.
~ Good knowledge of and hands-on experience with Spark and Python (2 years).
~ 2 years with Power BI; should be well versed in building dashboards/reporting.
~ Must have worked...
...tools such as: Databricks, Sage Maker, Google Colab, Jupyter, etc
~ Hands-on experience with streaming tools such as Kafka, Flume, Spark Streaming, or Flink
~ The expectation is that you support the code and products that you and your team create
What You'll Do:...