Average salary: Rs 1,390,538 per year
Spark & Scala Developer, Exp: 7-13 yrs
Shift: Malaysian / European
Full time Contract Opportunity
Fully Remote
please share resume at ****@*****.***
Responsibilities:
Create Scala/Spark jobs for data transformation and aggregation
Produce unit...
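Several of these postings center on the same core task: Spark jobs that group and aggregate data. As a rough illustration of the shape of such a job (plain Python with hypothetical column names, no Spark cluster required), a groupBy-style aggregation looks like:

```python
from collections import defaultdict

# Hypothetical input rows, standing in for a DataFrame of sales events.
rows = [
    {"region": "south", "amount": 120.0},
    {"region": "north", "amount": 75.5},
    {"region": "south", "amount": 30.0},
]

def aggregate_by_region(rows):
    """Sum amounts per region -- the same shape as a Spark
    df.groupBy("region").agg(sum("amount")) job."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

print(aggregate_by_region(rows))  # {'south': 150.0, 'north': 75.5}
```

In a real Scala/Spark job the same logic would run distributed across executors; the per-key accumulation is the part the postings' "transformation and aggregation" phrase refers to.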
Rs 12 - 16 lakhs p.a.
...Description: Design, build and configure applications to meet business process and application requirements. Must-have skills: Apache Spark. Good-to-have skills: Microsoft SQL Server, Unix to Linux Migration, Data Warehouse ETL Testing, Amazon Web Services. Job Requirements:...
Rs 12 - 20 lakhs p.a.
...Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must-have skills: Apache Spark. Good-to-have skills: No Technology Specialization. Job Requirements / Key Responsibilities: 1. Set up and configure Databricks for an...
Rs 12 - 16 lakhs p.a.
...Description: Design, build and configure applications to meet business process and application requirements. Must-have skills: Apache Spark. Good-to-have skills: Hadoop. Job Requirements / Key Responsibilities: A. Create Scala/Spark jobs for data transformation and...
Rs 7 - 11 lakhs p.a.
...Description: Design, build and configure applications to meet business process and application requirements. Must-have skills: Apache Spark. Job Requirements / Key Responsibilities: A. Strong experience in creating Scala/Spark jobs for data transformation...
Rs 7 - 15 lakhs p.a.
...Description: Design, build and configure applications to meet business process and application requirements. Must-have skills: Apache Spark. Good-to-have skills: Python Programming Language. Job Requirements / Key Responsibilities: a. Lead the team and contribute to the...
...Required: 6 to 10 years. Location: Bangalore/Hyderabad/Pune. Work Experience Required: ML Engineer with strong Databricks experience, Spark, Python, SQL. About the Role: Are you a passionate Databricks engineer looking to join a top MNC and revolutionize the way data is processed...
...Location
Offer in hand if any:
Pan Card No:
Notice period/how soon you can join:
- Should have 4-8 years of experience in Java and Spark SQL, and have worked in a Unix/Linux environment.
- Should have good knowledge of Oracle database SQL queries and views.
- Experienced to...
Rs 20 - 21 lakhs p.a.
...Responsibilities
Roles and Responsibilities of a Cloudera & Spark Developer
Secure data management and portable cloud-native data analytics delivered in an open, hybrid data platform. Whether you're powering business-critical AI applications or real-time analytics at scale...
...languages commonly used in data engineering, such as Python, Java, or Scala, and experience with data processing frameworks such as Apache Spark, Apache Kafka, or Apache Beam.- Hands-on experience with cloud-based data platforms and services, such as AWS, Azure, or Google Cloud...
...Science, Engineering, or a related field.- Minimum of 5-7 years of experience in data engineering.- Expertise in big data technologies like Spark and Kafka.- Strong programming skills in Python.- Proven experience in designing and implementing real-time data solutions.- Excellent...
...We are looking for an exceptional Spark/Scala engineer with 5+ years of experience who will be responsible for:
Experience- 5+ Yrs
Location- Hyderabad & Indore
Responsibilities:
Implementing large-scale Spark applications and fine-tuning them at runtime
Design and implement...
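Runtime fine-tuning of large-scale Spark applications, as the responsibilities above mention, usually starts from submission-time settings. A sketch of a `spark-submit` invocation, with illustrative (not prescriptive) class name and sizing values:

```shell
# Illustrative values only; real sizing depends on cluster and data volume.
spark-submit \
  --class com.example.etl.Main \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 20 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.sql.shuffle.partitions=400 \
  app.jar
```

Executor count, cores, memory, and shuffle partitions are the usual first knobs when a job is slow or spilling; the right values come from observing the job, not from a template.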
Rs 10 - 16 lakhs p.a.
...Description: Design, build and configure applications to meet business process and application requirements. Must-have skills: Apache Spark. Good-to-have skills: Java Enterprise Edition, Apache Kafka, AAAP (Accenture Advanced Analytics Platform). Job Requirements: Key...
...client-facing projects, including working in close-knit teams- 3+ years of experience and interest in Big Data technologies (Hadoop / Spark / relational DBs)- 3+ years of experience working on projects in the cloud, ideally AWS or Azure- Data Warehousing experience with...
...stewards / data custodians / end users etc.- Demonstrated experience building efficient data pipelines using tools or technologies like Spark and cloud-native tools such as EMR, Azure Data Factory, Informatica, Ab Initio, etc.- Understand the need for both batch and stream ingestion...
...Job Description
Mission: As an Apache Spark™ Backline Engineer, you will help our customers succeed with the Databricks Data Intelligence Platform by resolving important technical customer escalations and supporting the support team. You will be the technical bridge between...
...working experience in Elasticsearch to join our team. The ideal candidate should have a minimum of 2 years of experience in Hadoop and Spark development. This role will involve working with large datasets and implementing data processing solutions using Hadoop and Spark,...
...working in an agile environment (e.g. user stories, iterative development, etc.).- Knowledge and working experience in Elasticsearch is mandatory.- 3-5 years of experience in Hadoop & Elasticsearch is mandatory.- 3-5 years of experience in Spark is mandatory. (ref:hirist.tech)
Exp: 4-9 yrs. Location: Mumbai & Bangalore. Notice: Immediate to max 30 days. Role Profile:- Minimum 4 years of experience, of which 3+ years of proven experience working with the Scala programming language and its ecosystem- Strong understanding of functional programming concepts and design patterns...
...Designing and implementing data pipelines using Azure Data Factory (ADF).- Developing and maintaining data processing scripts using Python, Spark, and Scala.- Building and optimizing data storage solutions using Azure Data Lake Storage (ADLS), Blob Storage, and Synapse.-...
Skills: Hadoop, Python, Spark, PySpark, ETL (Extract, Transform, Load)
Roles & Responsibilities:
- Data Ingestion: Develop and maintain data pipelines for ingesting raw data from various sources into the Hadoop ecosystem.
- Data Processing: Utilize Python and Spark to process...
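The ingestion/processing split described above can be sketched as a minimal extract-transform-load pass (plain Python, with a made-up CSV feed standing in for files landed in the Hadoop ecosystem):

```python
import csv
import io
import json

# Hypothetical raw CSV feed, standing in for landed source files.
raw_csv = """id,name,score
1,asha,91
2,ravi,
3,mei,78
"""

def extract(text):
    """Extract: parse the raw CSV into dict records."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(records):
    """Transform: drop rows with missing scores and cast types."""
    return [
        {"id": int(r["id"]), "name": r["name"], "score": int(r["score"])}
        for r in records
        if r["score"]
    ]

def load(records):
    """Load: serialize to JSON lines, a common landing format."""
    return "\n".join(json.dumps(r) for r in records)

pipeline_output = load(transform(extract(raw_csv)))
print(pipeline_output)
```

In the advertised role each stage would be a Spark job over HDFS paths rather than in-memory strings, but the extract/transform/load division is the same.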
...Engineer. Location: Bangalore. Experience: 3+ yrs. Responsibilities:- Develop and maintain scalable data pipelines for batch processing using Apache Spark in Big Data projects.- Utilize the Scala programming language to implement efficient data processing solutions.- Collaborate with cross-...
Job Description :- Design and develop real-time data ingestion pipelines using Databricks and Spark Streaming to enable timely processing of large volumes of data.- Implement complex data transformations and aggregations to extract actionable insights from streaming data sources...
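The streaming aggregations this role describes are typically windowed. A toy sketch of a tumbling-window count (plain Python over hypothetical events; in Spark Structured Streaming this would be a groupBy over a window column):

```python
from collections import defaultdict

# Hypothetical stream of (event_time_seconds, user) events.
events = [(3, "a"), (7, "a"), (12, "b"), (14, "a"), (21, "b")]

def tumbling_window_counts(events, window_size):
    """Count events per tumbling window -- the same shape as a
    Structured Streaming groupBy(window(...)).count() aggregation."""
    counts = defaultdict(int)
    for t, _user in events:
        # Each event belongs to exactly one fixed-size window.
        window_start = (t // window_size) * window_size
        counts[window_start] += 1
    return dict(counts)

print(tumbling_window_counts(events, 10))  # {0: 2, 10: 2, 20: 1}
```

A real streaming job would additionally need watermarking to bound state for late-arriving events; this sketch only shows the window assignment itself.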
...Skills: Python, PySpark, Azure Databricks, Data Factory, Data Lake, SQL.- Deep knowledge and experience working with Python/Scala and Spark- Experienced in Azure Data Factory, Azure Databricks, Azure Data Lake, Blob Storage, Delta Lake, Airflow.- Experience working with...
...our team.- You will play a key role in designing, developing, and maintaining large-scale data processing pipelines using Python and Spark/PySpark.- Your expertise in distributed computing frameworks and DevOps tools will be instrumental in building efficient and scalable...
...range of enterprise technology transformations and solutions at some of the world's leading multinational organizations. Skills: Apache Spark. Location: Bangalore. Years of Experience: 7.5 yrs. As an Application Developer, you will be responsible for designing, building, and...
...-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with 3rd parties - Spark, EMR, DynamoDB, RedShift, Kinesis, Lambda, and Glue.- Experience with AWS cloud data lake for the development of real-time or near-real...
...is a global investment banking company headquartered in the US with 75,000+ employees globally, looking for a Scala/Spark developer to join their Bangalore regional team. Note: Looking for candidates who can join within 30 days. Work Mode: Hybrid. Experience: 5 to 8 years. Some...
Job Description :As a Kafka Architect specializing in Spark and Apache Server, you will play a key role in designing, architecting, and implementing real-time data streaming solutions using Apache Kafka, Apache Spark, and related technologies. You will work closely with our...
...engineering, with a strong focus on large-scale data platforms and data products.- Strong experience with big data technologies such as Hadoop, Spark, and Hive- Proficiency in either Scala or Java programming language.- Experience leading and managing teams of data engineers-...