Average salary : Rs 1,405,554 per year
...client-facing projects, including working in close-knit teams
- 3+ years of experience and interest in Big Data technologies (Hadoop / Spark / Relational DBs)
- 3+ years of experience working on projects within the cloud, ideally AWS or Azure
- Data Warehousing experience with...
Experience : 4-9 yrs
Location : Mumbai & Bangalore
Notice : Immediate to max 30 days
Role Profile :
- Minimum 4 yrs of experience, of which 3+ yrs is proven experience with the Scala programming language and its ecosystem
- Strong understanding of functional programming concepts and design patterns...
...We are looking for an exceptional Spark/Scala engineer with 5+ yrs of experience who will be responsible for:
Experience- 5+ Yrs
Location- Hyderabad & Indore
Responsibilities:
Implementing large-scale Spark applications and fine-tuning them at runtime
Design and implement...
...Design, develop, and maintain data processing pipelines using Apache Spark and Scala
Optimize Spark jobs for performance, scalability, and reliability
Work closely with data engineers and data scientists to implement data-driven solutions
Develop and maintain ETL...
...Engineer
Location : Bangalore
Experience : 3+ yrs
Responsibilities :
- Develop and maintain scalable data pipelines for batch processing using Apache Spark in Big Data projects.
- Utilize the Scala programming language to implement efficient data processing solutions.
- Collaborate with cross-...
...learning, statistical analysis, and data visualization tools and technologies
- Experience with big data processing frameworks, such as Spark or Hadoop
- Strong analytical and problem-solving skills, with the ability to analyze complex data sets and identify trends and...
...IPO
- Headquarter Location : Mumbai
- Nature of Offering : Product & Services
- Founding Year : 1991
- No. of Employees : 5001-10000
Role : Spark Developer
Experience : 6+ years
Location : Bangalore, Hyderabad
Primary Skills : Spark, Apache Spark
Key Responsibilities :
- Design, develop...
...range of enterprise technology transformations and solutions at some of the world's leading multinational organizations.
Skills : Apache Spark
Location : Bangalore
Years of Experience : 7.5 yrs
As an Application Developer, you will be responsible for designing, building, and...
...developing, and deploying big data solutions on-premises or in the cloud.
- In-depth knowledge of distributed processing frameworks like Apache Spark.
- Strong understanding of open-source big data technologies, including Hadoop ecosystem tools.
- Familiarity with data warehousing...
...designing, building, and maintaining data pipelines, with expertise in setting up and optimizing data processing frameworks such as Apache Spark or similar, Kafka/Pulsar, and Data Lakes.
Responsibilities : Design and develop robust, scalable, and efficient data pipelines to collect...
...engineering workflow
- Experience with S3, EC2, EMR, Lambda, Glue, Athena, Airflow, Redshift.
- Strong SQL and Scala skills using the Spark framework.
- Experience with enterprise data warehousing implementations using batch and real-time systems.
- Expertise in working with...
...our team.
- You will play a key role in designing, developing, and maintaining large-scale data processing pipelines using Python and Spark/PySpark.
- Your expertise in distributed computing frameworks and DevOps tools will be instrumental in building efficient and scalable...
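Pipelines like the ones these listings describe are, at their core, chains of parse/filter/aggregate transformations. The shape can be sketched in plain Python (a simplified, single-machine sketch: in PySpark the same chain would run over an RDD or DataFrame across a cluster; the record format, field names, and sample data are illustrative assumptions, not taken from any listing above):

```python
# Single-machine sketch of the transform stage of a data pipeline.
# In PySpark, `raw_lines` would be an RDD/DataFrame read from HDFS or S3,
# and the same parse/filter/aggregate chain would run across executors.
from collections import defaultdict

def parse_record(line):
    """Hypothetical parser for a 'user,action,duration_ms' record.
    Returns None for malformed rows so they can be filtered out."""
    parts = line.split(",")
    if len(parts) != 3 or not parts[2].isdigit():
        return None
    user, action, duration_ms = parts
    return {"user": user, "action": action, "duration_ms": int(duration_ms)}

raw_lines = [
    "u1,click,120",
    "u2,view,300",
    "u1,click,80",
    "malformed-row",
]

# Parse, drop bad rows, keep clicks, and aggregate duration per user --
# the same chain Spark would execute lazily and in parallel.
events = [e for e in map(parse_record, raw_lines) if e is not None]
clicks = [e for e in events if e["action"] == "click"]

click_time_per_user = defaultdict(int)
for e in clicks:                      # groupBy + sum (a shuffle step in Spark)
    click_time_per_user[e["user"]] += e["duration_ms"]

print(dict(click_time_per_user))     # {'u1': 200}
```

In a real PySpark job the malformed-row handling and the aggregation would typically move into `DataFrame` operations so Spark can optimize and distribute them.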
...Experience : 7-10 years
Location : Anywhere in India
Education : BE, B.Tech, any tech graduate
Must-Have Technical Skills :
- 3+ years of Spark or Scala
- 2+ years of Hadoop/Big Data using tools like Hive, Spark, PySpark, Scala, and RDBMS/SQL
Strongly Preferred : GCP, including...
...construct, test, and maintain data architectures (e.g., databases, large-scale processing systems)
- Build high-quality data pipelines using Spark/Scala
- Implement best software engineering practices, including Git version control, code reviews, and unit testing
- Collaborate with...
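One concrete payoff of the unit-testing practice these listings ask for: when pipeline logic is factored into pure functions rather than written inline in Spark jobs, it can be tested without a cluster. A minimal sketch, shown in Python (the `normalize_country` function and its mapping are illustrative assumptions):

```python
# Pipeline logic factored into a pure function: in production it could be
# applied per row (e.g. via a DataFrame UDF); in CI it is tested directly.

CANONICAL = {
    "IN": "IN", "IND": "IN", "INDIA": "IN",
    "US": "US", "USA": "US", "UNITED STATES": "US",
}

def normalize_country(raw: str) -> str:
    """Map free-form country strings to a canonical code; pass unknowns through."""
    key = raw.strip().upper()
    return CANONICAL.get(key, key)

# Unit tests are then plain assertions, runnable in any CI pipeline:
assert normalize_country(" india ") == "IN"
assert normalize_country("usa") == "US"
assert normalize_country("FR") == "FR"
```

The same pattern works in Scala: keep the transformation as a pure function and wrap it in a UDF only at the job boundary.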
...are seeking a talented and motivated Data Engineer with experience in stream processing, particularly utilizing Flink and Kafka, alongside expertise in Spark and AWS technologies. As a Data Engineer, you will play a crucial role in designing, implementing, and maintaining robust stream processing...
..., including working in close-knit teams
- Overall 5+ years of experience, with at least 3+ years in Big Data technologies (Hadoop / Spark / Relational DBs) and similar experience working on projects within the cloud, ideally AWS or Azure
- Data Warehousing experience with...
...CD pipelines.
- Work with Kubernetes.
- Basic understanding of MS Azure and Azure DevOps is a plus.
- Hands-on experience with Scala and Spark.
- Participate in key meetings with stakeholders like Quantexa, the Data team, and the Delivery team.
- Act as escalation point for technical...
Location : Chennai, Pune, Noida, Kochi, Hyderabad, Trivandrum
Job Description :
- Proficient in SQL, Spark, Scala, and AWS, with a strong command of these technologies.
- Minimum 6-7 years of relevant experience in Spark and SQL, plus 2-3 years of hands-on practice in AWS.
- ...
Job Description :
We are seeking a highly skilled and motivated Data Lead with expertise in Spark, Scala, Kafka, Big Data, and Batch Processing. As a Data Lead, you will play a key role in leading and managing the end-to-end data processing pipeline, ensuring the reliability...
Job Description :
As a Spark Engineer, you will help customers succeed with the Data Intelligence platform by resolving important technical escalations from customers and the support team. You will be the technical bridge between support and engineering and the first line...
Job Description :
Mandatory Skills : Microsoft Azure, Hadoop, Spark, Databricks, Airflow, Kafka, PySpark
Requirements :
- Experience working with distributed technology tools for developing batch and streaming pipelines using SQL, Spark, Python, Airflow, Scala, Kafka
- Experience...
Must-Have Skills : Spark Streaming, Kafka, AWS, Python, Data Engineering, PySpark, CI/CD, Kubernetes, Docker, DynamoDB
Job Description :
1) Proven experience as a Senior Data Engineer.
2) A well-rounded engineer with a good appetite for learning and ramping up on cutting-edge...
...good experience in programming languages like Java, Python, or Scala.
- Hands-on experience with the Big Data stack, including Hadoop, MapReduce, Spark, HBase, and Elasticsearch.
- Good understanding of programming principles and development practices like check-in policy, unit testing,...
...Gurgaon / Bangalore
Job Description
Experience with design and coding across one or more platforms and languages (e.g. Java, Python/Spark/SQL) as appropriate
Hands-on expertise with application design, software development, and automated testing
Proficient in Big Data...
Skills :
- Hadoop
- Python
- Spark
- PySpark
- ETL (Extract, Transform, Load)
Roles & Responsibilities :
- Data Ingestion : Develop and maintain data pipelines for ingesting raw data from various sources into the Hadoop ecosystem.
- Data Processing : Utilize Python and Spark to process...
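The ingestion-then-processing split in this listing is the classic ETL shape. A minimal single-machine sketch of the three stages in Python (the schema, sample data, and in-memory "sink" are illustrative assumptions; in production the extract and load steps would target HDFS/Hive or S3, and the transform would run in Spark):

```python
import csv
import io

# --- Extract: read raw CSV (stands in for ingesting files into HDFS) ---
raw_csv = io.StringIO("id,amount\n1,10.5\n2,bad\n3,4.0\n")
rows = list(csv.DictReader(raw_csv))

# --- Transform: cleanse and convert types, dropping malformed rows ---
def transform(row):
    try:
        return {"id": int(row["id"]), "amount": float(row["amount"])}
    except ValueError:
        return None                     # quarantine malformed records

clean = [r for r in map(transform, rows) if r is not None]

# --- Load: write to a sink (a plain list here; Hive/Parquet in production) ---
sink = []
sink.extend(clean)

print(sink)   # [{'id': 1, 'amount': 10.5}, {'id': 3, 'amount': 4.0}]
```

The row with `amount = "bad"` is dropped in the transform stage; a real pipeline would usually route such records to a quarantine table rather than discard them silently.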
Summary :
We are seeking a highly skilled consultant with expertise in Spark/Scala and Python to join our team. You will play a key role in developing and maintaining big data applications using Spark and Scala, with a focus on functional programming principles. Additionally,...
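The "functional programming principles" that listings like this one name translate directly into pipeline code: pure functions, immutability, and composition. A small illustrative sketch in Python (Scala's collections API expresses the same ideas with `map`/`filter`/`foldLeft`):

```python
from functools import reduce

# Pure, single-purpose transformations (no mutation, no side effects)
def cleanse(s: str) -> str:
    return s.strip().lower()

def tokenize(s: str) -> list:
    return s.split()

def to_tokens(s: str) -> list:
    # Composition: build the bigger transformation out of the small ones
    return tokenize(cleanse(s))

# Fold into a fresh accumulator at each step instead of mutating shared state
word_counts = reduce(
    lambda acc, w: {**acc, w: acc.get(w, 0) + 1},
    to_tokens("  Spark SCALA spark "),
    {},
)

print(word_counts)   # {'spark': 2, 'scala': 1}
```

Because each function is pure, the pieces can be reordered, tested in isolation, and handed to a distributed engine like Spark without worrying about hidden state.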
...Production environment.
Should have good working experience with :
- Hadoop ecosystem (HDFS, Hive, YARN, file formats like Avro/Parquet)
- Spark
- Batch processing
- Setting up ETL pipelines
- Python or Java programming language is mandatory
- Worked in Data Lake, Data...
...concepts and 2+ years applied experience
Hands-on experience in designing, developing, and testing data applications using Python/Spark, with technical experience in large multi-terabyte warehouse and Data Lake/Lakehouse systems.
Experience across the data lifecycle...
Rs 7 - 11 lakhs p.a.
...Description : Design, build, and configure applications to meet business process and application requirements.
Must-Have Skills : Apache Spark
Good-to-Have Skills :
Job Requirements :
Key Responsibilities :
- Strong experience in creating Scala Spark jobs for data transformation...
Rs 12 - 20 lakhs p.a.
...Description : Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-Have Skills : Apache Spark
Good-to-Have Skills : No Technology Specialization
Job Requirements :
Key Responsibilities :
1. Set up and configure Databricks for an...