Average salary: Rs 1,384,714 per year
Search Results: 37,866 vacancies
...Role: Python PySpark Developer
Experience: 3+ Years
Location: Bangalore
Job Description
Hands-on experience in Python development
Hands-on experience with PySpark and JavaScript
Excellent programming skills in Python with NumPy and Pandas
Sound technical knowledge...
Job Role: Sr Python Developer
Skills: PySpark / Flink / Apache Spark / Scala
Experience: 6+ years
Location: Pan India
JD:
- Must have hands-on experience in Python.
- Develop pipeline objects using PySpark / Flink / Apache Spark / Scala.
- Candidates...
MUST HAVE
• Minimum of 4 to 8 years of Azure DE experience.
• In-depth technical knowledge of tools like Azure Data Factory, Databricks, Azure Synapse, SQL DB, ADLS etc.
• Experience in collaborating with business stakeholders to identify and meet data requirements
• ...
...Job Title: Big Data Engineer (PySpark Developer)
We are seeking a highly skilled Big Data Engineer with expertise in PySpark to join our dynamic team. As a Big Data Engineer, you will be responsible for designing, developing, and maintaining our big data infrastructure...
...AM-Pyspark Developer - ANA009537
With a startup spirit and 115,000+ curious and courageous minds, we have the expertise to go deep with the world’s biggest brands—and we have fun doing it. We dream in digital, dare in reality, and reinvent the ways companies work to make...
...Python PySpark Developer_Pan India (Mumbai/Pune/Bangalore/Chennai/Kolkata/Delhi NCR) (Work from Office)_Full-Time_Direct Hire
Job Title: Python Pyspark Developer
Experience: 5 to 12 years
Locations: Pan India (Mumbai/Pune/Bangalore/Chennai/Kolkata/Delhi NCR)...
...Strong expertise in Python programming, particularly in object-oriented programming, is essential for developing machine learning models and algorithms efficiently.
- PySpark: Experience with PySpark, which is a Python API for Apache Spark, is important for handling large-...
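As a rough illustration of the filter/group/aggregate pattern these listings refer to, here is a minimal plain-Python sketch (with made-up sample data); in PySpark the same steps would be distributed across a cluster via DataFrame operations such as `filter`, `groupBy`, and `agg`.

```python
# Plain-Python sketch of a filter -> group -> aggregate pipeline stage.
# Sample records are hypothetical; PySpark would run these steps in parallel.
from collections import defaultdict

records = [
    {"city": "Mumbai", "salary": 1200000},
    {"city": "Pune", "salary": 900000},
    {"city": "Mumbai", "salary": 1500000},
]

totals, counts = defaultdict(int), defaultdict(int)
for r in records:
    if r["salary"] >= 1000000:            # filter step
        totals[r["city"]] += r["salary"]  # keyed accumulation (the "shuffle")
        counts[r["city"]] += 1

# Final aggregation: average salary per city among the filtered rows.
avg_by_city = {city: totals[city] / counts[city] for city in totals}
print(avg_by_city)  # → {'Mumbai': 1350000.0}
```

The PySpark equivalent expresses the same logic declaratively, letting Spark plan and distribute the work instead of looping in the driver process.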
PySpark, Glue, Hudi, Python 3.x (preferred), Node.js, AWS, REST API, Postgres
Qualifications
Job Responsibilities
Years of Experience: 4 to 6 years
..., Chennai
Interview mode: Walk-in drive
Requirements:
Mandatory skills: 8+ years of experience in data engineering, with 5+ years on PySpark/NoSQL (mandatory)
1. Should be strong in PySpark
2. Should have hands-on experience in the MWAA (Airflow) / AWS EMR (Hadoop, Hive) framework
3. ...
...standards on Big Data with a focus on AWS cloud technologies and Python/PySpark from an application development perspective.
- Be aware of DevOps... ...for unit testing and functional testing when the application is developed.
- Conduct and participate in coding and design reviews.
- Manage...
...analytical and problem-solving skills, paired with the ability to develop creative and efficient solutions; tolerance in dealing with bad... ...experience as a data engineer with extensive working experience using PySpark, advanced SQL, Snowflake, a complex understanding of SQL and...
...dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining data solutions on the Microsoft Azure platform.... ...:
- Design, develop, and maintain data pipelines using PySpark on the Microsoft Azure platform.
- Collaborate with cross-functional...
...learn new technologies quickly and excel in a fast-paced environment. In this position, you will be a key contributor and partner in developing analytics and insights that global executive management teams and business leaders will use to define Ad strategies and deep dive...
...Job Role: Software Developer (Python, PySpark, AWS, DevOps)
Experience: 5 years
Mandatory Skills: Python, PySpark, AWS, DevOps
Location: Pune
Work mode: Hybrid
Budget: 16 LPA
Education: Graduate
Technology Skills:
Strong knowledge of Python/PySpark and Bash...
...organizing Agile teams with minimal supervision.
Responsibilities:
- Develop and maintain data processing pipelines to integrate and process... ...development.
Big Data Processing: Experience with tools like PySpark for processing and analyzing large-scale data sets.
Terabytes: ...
...working experience in building scalable ETL data pipelines using Scala, Python, PySpark, Hadoop, Apache Spark, Spark SQL, Kafka, Nill, and incremental... ...like Oracle and Netezza, and have strong SQL knowledge.
- Developing scalable streaming solutions based on the Hadoop/Big Data stack, such...
Job Description:
- 7 years' experience in developing scalable Big Data applications or solutions on distributed platforms.
- Able to partner with others in solving complex problems by taking a broad perspective to identify innovative solutions.
- Strong skills building positive...
...knowledge in object-oriented Python programming.
- Proficient in pushing code to Git for version control.
- Strong expertise in Databricks and PySpark for data processing and analysis.
- 4 to 5 years of experience in data science or a related field.
- Knowledge of A/B testing...
...collection and crediting.
- Contribute to analysis plan design, develop underlying data pipelines, execute analyses, and summarize results... ...tell a story from data through analyses.
2. Programming in Python, PySpark, and SQL for data pipeline development used to support analytics
3. ...
...and maintain data pipelines and impactful data products in close cooperation with users and business owners
Ensure solutions are developed and implemented following standards, using high-quality code, focusing on simplicity, performance and maintainability
Analyse data...